Belitsky, A V
2016-01-01
The Operator Product Expansion for the null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multiparticle pentagon transitions, which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unravelled over the past several years, culminating in a complete description of pentagons as exact functions of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from the internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and by fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.
Isoperimetric Pentagonal Tilings
Chung, Ping Ngai; Li, Yifei; Mara, Michael; Morgan, Frank; Plata, Isamar Rosa; Shah, Niralee; Vieira, Luis Sordo; Wikner, Elena
2011-01-01
We identify least-perimeter unit-area tilings of the plane by convex pentagons, namely tilings by Cairo and Prismatic pentagons, find infinitely many, and prove that they minimize perimeter among tilings by convex polygons with at most five sides.
Dimensionally regulated pentagon integrals
Bern, Z; Kosower, D A
1994-01-01
We present methods for evaluating the Feynman parameter integrals associated with the pentagon diagram in 4-2 epsilon dimensions, along with explicit results for the integrals with all masses vanishing or with one non-vanishing external mass. The scalar pentagon integral can be expressed as a linear combination of box integrals, up to O(epsilon) corrections, a result which is the dimensionally-regulated version of a D=4 result of Melrose, and of van Neerven and Vermaseren. We obtain and solve differential equations for various dimensionally-regulated box integrals with massless internal lines, which appear in one-loop n-point calculations in QCD. We give a procedure for constructing the tensor pentagon integrals needed in gauge theory, again through O(epsilon^0).
1983-01-01
sophisticated, modern weaponry." Correspondent Roberto Suro, then Time's man at the Pentagon, spent five weeks tracking Defense Secretary Weinberger.
On the Threshold of Maximum-Distance Separable Codes
Kindarji, Bruno; Chabanne, Hervé
2010-01-01
Starting from a practical use of Reed-Solomon codes in a cryptographic scheme published at Indocrypt'09, this paper deals with the threshold of linear $q$-ary error-correcting codes. The security of this scheme is based on the intractability of polynomial reconstruction when there is too much noise in the vector. Our approach switches from this paradigm to an information-theoretical point of view: is there a class of elements that are so far away from the code that the list size is always superpolynomial? Or, dually speaking, is maximum-likelihood decoding almost surely impossible? We relate this issue to the decoding threshold of a code and show that when the minimal distance of the code is high enough, the threshold effect is very sharp. In a second part, we give explicit lower bounds on the threshold of Maximum-Distance Separable codes such as Reed-Solomon codes, and compute the threshold for the toy example that motivates this study.
A note on generalized pentagons
Brandt, Stephan
2010-01-01
Thomassen introduced the notion of a generalized pentagon and proved that the chromatic number of a triangle-free graph with n vertices and minimum degree at least cn, c > 1/3, is at most 2(3c-1)^(-(4c-1)(3c-1)), the first bound independent of the order n. We present a short proof of the stronger upper bound (3c-1)^(-1), again based on generalized pentagons. © 2010 Elsevier B.V.
On factorization of multiparticle pentagons
A.V. Belitsky
2015-08-01
We address the near-collinear expansion of multiparticle NMHV amplitudes, namely heptagons and octagons, in the dual language of null polygonal super Wilson loops. In particular, we verify multiparticle factorization of charged pentagon transitions in terms of pentagons for single flux-tube excitations within the framework of the refined operator product expansion. We find perfect agreement with available tree and one-loop data.
Nonsinglet pentagons and NMHV amplitudes
A.V. Belitsky
2015-07-01
Scattering amplitudes in maximally supersymmetric gauge theory receive a dual description in terms of the expectation value of the super Wilson loop stretched on a null polygonal contour. This makes the analysis amenable to nonperturbative techniques. Presently, we elaborate on a refined form of the operator product expansion in terms of pentagon transitions to compute twist-two contributions to NMHV amplitudes. To start with, we provide a novel derivation of scattering matrices starting from Baxter equations for flux-tube excitations propagating on magnon background. We propose bootstrap equations obeyed by pentagon form factors with nonsinglet quantum numbers with respect to the R-symmetry group and provide solutions to them to all orders in 't Hooft coupling. These are then successfully confronted against available perturbative calculations for NMHV amplitudes to four-loop order.
Regular Pentagons and the Fibonacci Sequence.
French, Doug
1989-01-01
Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio.
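The two sequences named in this abstract share one recurrence and one limiting ratio, which also appears in the pentagon itself as the diagonal-to-side ratio. A minimal sketch of these standard facts (not drawn from the article's actual construction):

```python
import math

def seq(a, b, n):
    """Return the first n terms of the recurrence x[k] = x[k-1] + x[k-2]."""
    terms = [a, b]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms

phi = (1 + math.sqrt(5)) / 2   # the golden ratio

fib = seq(1, 1, 20)            # Fibonacci: 1, 1, 2, 3, 5, ...
lucas = seq(2, 1, 20)          # Lucas:     2, 1, 3, 4, 7, ...

# Ratios of consecutive terms converge to the golden ratio ...
print(fib[-1] / fib[-2], lucas[-1] / lucas[-2], phi)

# ... which is exactly the diagonal-to-side ratio of a regular pentagon:
# diagonal / side = 2 * cos(36 degrees)
print(2 * math.cos(math.pi / 5))
```

Both ratios approach φ ≈ 1.618, and 2·cos(36°) reproduces φ exactly, which is how nested regular pentagons encode these sequences.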
Volkov's Pentagon for the Modular Quantum Dilogarithm
Faddeev, L D
2012-01-01
The new form of pentagon equations suggested by Volkov for the $q$-exponential on the basis of formal series is derived within the Hilbert space framework for the modular version of the quantum dilogarithm.
Maximum Time Separation of Events in Cyclic Systems with Linear and Latest Timing Constraints
Jin, Fen; Hulgaard, Henrik; Cerny, Eduard
1998-01-01
The determination of the maximum time separations of events is important in the design, synthesis, and verification of digital systems, especially in interface timing verification. Many researchers have explored solutions to the problem with various restrictions: a) on the type of constraints, and b) on whether the events in the specification are allowed to occur repeatedly. When the events can occur only once, the problem is well solved. There are fewer concrete results for systems where the events can occur repeatedly. We extend the work by Hulgaard et al. for computing the maximum...
Experimental Realization of a Quantum Pentagonal Lattice
Yamaguchi, Hironori; Okubo, Tsuyoshi; Kittaka, Shunichiro; Sakakibara, Toshiro; Araki, Koji; Iwase, Kenji; Amaya, Naoki; Ono, Toshio; Hosokoshi, Yuko
2015-01-01
Geometric frustration, in which competing interactions give rise to degenerate ground states, potentially induces various exotic quantum phenomena in magnetic materials. Minimal models comprising triangular units, such as triangular and Kagome lattices, have been investigated for decades to realize novel quantum phases, such as quantum spin liquid. A pentagon is the second-minimal elementary unit for geometric frustration. The realization of such systems is expected to provide a distinct platform for studying frustrated magnetism. Here, we present a spin-1/2 quantum pentagonal lattice in the new organic radical crystal α-2,6-Cl2-V [=α-3-(2,6-dichlorophenyl)-1,5-diphenylverdazyl]. Its unique molecular arrangement allows the formation of a partially corner-shared pentagonal lattice (PCPL). We find a clear 1/3 magnetization plateau and an anomalous change in magnetization in the vicinity of the saturation field, which originate from frustrated interactions in the PCPL. PMID:26468930
Mohammad H. Radfar
2006-11-01
We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
Dürer-pentagon-based complex network
Rui Hou
2016-04-01
A novel Dürer-pentagon-based complex network was constructed by adding a centre node. The properties of the complex network, including the average degree, clustering coefficient, average path length, and fractal dimension, were determined. The proposed complex network is small-world and fractal.
Constructing I[subscript h] Symmetrical Fullerenes from Pentagons
Gan, Li-Hua
2008-01-01
Twelve pentagons are sufficient and necessary to form a fullerene cage. According to this structural feature of fullerenes, we propose a simple and efficient method for the construction of I[subscript h] symmetrical fullerenes from pentagons. This method does not require complicated mathematical knowledge; yet it provides an excellent paradigm for…
A graphene composed of pentagons and octagons
Chi-Pui Tang
2012-12-01
We report a possible stable structure of graphene on the basis of first-principles calculations. This possible two-dimensional (2D) structure consists of pentagons and octagons (PO) and is likely to be formed from ordinary graphene by periodically inserting specific defects. Its density is 2.78 atoms/Å² and the cohesive energy per atom is −8.96 eV, slightly higher than that of graphene. The calculation indicates that PO-graphene behaves like a 2D anisotropic metal. The dispersion relation of electrons near the Fermi surface shows a significant flat segment along one direction and linear behavior in different regions of the Brillouin zone. If the growth of samples is successful, PO-graphene could not only be used as an anisotropic conductor in practical applications, but could also serve as a good sample for experiments that need 2D anisotropic materials.
2003-03-01
THE UNDER SECRETARY OF DEFENSE (PERSONNEL AND READINESS), 4000 Defense Pentagon, Washington, DC 20301-4000, Feb 20, 2003. ... Base, Delaware Port Mortuary. This expert discussed the entire mortuary and identification process, including the importance of DNA specimens and ... adolescent counseling, family counseling, consultation services and referral for longer-term follow-up counseling. The staff also attended all special ...
Frequency scanning antenna arrays with pentagonal dipoles of different impedances
Bošković Nikola
2015-01-01
In this work we present the benefits of using pentagonal dipoles as radiating elements instead of classical printed dipoles in the design of frequency-scanning antenna arrays. We investigate how the impedance of pentagonal dipoles, which can be varied over a wide range, influences the overall characteristics of a uniform antenna array. Some very important antenna characteristics, such as side-lobe level, gain and scanning angle, are compared for three different antenna arrays consisting of identical pentagonal dipoles with impedances of 500 Ω, 1000 Ω and 1500 Ω. [Project of the Ministry of Science of the Republic of Serbia, no. TR-32024 and no. III-45016]
A maximum entropy approach to separating noise from signal in bimodal affiliation networks
Dianati, Navid
2016-01-01
In practice, many empirical networks, including co-authorship and collocation networks, are unimodal projections of a bipartite data structure where one layer represents entities, the second layer consists of a number of sets representing affiliations, attributes, groups, etc., and an inter-layer link indicates membership of an entity in a set. The edge weight in the unimodal projection, which we refer to as a co-occurrence network, counts the number of sets to which both end-nodes are linked. Interpreting such dense networks requires statistical analysis that takes into account the bipartite structure of the underlying data. Here we develop a statistical significance metric for such networks based on a maximum entropy null model which preserves both the frequency sequence of the individuals/entities and the size sequence of the sets. Solving the maximum entropy problem is reduced to solving a system of nonlinear equations for which fast algorithms exist, thus eliminating the need for expensive Monte-Carlo sampling.
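The co-occurrence edge weight this abstract defines (the number of sets shared by two entities) can be sketched directly. The names and sets below are invented for illustration, and the maximum entropy null model itself is not reproduced here:

```python
from itertools import combinations
from collections import Counter

# Toy bipartite data: each "set" (e.g. a paper) lists its member entities
# (e.g. authors). All names are made up.
affiliations = [
    {"alice", "bob"},
    {"alice", "bob", "carol"},
    {"bob", "carol"},
    {"alice", "dave"},
]

# Unimodal projection: the weight of edge (u, v) counts the number of sets
# containing both u and v, as in the abstract's co-occurrence network.
weights = Counter()
for members in affiliations:
    for u, v in combinations(sorted(members), 2):
        weights[(u, v)] += 1

print(weights[("alice", "bob")])   # they co-occur in two sets
```

A significance metric of the kind the paper develops would then compare each observed weight against its distribution under a null model preserving entity frequencies and set sizes.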
The concept of 'giftedness': a pentagonal implicit theory.
Sternberg, R J
1993-01-01
This paper presents a pentagonal implicit theory of giftedness and a set of data testing the theory. The exposition is divided into five parts. First, I discuss what an implicit theory is and why such theories are important. Second, I describe the pentagonal theory, specifying five conditions claimed to be individually necessary and jointly sufficient for a person to be labelled as gifted. These conditions not only help us understand why some people are labelled as 'gifted', but also why some others are not. Third, I consider the relation of the pentagonal theory to explicit theories of giftedness. Fourth, I present data supporting the theory. Finally, I discuss some implications of the pentagonal theory for gifted education.
Formation of pentagonal atomic chains in BCC Fe nanowires
Sainath, G.; Choudhary, B. K.
2016-12-01
For the first time, we report the formation of pentagonal atomic chains during tensile deformation of ultrathin BCC Fe nanowires. Extensive molecular dynamics simulations have been performed on /{110} BCC Fe nanowires with different cross-section widths varying from 0.404 to 3.634 nm at temperatures ranging from 10 to 900 K. The results indicate that, above a certain temperature, long and stable pentagonal atomic chains form in BCC Fe nanowires with cross-section width less than 2.83 nm. The temperature above which the pentagonal chains form increases with increasing nanowire size. The pentagonal chains have been observed to be highly stable over large plastic strains and contribute to high ductility in Fe nanowires.
A new pentagon identity for the tetrahedron index
Gahramanov, Ilmar
2013-01-01
Recently Kashaev, Luo and Vartanov, using the reduction from a four-dimensional superconformal index to a three-dimensional partition function, found a pentagon identity for a special combination of hyperbolic Gamma functions. Following their idea we have obtained a new pentagon identity for a certain combination of so-called tetrahedron indices arising from the equality of superconformal indices of dual three-dimensional N=2 supersymmetric theories.
Nobuo Kimizuka
2011-08-01
Pentagonal conjugates of the tryptophan-zipper-forming peptide (CKTWTWTE) with a pentaazacyclopentadecane core (Pentagonal-Gly-Trpzip and Pentagonal-Ala-Trpzip) were synthesized and their self-assembling behaviors were investigated in water. Pentagonal-Gly-Trpzip self-assembled into nanofibers with a width of about 5 nm in neutral water (pH 7) via formation of the tryptophan zipper, which irreversibly converted to nanoribbons on heating. In contrast, Pentagonal-Ala-Trpzip formed irregular aggregates in water.
The standardised copy of pentagons test
Terzoglou Vassiliki A
2011-04-01
Background: The 'double-diamond copy' task is a simple paper-and-pencil test that forms part of the Bender-Gestalt Test and the Mini Mental State Examination (MMSE). Although it is a widely used test, its method of scoring is crude and its psychometric properties are not adequately known. The aim of the present study was to develop a sensitive and reliable method of administration and scoring. Methods: The study sample included 93 normal control subjects (53 women and 40 men) aged 35.87 ± 12.62 years and 127 patients suffering from schizophrenia (54 women and 73 men) aged 34.07 ± 9.83 years. Results: The scoring method was based on the frequencies of responses of healthy controls and proved to be relatively reliable, with Cronbach's α equal to 0.61, a test-retest correlation coefficient equal to 0.41 and inter-rater reliability equal to 0.52. The factor analysis produced two indices and six subscales of the Standardised Copy of Pentagons Test (SCPT). The total score as well as most of the individual items and subscales distinguished between controls and patients. The discriminant function correctly classified 63.44% of controls and 75.59% of patients. Discussion: The SCPT seems to be a satisfactory, reliable and valid instrument which is easy to administer, is suitable for use in non-organic psychiatric patients and demands minimal time. Further research is necessary to test its psychometric properties and its usefulness and applications as a neuropsychological test.
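Cronbach's α, the reliability figure quoted in this record (0.61 for the SCPT), is a standard statistic computable from per-item scores. A minimal sketch with invented data; the formula is the usual one, not the SCPT's actual item set:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.
    item_scores: one list of per-subject scores per item."""
    k = len(item_scores)            # number of items
    n = len(item_scores[0])         # number of subjects

    def var(xs):                    # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Per-subject total scores across all items
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    item_var = sum(var(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / var(totals))

# Made-up scores for 3 items rated on 5 subjects (illustrative only):
items = [
    [1, 2, 2, 3, 4],
    [1, 2, 3, 3, 4],
    [2, 2, 2, 4, 4],
]
print(round(cronbach_alpha(items), 3))   # -> 0.941
```

Values near 1 indicate highly consistent items; the SCPT's 0.61 sits in the "moderate" range often deemed acceptable for short screening instruments.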
Kazi Takpaya; Wei Gang
2003-01-01
Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple-Input Multiple-Output (MIMO) channels can be reformulated as a problem of blind source separation. It has been shown that the blind identification via decorrelating sub-channels method can recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of the sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.
Integral pentagon relations for 3d superconformal indices
Gahramanov, Ilmar; Rosengren, Hjalmar
2014-01-01
The superconformal index of a three-dimensional supersymmetric field theory can be expressed in terms of basic hypergeometric integrals. By comparing the indices of dual theories, one can find new integral identities for basic hypergeometric integrals. Some of these integral identities have the form of the pentagon identity which can be interpreted as the 2-3 Pachner move for triangulated 3-manifolds.
Defending a New Domain: The Pentagon’s Cyberstrategy
2010-01-01
In 2008, the U.S. Department of Defense suffered a significant compromise of ... professionals annually as a few years ago. Following industry practices, the Pentagon's network administrators are now trained in "ethical hacking," which
The Pentagon's Military Analyst Program
Valeri, Andy
2014-01-01
This article provides an investigatory overview of the Pentagon's military analyst program, what it is, how it was implemented, and how it constitutes a form of propaganda. A technical analysis of the program is applied using the theoretical framework of the propaganda model first developed by Noam Chomsky and Edward S. Herman. Definitions…
G. Munhoven
2009-06-01
Many sensitivity studies have been carried out, using climate models of different degrees of complexity to test the climate response to Last Glacial Maximum boundary conditions. Here, instead of adding the forcings successively as in most previous studies, we applied the separation method of U. Stein and P. Alpert (1993), in order to determine rigorously the different contributions of the boundary condition modifications, and to isolate the pure contributions from the interactions among the forcings. We carried out a series of sensitivity experiments with the intermediate-complexity model Planet Simulator, investigating the contributions of the ice sheet expansion and elevation, the lowering of the atmospheric CO2 and the vegetation cover change on the LGM climate. The separation of the ice cover and orographic contributions shows that the ice albedo effect is the main contributor to the cooling of the Northern Hemisphere, whereas orography has only a local cooling impact over the ice sheets. The expansion of ice cover in the Northern Hemisphere causes a disruption of the tropical precipitation and a southward shift of the ITCZ. The orographic forcing mainly contributes to the disruption of the atmospheric circulation in the Northern Hemisphere, leading to a redistribution of the precipitation, but weakly impacts the tropics. The isolated vegetation contribution also induces strong cooling over the continents of the Northern Hemisphere that further affects the tropical precipitation and reinforces the southward shift of the ITCZ when combined with the ice forcing. The combinations of the forcings generate many non-linear interactions that reinforce or weaken the pure contributions, depending on the climatic mechanism involved, but they are generally weaker than the pure contributions. Finally, the comparison between the LGM simulated climate and climatic reconstructions over Eurasia suggests that our results reproduce well the south-west to north-east temperature gradients over Eurasia.
G. Munhoven
2009-01-01
Many sensitivity studies have been carried out, using simplified GCMs to test the climate response to Last Glacial Maximum boundary conditions. Here, instead of adding the forcings successively as in previous studies, we applied the separation method of Stein and Alpert (1993), in order to determine rigorously the different contributions of the boundary condition modifications, and to isolate the pure contributions from the interactions among the forcings. We carried out a series of sensitivity experiments with the intermediate-complexity model Planet Simulator, investigating the contributions of the ice sheet expansion and elevation, the lowering of the atmospheric CO2 and the vegetation cover change on the LGM climate. The results clearly identify the ice cover forcing as the main contributor to the cooling of the Northern Hemisphere, and also to the tropical precipitation disruption leading to the southward shift of the ITCZ, while the orographic forcing mainly contributes to the disruption of the atmospheric circulation in the Northern Hemisphere. The isolated vegetation contribution also induces strong cooling over the continents of the Northern Hemisphere, which is sufficient to further affect the tropical precipitation and reinforce the southward shift of the ITCZ when combined with the ice forcing. The combinations of the forcings generate many non-linear interactions that reinforce or weaken the pure contributions, depending on the climatic mechanism involved, but they are generally weaker than the pure contributions. Finally, the comparison between the LGM simulated climate and climatic reconstructions over Eurasia suggests that our results reproduce well the south-west to north-east temperature gradients over Eurasia.
Crystallographic interpretation of Galois symmetries for magnetic pentagonal ring
Milewski, J.; Lulek, T.; Łabuz, M.
2017-03-01
The Galois symmetry of exact Bethe Ansatz eigenstates for the magnetic pentagonal ring within the XXX model is investigated by comparison with crystallographic constructions of space groups. It follows that the arithmetic symmetry of Bethe parameters for the interior of the Brillouin zone admits a crystallographic interpretation in terms of the periodic square Z2 × Z2, that is, the two-dimensional crystal lattice with Born-von Karman period two in both directions.
Pentagonal monolayer crystals of carbon, boron nitride, and silver azide
Yagmurcukardes, M., E-mail: mehmetyagmurcukardes@iyte.edu.tr; Senger, R. T., E-mail: tugrulsenger@iyte.edu.tr [Department of Physics, Izmir Institute of Technology, 35430 Urla, Izmir (Turkey); Sahin, H.; Kang, J.; Torun, E.; Peeters, F. M. [Department of Physics, University of Antwerp, Campus Groenenborgerlaan, 2020, Antwerp (Belgium)
2015-09-14
In this study, we present a theoretical investigation of the structural, electronic, and mechanical properties of pentagonal monolayers of carbon (p-graphene), boron nitride (p-B2N4 and p-B4N2), and silver azide (p-AgN3) by performing state-of-the-art first-principles calculations. Our total energy calculations suggest feasible formation of monolayer crystal structures composed entirely of pentagons. In addition, electronic band dispersion calculations indicate that while p-graphene and p-AgN3 are semiconductors with indirect bandgaps, the p-BN structures display metallic behavior. We also investigate the mechanical properties (in-plane stiffness and Poisson's ratio) of the four different pentagonal structures under uniaxial strain. p-graphene is found to have the highest stiffness value, and the corresponding Poisson's ratio is found to be negative. Similarly, p-B2N4 and p-B4N2 have negative Poisson's ratio values. On the other hand, p-AgN3 has a large and positive Poisson's ratio. In dynamical stability tests based on calculated phonon spectra of these pentagonal monolayers, we find that only p-graphene and p-B2N4 are stable, while p-AgN3 and p-B4N2 are vulnerable against vibrational excitations.
Asymmetric Pentagonal Metal Meshes for Flexible Transparent Electrodes and Heaters.
Lordan, Daniel; Burke, Micheal; Manning, Mary; Martin, Alfonso; Amann, Andreas; O'Connell, Dan; Murphy, Richard; Lyons, Colin; Quinn, Aidan J
2017-02-08
Metal meshes have emerged as an important class of flexible transparent electrodes. We report on the characteristics of a new class of asymmetric meshes, tiled using a recently discovered family of pentagons. Micron-scale meshes were fabricated on flexible polyethylene terephthalate substrates via optical lithography, metal evaporation (Ti 10 nm, Pt 50 nm), and lift-off. Three different designs were assessed, each with the same tessellation pattern and line width (5 μm), but with different sizes of the fundamental pentagonal unit. Good mechanical stability was observed for both tensile strain and compressive strain. After 1000 bending cycles, devices subjected to tensile strain showed fractional resistance increases in the range of 8-17%, while devices subjected to compressive strain showed fractional resistance increases in the range of 0-7%. The performance of the pentagonal metal mesh devices as visible transparent heaters via Joule heating was also assessed. Rapid response times (∼15 s) at low bias voltage (≤5 V) and good thermal resistance characteristics (213-258 °C·cm²/W) were found using measured thermal imaging data. Deicing of an ice-bearing glass coupon on top of the transparent heater was also successfully demonstrated.
Pentagonal shaped microstrip patch antenna in wireless capsule endoscopy system
Bondili Kohitha Bai
2012-01-01
Wireless capsule endoscopy is a good option for exploring inaccessible areas of the small intestine and for inspection of the gastrointestinal tract. This technique brings less pain compared to conventional endoscopy. The wireless endoscopy system comprises three main modules: an ingestible capsule that is swallowed by the patient, an external control unit, and a display device for image display. In this paper we propose a pentagonal-shaped microstrip patch antenna for a wireless capsule endoscopy system. Characteristics of a single microstrip patch, such as low gain, light weight, thin thickness, and small bandwidth, make it popular. This kind of antenna is aggressively miniaturized to meet the requirements of the wireless capsule endoscope. The simulation results show that the designed circularly polarized (CP) pentagonal-shaped microstrip patch antenna gives an axial ratio of 0.6023 at 2.38 GHz and a CP axial-ratio bandwidth of 36 MHz (1.5%). The antenna designed for wireless capsule endoscopy is a proposed one, which may work effectively when compared to other antennas in the capsule.
Li, Jie-Wei; Liu, Yu-Yu; Xie, Ling-Hai; Shang, Jing-Zhi; Qian, Yan; Yi, Ming-Dong; Yu, Ting; Huang, Wei
2015-02-21
Defect engineering and the non-covalent interaction strategy allow for dramatically tuning the optoelectronic features of graphene. Herein, we theoretically investigated the intrinsic mechanism of non-covalent interactions between pentagon-octagon-pentagon (5-8-5) defect graphene (DG) and the adsorbed molecules tetrathiafulvalene (TTF), perfluoronaphthalene (FNa), tetracyanoquinodimethane (TCNQ) and 2,3,5,6-tetrafluoro-7,7,8,8-tetracyanoquinodimethane (F4TCNQ), through geometry, distance, interaction energy, Mulliken charge distribution, terahertz-frequency vibration, visualization of the interactions, charge density difference, electronic transition behaviour, band structure and density of states. All the calculations were performed using density functional theory including a dispersion correction (DFT-D). The calculated results indicate that the cyano (CN) groups (electron-withdrawing groups) in TCNQ and F4TCNQ, rather than the F groups, gain electrons from DG effectively and exhibit much stronger interactions via wavefunction overlap with DG, leading to a short non-covalent interaction distance, a large interaction energy and a red-shift of the out-of-plane terahertz-frequency vibration, changing the bands near the Fermi level and enhancing the infrared (IR) light absorption significantly. The enhancement of such IR absorbance, offering broader absorption (from 300 to 1200 nm), will benefit light harvesting in potential solar energy conversion applications.
Construction of the discrete hull for the combinatorics of a regular pentagonal tiling of the plane
Ramirez-Solano, Maria
2016-01-01
The article "A 'regular' pentagonal tiling of the plane" by P. L. Bowers and K. Stephenson, Conform. Geom. Dyn. 1, 58–86, 1997, defines a conformal pentagonal tiling. This is a tiling of the plane with remarkable combinatorial and geometric properties. However, it doesn't have finite local complexity…
Design of A Pentagon Microstrip Antenna for Radar Altimeter Application
K. RamaDevi
2012-11-01
Full Text Available In navigational applications, radar and satellite systems require a radar altimeter. The working frequency of this system is 4.2 to 4.3 GHz, and it also requires light weight, low profile, and high gain antennas. The above-mentioned application is possible with the microstrip antenna, also known as the planar antenna. In this paper, microstrip antennas are designed at 4.3 GHz (C-band) with rectangular and circular patch shapes, as single elements and as arrays with parasitic elements placed in H-plane coupling. The performance of all these shapes is analyzed in terms of radiation pattern, half-power points, gain, and impedance bandwidth in MATLAB. This work is extended here with designs in different shapes like Rhombic, Pentagon, Octagon and Edges-12, etc. Further, these parameters are simulated in the ANSOFT HFSS™ V9.0 simulator.
Equilateral pentagon polarization maintaining photonic crystal fibre with low nonlinearity
Yang Han-Rui; Li Xu-You; Hong Wei; Hao Jin-Hui
2012-01-01
A new pentagon polarization maintaining photonic crystal fibre (PM-PCF) with low nonlinearity is introduced. The full vector finite element method was used to investigate the distribution and effective area of the modal field, the nonlinear properties, the effective indices of the two orthogonal polarization modes, and the birefringence of the new PM-PCF. It is found that the birefringence of the new polarization maintaining photonic crystal fibre can easily reach the order of 10⁻⁴, and that it offers higher birefringence, a larger effective mode-field area and lower nonlinearity than the traditional hexagonal polarization maintaining photonic crystal fibre with the same hole pitch, hole diameter, and ring number. This is important for sensing and communication applications, and it has particular potential for the fibre optical gyroscope.
Liu, Fupin; Wang, Song; Gao, Cong Li
2017-01-01
Fused pentagons result in an increase of local steric strain according to the isolated pentagon rule (IPR), and for all reported non-IPR clusterfullerenes multiple (two or three) metals are required to stabilize the strained fused pentagons, making it difficult to access single-atom properti…
Fulgueras, Alyssa Marie; Poudel, Jeeban; Kim, Dong Sun; Cho, Jungho [Kongju National University, Cheonan (Korea, Republic of)
2016-01-15
The separation of ethylenediamine (EDA) from aqueous solution is a challenging problem because the mixture forms an azeotrope. Pressure-swing distillation (PSD) was investigated as a method of separating the azeotropic mixture. For a maximum-boiling azeotropic system, a pressure change does not greatly affect the azeotropic composition; nevertheless, the feasibility of using PSD was analyzed through process simulation. Experimental vapor-liquid equilibrium data for the water-EDA system were studied to select a suitable thermodynamic model. This study performed an optimization of design parameters for each distillation column. Different combinations of operating pressures for the low- and high-pressure columns were used for each PSD simulation case. After the most efficient operating pressures were identified, two column configurations, low-high (LP+HP) and high-low (HP+LP), were further compared. Heat integration was applied to the PSD system to reduce low- and high-temperature utility consumption.
Anonymous
2003-01-01
Based on some necessary conditions for double pyramidal central configurations with a concave pentagonal base, the existence and uniqueness of a class of such configurations in the 7-body problem are proved for any given ratio of masses, and the range of the ratio between radius and half-height is obtained within which the 7 bodies involved form a central configuration, or form a central configuration uniquely.
Da Costa, M J; Colson, G; Frost, T J; Halley, J; Pesti, G M
2017-09-01
The objective of this experiment was to determine the maximum net returns digestible lysine (dLys) levels (MNRL) when maintaining the ideal amino acid ratio for starter diets of broilers raised sex separate or comingled (straight-run). A total of 3,240 Ross 708 chicks was separated by sex and placed in 90 pens by 2 rearing types: sex separate (36 males or 36 females) or straight-run (18 males + 18 females). Each rearing type was fed 6 starter diets (25 d) formulated to have dLys levels between 1.05 and 1.80%. A common grower diet with 1.02% of dLys was fed from 25 to 32 days. Body weight gain (BWG) and feed intake were assessed at 25 and 32 d for performance evaluation. Additionally, at 26 and 33 d, 4 birds per pen were sampled for carcass yield evaluation. Data were modeled using response surface methodology in order to estimate feed intake and whole carcass weight at 1,600 g live BW. Returns over feed cost were estimated for a 1.8-million-broiler complex of each rearing system under 9 feed/meat price scenarios. Results indicated that females needed more feed to reach market weight, followed by straight-run birds, and then males. At medium meat and feed prices, female birds had MNRL at 1.07% dLys, whereas straight-run and males had MNRL at 1.05%. As feed and meat prices increased, females had MNRL increased up to 1.15% dLys. Sex separation resulted in increased revenue under certain feed and meat prices, and before sex separation cost was deducted. When the sexing cost was subtracted from the returns, sex separation was not shown to be economically viable when targeting birds for light market BW. © 2017 Poultry Science Association Inc.
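The response-surface step described above can be illustrated with a pure-Python quadratic fit followed by a grid search for the maximum-net-returns dLys level (MNRL). All numbers below (gains, prices) are invented for illustration; the study's actual model also involves feed intake and carcass weight, and its prices differ.

```python
def fit_quadratic(x, y):
    """Least-squares fit y ~ c0 + c1*x + c2*x^2 via the 3x3 normal equations
    (a minimal stand-in for the response-surface software used in the study)."""
    A = [[sum(xi ** (i + j) for xi in x) for j in range(3)] for i in range(3)]
    b = [sum((xi ** i) * yi for xi, yi in zip(x, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (b[r] - sum(A[r][k] * coef[k] for k in range(r + 1, 3))) / A[r][r]
    return coef  # [c0, c1, c2]

# Invented illustration data: body-weight gain (kg) vs. % digestible lysine.
dlys = [1.05, 1.20, 1.35, 1.50, 1.65, 1.80]
bwg = [1.28, 1.35, 1.39, 1.41, 1.40, 1.38]
c0, c1, c2 = fit_quadratic(dlys, bwg)

# Net returns at assumed prices; the maximizing dLys level plays the role of MNRL.
meat_price, lys_cost = 1.5, 0.4  # hypothetical $/kg meat and $/unit dLys
levels = [1.05 + 0.005 * k for k in range(151)]
mnrl = max(levels, key=lambda L: meat_price * (c0 + c1 * L + c2 * L * L) - lys_cost * L)
```

Raising `meat_price` relative to `lys_cost` shifts the optimum toward higher dLys, mirroring the price-scenario sensitivity reported in the abstract.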
Design of modified pentagonal patch antenna on defective ground for Wi-Max/WLAN application
Rawat, Sanyog; Sharma, K. K.
2016-04-01
This paper presents the design and performance of a modified pentagonal patch antenna with a defective ground plane. A pentagonal slot is inserted in the pentagonal patch, and a slot-loaded ground with optimized dimensions is used to make the antenna resonate at two frequencies. The geometry operates at two resonant frequencies (2.5 GHz and 5.58 GHz) and offers impedance bandwidths of 864 MHz and 554 MHz in the two bands of interest. The proposed antenna covers the lower band (2.45 to 2.484/2.495 to 2.695 GHz) and upper band (5.15 to 5.825 GHz/5.25 to 5.85 GHz) allocated to Wi-Max and WLAN communication systems.
Dr. M. S. Annie Christi
2016-02-01
Full Text Available This paper presents a solution methodology for the transportation problem in an intuitionistic fuzzy environment, in which costs are represented by pentagonal intuitionistic fuzzy numbers. The transportation problem is a particular class of linear programming associated with day-to-day activities in real life. It helps in solving problems on the distribution and transportation of resources from one place to another. The objective is to satisfy the demand at the destinations from the supply constraints at the minimum possible transportation cost. The problem is solved using a ranking technique called the accuracy function for pentagonal intuitionistic fuzzy numbers, together with Russell's method. An illustrative example is given to verify this approach.
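The ranking step can be sketched in a few lines. The accuracy-function form below (the mean of the five membership and five non-membership points) is one common definition in the fuzzy-numbers literature and is an assumption here; the paper's exact formula may differ.

```python
def accuracy(membership, nonmembership):
    """Rank a pentagonal intuitionistic fuzzy cost by a scalar score.

    `membership` and `nonmembership` are the five defining points of the
    pentagonal membership and non-membership functions. The averaging form
    used here is a hypothetical stand-in for the paper's accuracy function.
    """
    assert len(membership) == 5 and len(nonmembership) == 5
    return (sum(membership) + sum(nonmembership)) / 10.0

# Defuzzify a small cost matrix of pentagonal intuitionistic fuzzy numbers
# so that a crisp method (e.g. Russell's approximation) can then be applied.
fuzzy_costs = [
    [((1, 2, 3, 4, 5), (2, 3, 4, 5, 6)), ((2, 3, 4, 5, 6), (3, 4, 5, 6, 7))],
    [((0, 1, 2, 3, 4), (1, 2, 3, 4, 5)), ((1, 2, 3, 4, 5), (2, 3, 4, 5, 6))],
]
crisp_costs = [[accuracy(m, n) for m, n in row] for row in fuzzy_costs]
```

Once the matrix is crisp, Russell's method proceeds exactly as in the classical transportation problem.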
Comparison of aerodynamic characteristics of pentagonal and hexagonal shaped bridge decks
Haque, Md. Naimul; Katsuchi, Hiroshi; Yamada, Hitoshi; Nishio, Mayuko
2016-07-01
Aerodynamics of a long-span bridge deck should be well understood for an efficient design of the bridge system. Various deck shapes are recommended and adopted for practical bridges, yet not all of their aerodynamic behaviors are well interpreted. In the present study, a numerical investigation was carried out to explore the aerodynamic characteristics of pentagonal and hexagonal shaped bridge decks. A relative comparison of steady-state aerodynamic responses was made, and the flow field was critically analyzed to better understand the aerodynamic responses. It was found that the hexagonal bridge deck has better aerodynamic characteristics than the pentagonal one.
Ramirez-Solano, Maria
The article "A 'regular' pentagonal tiling of the plane" by Philip L. Bowers and Kenneth Stephenson defines a conformal pentagonal tiling. This is a tiling of the plane with remarkable combinatorial and geometric properties. However, it doesn't have finite local complexity in any usual sense, and th…
On the chromatic number of pentagon-free graphs of large minimum degree
Thomassen, Carsten
2007-01-01
We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the triangle…
Pentagonal dodecahedron methane hydrate cage and methanol system—An ab initio study
Snehanshu Pal; T K Kundu
2013-03-01
Density functional theory based studies have been performed to elucidate the role of methanol as a methane hydrate inhibitor. Geometry optimization, natural bond orbital (NBO) analysis, Mulliken charge determination, electrostatic potential evaluation and vibrational frequency calculation for a methane hydrate pentagonal dodecahedron cage, with and without the presence of methanol, have been carried out using WB97XD/6-31++G(d,p). Calculated geometrical parameters and interaction energies indicate that methanol destabilizes the pentagonal dodecahedron methane hydrate cage (1CH4@512), with and without the presence of a sodium ion. NBO analysis and the red shift of vibrational frequencies reveal that hydrogen bond formation between methanol and the water molecules of the 1CH4@512 cage is favourable once its original hydrogen-bonded network is broken.
The formation of pentagonal Ni nanowires: dependence on the stretching direction and the temperature
Garcia-Mochales, P. [Departamento de Fisica de la Materia Condensada, Facultad de Ciencias, Universidad Autonoma de Madrid, c/Francisco Tomas y Valiente 7, Cantoblanco, 28049 Madrid (Spain); Paredes, R. [Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas, Apto. 20632, Caracas 1020A (Venezuela); Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas, c/Sor Juana Ines de la Cruz 3, Cantoblanco, 28049 Madrid (Spain); Pelaez, S.; Serena, P.A. [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas, c/Sor Juana Ines de la Cruz 3, Cantoblanco, 28049 Madrid (Spain)
2008-06-15
We have constructed computational minimum cross-section histograms that statistically unveil the presence of preferred configurations during the breakage of Ni nanowires. The computed histograms show a strong dependence on the nanowire stretching direction. For the [100] and [110] stretching directions we have observed a very large peak associated with a minimum cross-section of 5 atoms. We have confirmed that the configurations that contribute to this peak are staggered pentagonal nanowires. We have found that the formation of these nanowires is enhanced by increasing the temperature up to 550 K. At higher temperatures, the formation of pentagonal nanowires declines due to competition with the nanowire melting processes.
Novel platens to measure the hardness of a pentagonal shaped tablet.
Malladi, Jaya; Sidik, Kurex; Wu, Sutan; McCann, Ryan; Dougherty, Jeffrey; Parab, Prakash; Carragher, Thomas
2017-03-01
Tablet hardness, a measure of the breaking force of a tablet, depends on numerous factors, including the shape of the tablet and the mode of application of force. For instance, when a pentagonal-shaped tablet was tested with a traditional hardness tester with flat platens, there was a large variation in hardness measurements. This was due to the propensity of the vertices of the tablet to crush, referred to as an "improper break". This article describes a novel approach to measuring the hardness of pentagonal-shaped tablets using modified platens. The modified platens give more uniform loading than flat platens, because they reduce loading on the vertex of the pentagon and apply forces on the tablet edges to generate reproducible tablet fracture. The robustness of the modified platens was assessed using a series of studies, including feasibility and Gauge Repeatability & Reproducibility (R&R) studies. A key finding was that improper breaks, generated frequently with a traditional hardness tester using flat platens, were eliminated. The Gauge R&R study revealed that tablets tested with the novel platens gave consistent hardness values, independent of batch, hardness level, day of testing, operator, and tablet dosage strength.
Pugnaloni, Luis A.; Carlevaro, C. Manuel; Kramár, M.; Mischaikow, K.; Kondic, L.
2016-06-01
The force network of a granular assembly, defined by the contact network and the corresponding contact forces, carries valuable information about the state of the packing. Simple analysis of these networks based on the distribution of force strengths is rather insensitive to changes in preparation protocols or to the types of particles. In this and the companion paper [Kondic et al., Phys. Rev. E 93, 062903 (2016), 10.1103/PhysRevE.93.062903], we consider two-dimensional simulations of tapped systems built from frictional disks and pentagons, and study the structure of the force networks of granular packings by considering the network's topology as force thresholds are varied. We show that the numbers of clusters and loops observed in the force networks as a function of the force threshold are markedly different for disks and pentagons if the tangential contact forces are considered, whereas they are surprisingly similar for the network defined by the normal forces. In particular, the results indicate that, overall, the force network is more heterogeneous for disks than for pentagons. Such differences in network properties are expected to lead to different macroscale responses of the considered systems, despite the fact that averaged measures (such as the force probability density function) do not show any obvious differences. Additionally, we show that the states obtained by tapping with different intensities that display similar packing fractions are difficult to distinguish based on simple topological invariants.
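The cluster-and-loop counts described above can be illustrated on a toy contact network: for any graph, the number of independent loops is β₁ = E − V + C, where C is the number of connected components. The (i, j, force) tuple format below is invented for illustration and is not the simulations' actual data format.

```python
def betti_numbers(contacts, threshold):
    """Count connected components (beta_0) and independent loops (beta_1)
    of the subgraph of contacts whose force exceeds `threshold`.

    `contacts` is a list of (i, j, force) tuples -- a toy stand-in for a
    simulated contact network.
    """
    edges = [(i, j) for i, j, f in contacts if f > threshold]
    nodes = {n for e in edges for n in e}
    parent = {n: n for n in nodes}

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = len(nodes)
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
    # For any graph: beta_1 = E - V + C independent cycles (loops).
    loops = len(edges) - len(nodes) + components
    return components, loops
```

Scanning `threshold` upward from zero traces how components merge or vanish and loops disappear, which is the raw ingredient of the threshold-dependent cluster and loop counts discussed above.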
Analysis of Doppler Lidar Data Acquired During the Pentagon Shield Field Campaign
Newsom, Rob K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2011-04-14
Observations from two coherent Doppler lidars deployed during the Pentagon Shield field campaign are analyzed in conjunction with other sensors to characterize the overall boundary-layer structure and identify the dominant flow characteristics during the entire two-week field campaign. Convective boundary layer (CBL) heights and cloud base heights (CBH) are estimated from an analysis of the lidar signal-to-noise ratio (SNR), and mean wind profiles are computed using a modified velocity-azimuth-display (VAD) algorithm. Three-dimensional wind field retrievals are computed from coordinated overlapping volume scans, and the results are analyzed by visualizing the flow in horizontal and vertical cross sections. The VAD winds show that southerly flows dominated during the two-week field campaign. Low-level jets (LLJ) were evident on all but two of the nights during the field campaign. The LLJs tended to form a couple of hours after sunset and reach maximum strength between 03 and 07 UTC. The surface friction velocities show distinct local maxima during four nights when strong LLJs formed. Estimates of the convective boundary layer height and residual layer height are obtained through an analysis of the vertical gradient of the lidar SNR. A strong minimum in the SNR gradient often develops just above the surface after sunrise. This minimum is associated with the developing CBL and increases rapidly during the early portion of the daytime period. On several days, this minimum continues to increase until about sunset. Secondary minima in the SNR gradient were also observed at higher altitudes and are believed to be remnants of the CBL height from previous days, i.e., the residual layer height. The dual-Doppler analysis technique used in this study makes use of hourly averaged radial velocity data to produce three-dimensional grids of the horizontal velocity components and the horizontal velocity variance. Visualization of horizontal and vertical cross sections…
Nazerani Shahram
2012-08-01
Full Text Available Objective: Interphalangeal joint contracture is a challenging complication of hand trauma, which reduces the functional capacity of the entire hand. In this study we evaluated the results of soft tissue distraction with no collateral ligament transection or volar plate removal, in comparison with the traditional operation of contracture release with partial ligament transection and volar plate removal. Methods: In this prospective study, a total of 40 patients in two equal groups (A and B) were studied. Patients suffering from chronic flexion contracture of abrasive traumatic nature were included. Group A was treated by soft tissue distraction using the pentagonal frame technique, and in Group B contracture release was followed by finger splinting. Results: The analyzed data revealed a significant difference between the two groups in range of motion of the proximal interphalangeal joints (P<0.05), while the difference was not significant in the distal interphalangeal joints (P>0.05). There was no significant difference in the degree of flexion contracture between groups (P>0.05). Regression analysis showed that using the pentagonal frame technique significantly increased the mean improvement in range of motion of the proximal interphalangeal joints (P<0.001), while the higher the preoperative flexion contracture of the proximal interphalangeal joints, the lower the improvement achieved in their range of motion after intervention (P<0.001). Conclusion: Soft tissue distraction using the pentagonal frame technique, with gradual and continuous distraction of the collateral ligament and surrounding joint tissues combined with skin Z-plasty, significantly improves the range of motion in patients with chronic traumatic flexion deformity of the proximal and/or distal interphalangeal joints. Key words: Osteogenesis, distraction; Finger joint; Hand deformities
Cerdá, Jorge I.; Sławińska, Jagoda; Le Lay, Guy; Marele, Antonela C.; Gómez-Rodríguez, José M.; Dávila, María E.
2016-01-01
Carbon and silicon pentagonal low-dimensional structures attract great interest as they may lead to new exotic phenomena such as topologically protected phases or increased spin–orbit effects. However, no pure pentagonal phase has yet been realized for either of them. Here we unveil, through extensive density functional theory calculations and scanning tunnelling microscope simulations confronted with key experimental facts, the hidden pentagonal nature of single- and double-strand chiral Si nano-ribbons perfectly aligned on Ag(110) surfaces, whose structure had remained elusive for over a decade. Our study reveals an unprecedented one-dimensional Si atomic arrangement solely comprising almost perfect alternating pentagons residing in the missing-row troughs of the reconstructed surface. We additionally characterize the precursor structure of the nano-ribbons, which consists of a Si cluster (nano-dot) occupying a silver di-vacancy in a quasi-hexagonal configuration. The system thus materializes a paradigmatic shift from a silicene-like packing to a pentagonal one. PMID:27708263
Pietrobon, Brendan; McEachran, Matthew; Kitaev, Vladimir
2009-01-27
Monodisperse, size-controlled, faceted pentagonal silver nanorods were synthesized by thermal regrowth of decahedral silver nanoparticles (AgNPs) in aqueous solution at 95 °C, using citrate as a reducing agent. The width of the silver nanorods was determined by the size of the starting decahedral particle, while the length was varied from 50 nm to 2 μm by the amount of new silver added to the growth solution. Controlled regrowth allowed us to produce monodisperse AgNPs with the shape of an elongated pentagonal dipyramid (regular Johnson solid, J16). Faceted pentagonal particles exhibited remarkable optical properties, with sharp plasmon resonances precisely tunable across the visible and NIR. Due to the narrow size distribution, faceted pentagonal silver nanorods readily self-assembled into 3-D arrays similar to smectic mesophases. Hexagonal arrangement in the array completely overrode the five-fold symmetry of the nanorods. Overall, our findings highlight the importance of pentagonal symmetry in metal nanoparticles and offer a facile method for the preparation of monodisperse AgNPs with controlled dimensions and plasmonic properties, promising for optical applications and functional self-assembly.
Nanostructure and Optical Properties of Silver Helical Pentagon Nanosculptured Thin Films
Hadi Savaloni
2014-01-01
Full Text Available Silver helical pentagon shaped nanosculptured thin films (HPNSTFs) were produced using the oblique angle deposition method in conjunction with rotation of the sample holder under controlled conditions. The s-polarization extinction spectra obtained at different azimuthal angles (φ) and a low incidence angle (i.e., 10°) from the Ag HPNSTF did not show a significant change in the plasmon peak position, while at a higher incidence angle (i.e., 70°) a blue shift appeared for the broad peak that was observed at the lower incidence angle (i.e., 10°) and occurred at a lower wavelength. In the case of p-polarized light, a very broad peak was obtained at the 70° incidence angle for different φ angles; compared with the lower incidence angle results, it can be concluded that it undergoes a red shift. Polar diagrams of the samples showed slight anisotropy, which should be due to the high symmetry of the pentagon helical structure.
Kondic, L.; Kramár, M.; Pugnaloni, Luis A.; Carlevaro, C. Manuel; Mischaikow, K.
2016-06-01
In the companion paper [Pugnaloni et al., Phys. Rev. E 93, 062902 (2016), 10.1103/PhysRevE.93.062902], we use classical measures based on force probability density functions (PDFs), as well as Betti numbers (quantifying the number of components, related to force chains, and loops), to describe the force networks in tapped systems of disks and pentagons. In the present work, we focus on the use of persistence analysis, which allows us to describe these networks in much more detail. This approach allows us not only to describe but also to quantify the differences between the force networks in different realizations of a system, in different parts of the considered domain, or in different systems. We show that persistence analysis clearly distinguishes the systems that are very difficult or impossible to differentiate using other means. One important finding is that the differences in force networks between disks and pentagons are most apparent when loops are considered: the quantities describing properties of the loops may differ significantly even if other measures (properties of components, Betti numbers, force PDFs, or the stress tensor) do not distinguish clearly or at all the investigated systems.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and on critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi…
David Machín-Valle
2013-09-01
Full Text Available Cutting tools with mechanical clamping (inserts) are used in high-productivity machining processes. The objective of the study was to determine the influence of turning-process cutting speed on productivity and costs. Cuban-made pentagonal cemented carbide inserts were used. Cr12Mo steel was turned at various cutting speeds, with feed and depth of cut held constant, until the insert reached its flank wear limit (0.5 mm), in order to obtain the equation relating cutting speed to insert life. The influence of cutting speed on machining time and on production costs was also established. It was determined that the cutting speed for maximum productivity is 255.45 m/min and for the lowest cost is 120.03 m/min. Key words: turning, pentagonal inserts, productivity, cutting speed.
P. García-Mochales
2008-01-01
Full Text Available We present molecular dynamics calculations on the evolution of Ni nanowires stretched along the (111) and (100) directions, at two different temperatures. Using a methodology similar to that required to build experimental conductance histograms, we construct minimum cross-section histograms H(Sm). These histograms are useful for understanding the type of favorable atomic configurations appearing during nanowire breakage. We have found that the minimum cross-section histograms obtained for the (111) and (100) stretching directions are rather different. When the nanowire is stretched along the (111) direction, monomer and dimer-like configurations appear, giving rise to well-defined peaks in H(Sm). On the contrary, (100) nanowire stretching presents a different breaking pattern. In particular, we have found, with high probability, the formation of staggered pentagonal nanowires, as has been reported for other metallic species.
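The histogram-building procedure can be mimicked with synthetic breakage traces. Real Sm(t) traces come from the molecular dynamics simulations; the random monotone decrease below is purely an illustrative stand-in.

```python
import random

def minimum_cross_section_trace(n_steps, rng):
    # Toy stand-in for one stretching run: the minimum cross-section Sm
    # (in atoms) decreases stochastically from a bulk-like value down to
    # rupture (Sm = 0). Not the paper's actual dynamics.
    sm, trace = 12.0, []
    while sm > 0 and len(trace) < n_steps:
        sm = max(0.0, sm - rng.random() * 0.5)
        trace.append(sm)
    return trace

def build_histogram(traces, bin_width=1.0):
    # Accumulate H(Sm) over many breakage events, mirroring how
    # experimental conductance histograms are accumulated over runs.
    hist = {}
    for trace in traces:
        for sm in trace:
            b = int(sm // bin_width)
            hist[b] = hist.get(b, 0) + 1
    return hist

rng = random.Random(0)
hist = build_histogram(minimum_cross_section_trace(1000, rng) for _ in range(200))
```

Preferred configurations (e.g. the staggered pentagonal wires with Sm ≈ 5 atoms) would show up as anomalously tall bins in `hist`; in this synthetic version the histogram is featureless by construction.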
"The only feasible means." The Pentagon's ambivalent relationship with the Nuremberg Code.
Moreno, J D
1996-01-01
Convinced that armed conflict with the Soviet Union was all but inevitable, that such conflict would involve unconventional atomic, biological, and chemical warfare, and that research with human subjects was essential to respond to the threat, in the early 1950s the U.S. Department of Defense promulgated a policy governing human experimentation based on the Nuremberg Code. Yet the policymaking process focused on the abstract issue of whether human experiments should go forward at all, ignoring the reality of human subjects research already under way and leaving unanswered ethical questions about how to conduct such research. Documents newly released to the Advisory Committee on Human Radiation Experiments tell the story of the Pentagon policy.
Flow field analysis of a pentagonal-shaped bridge deck by unsteady RANS
Md. Naimul Haque
2016-01-01
Full Text Available Long-span cable-stayed bridges are susceptible to dynamic wind effects due to their inherent flexibility. The fluid flow around the bridge deck should be well understood for the efficient design of an aerodynamically stable long-span bridge system. In this work, the aerodynamic features of a pentagonal-shaped bridge deck are explored numerically. The analytical results are compared with past experimental work to assess the capability of two-dimensional unsteady RANS simulation for predicting the aerodynamic features of this type of deck. The influence of the bottom plate slope on the aerodynamic response and flow features was investigated. By varying the Reynolds number (2 × 10⁴ to 20 × 10⁴), the aerodynamic behavior at high wind speeds is clarified.
Shen, Yao; Chen, YuZhu
2017-07-01
In nature, some molecules have broken conjugate symmetry configurations, which might result in a special optical phenomenon called negative refraction. Under such circumstances, both permittivity and permeability are negative simultaneously. When light at certain frequency is transmitted through a transparent medium (e.g., slide glass) in which a psychoactive drug with negative indexes has been deposited, the refracted light is detected at different locations in the transparent medium. This is because the refracted light travels in a direction opposite to the expected path when it passes through material with a negative index. Using this method, it is possible to distinguish synthetic cannabinoids from other abusive psychoactive drugs in the UV-vis region. In this study, we use a tight-binding model to calculate the permittivity and permeability of pentagonal configurations with different broken symmetries. Furthermore, a qualitative analysis of the negative refraction with respect to heptagonal models is discussed.
Freivogel, William H.
2011-01-01
History has placed the stamp of approval on the publication of the Pentagon Papers, the top-secret history of the Vietnam War. If WikiLeaks editor-in-chief Julian Assange is another Daniel Ellsberg, then it is possible the website's disclosures will be viewed over time as similarly in the public interest. A classroom discussion on the release of…
Liu, Fupin; Wang, Song; Gao, Cong‐Li; Deng, Qingming; Zhu, Xianjun; Kostanyan, Aram; Westerström, Rasmus; Jin, Fei
2017-01-01
Fused pentagons result in an increase of local steric strain according to the isolated pentagon rule (IPR), and for all reported non-IPR clusterfullerenes multiple (two or three) metals are required to stabilize the strained fused pentagons, making it difficult to access single-atom properties. Herein, we report the syntheses and isolation of novel non-IPR mononuclear clusterfullerenes MNC@C76 (M=Tb, Y), in which one pair of strained fused pentagons is stabilized by a mononuclear cluster. The molecular structures of MNC@C76 (M=Tb, Y) were determined unambiguously by single-crystal X-ray diffraction, featuring a non-IPR C2v(19138)-C76 cage entrapping a nearly linear MNC cluster, which is remarkably different from the triangular MNC cluster within the reported analogous clusterfullerenes based on IPR-obeying C82 cages. The TbNC@C76 molecule is found to be a field-induced single-molecule magnet (SMM). PMID:28079303
Li, Mian; Li, Dan; O'Keeffe, Michael; Su, Zhong-Min
2015-08-07
The structure of a recently published metal-organic framework is deconstructed into its underlying net, which is found to be of exceptional complexity. It is shown that this is because of local pentagonal symmetry, and the structure is in fact the simplest possible (minimal transitivity) given this local symmetry.
THE EVOLUTION OF THE MACROECONOMIC STABILISATION PENTAGON IN ROMANIA, CZECH REPUBLIC AND HUNGARY
Ionita Rodica Oana
2015-07-01
This paper aims to achieve the pentagon analysis of macroeconomic stabilization in Romania, the Czech Republic and Hungary in the period 2000 to 2013. It is a comparative analysis of the above-mentioned countries in terms of the five key targets of economic policy, aiming at the increasing, dynamic balance of each economy: economic growth rate, unemployment rate, inflation rate, the budget deficit as a percentage of Gross Domestic Product, and the current account deficit of the balance of payments as a percentage of Gross Domestic Product. The main objective of each economy which passes from a planned to a market economy should be to cease the economic decline, followed by the elimination of internal and external imbalances, and only after that should a continuous growth process follow. All the above-mentioned indicators are represented on an ad hoc graduated scale. The period of research was chosen so as to obtain a view of the macroeconomic policies in transition from one period to another, in order to highlight the common features as well as the main differences in the approach used for economic stabilization. Therefore I have computed the graphical analysis of the macroeconomic stabilization pentagon for the three countries in the period 2000-2013 to capture the dynamics of the economic policy mix. This benchmark tool shows the interdependence which exists between inflation and other important economic indicators. The events that occurred in the period starting with 2007/2008 have raised the interest of economics researchers, highlighting the need for significant improvements in the surveillance of the economic and financial system. The global fragility generated concerns regarding the vulnerabilities and causes which led to the occurrence of such events, thus generating different measurement techniques. Despite all its advantages, this approach has a significant limitation consisting in the fact that it can only reveal a picture without surprising other
Remodeling the Pentagon After the Events of 2/23/06
Banks, T
2006-01-01
The meta-stable SUSY breaking mechanism of Intriligator, Seiberg and Shih can be used to simplify the Pentagon model of TeV scale physics. The simplified model has only a single scalar field and no troublesome low energy axion. One significant signature is $l^+ l^- X$ plus missing energy, where $X$ might be the two photons of gauge mediated models, but is likely to be different. There is a new strongly interacting sector with a scale around 1.5 TeV. The penta-hadrons of this sector have masses of order 6 TeV or more. Dark matter is probably the pseudo-Goldstone boson of spontaneously broken penta-baryon number. This can be a viable dark matter candidate if an appropriate asymmetry in penta-baryon number is generated in the early universe. The pseudo-Goldstone particle has a mass of $\sim 1$ eV and is produced predominantly in flavor changing charged current decays of ordinary particles. The model solves the flavor problems of SUSY, but has two low energy CP violating phases, whose value is strongly constrained...
Li, Xuyou; Liu, Pan; Xu, Zhenlong; Zhang, Zhiyong
2015-08-20
A novel pentagonal photonic crystal fiber with high birefringence, large flattened negative dispersion, and high nonlinearity is proposed. The dispersion and birefringence properties of this structure are simulated and analyzed numerically based on the full-vector finite element method (FEM). Numerical results indicate that the fiber attains a large average dispersion of -611.9 ps/nm/km over the 1460-1625 nm band and -474 ps/nm/km over the 1425-1675 nm band for two kinds of optimized designs, respectively. In addition, the proposed PCF shows a high birefringence of 1.67×10^-2 and 1.75×10^-2 at the operating wavelength of 1550 nm. Moreover, the influence of possible variation in the parameters during the fabrication process on the dispersion and birefringence properties is studied. The proposed PCF would have important applications in polarization-maintaining transmission systems, residual dispersion compensation, supercontinuum generation, and the design of widely tunable wavelength converters based on four-wave mixing.
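The practical meaning of the reported birefringence values can be checked with the standard relation for the polarization beat length, L_B = λ/B. A minimal sketch using the abstract's numbers (the formula is textbook fiber optics, not specific to this paper):

```python
# Polarization beat length L_B = lambda / B, evaluated at the reported
# operating wavelength for the two birefringence values in the abstract.
WAVELENGTH = 1550e-9  # m

def beat_length(birefringence, lam=WAVELENGTH):
    """Beat length in metres for modal birefringence B = |n_x - n_y|."""
    return lam / birefringence

lb_1 = beat_length(1.67e-2)  # first optimized design
lb_2 = beat_length(1.75e-2)  # second optimized design
```

Both values come out below 0.1 mm, which is what makes such fibers attractive for polarization-maintaining transmission.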
Elasticity and yield strength of pentagonal silver nanowires: In situ bending tests
Vlassov, Sergei, E-mail: vlassovs@ut.ee [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu (Estonia); Estonian Nanotechnology Competence Centre, Riia 142, 51014 Tartu (Estonia); Institute of Solid State Physics, University of Latvia, Kengaraga 8, LV-1063 Riga (Latvia); Polyakov, Boris [Institute of Solid State Physics, University of Latvia, Kengaraga 8, LV-1063 Riga (Latvia); Dorogin, Leonid M.; Antsov, Mikk [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu (Estonia); Estonian Nanotechnology Competence Centre, Riia 142, 51014 Tartu (Estonia); Mets, Magnus; Umalas, Madis; Saar, Rando [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu (Estonia); Lõhmus, Rünno; Kink, Ilmar [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu (Estonia); Estonian Nanotechnology Competence Centre, Riia 142, 51014 Tartu (Estonia)
2014-02-14
This paper reports in situ mechanical characterization of silver nanowires (Ag NWs) inside a scanning electron microscope using a cantilevered beam bending technique. Measurements consisted of controlled bending of a cantilevered NW by the tip of an atomic force microscope glued to the force sensor. A relatively high degree of elasticity followed by either plastic deformation or fracture was observed in bending experiments. Experimental data were numerically fitted to a model based on elastic beam theory, and values of Young's modulus and yield strength were extracted. Measurements were performed on twenty Ag NWs with diameters from 76 nm to 211 nm. The average Young's modulus and yield strength were found to be 90 GPa and 4.8 GPa, respectively. In addition, fatigue tests with several million cycles were performed, and high fatigue resistance of Ag NWs was demonstrated. - Highlights: • Mechanical properties of pentagonal silver nanowires were measured. • Cantilevered beam bending technique was used. • Measurements were performed inside a scanning electron microscope. • Young's modulus and yield point were calculated. • Both plastic deformation and fracture of nanowires were observed.
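The elastic-beam-theory fit mentioned above can be illustrated with the small-deflection cantilever formula E = F·L³/(3·δ·I). In the sketch below the pentagonal cross-section is approximated by a circle of the same diameter (I = πD⁴/64); that approximation and the input numbers are illustrative assumptions, not the paper's actual fitting procedure.

```python
import math

def youngs_modulus(force, length, deflection, diameter):
    """Young's modulus from point-load cantilever bending,
    E = F * L^3 / (3 * delta * I). The pentagonal wire cross-section is
    approximated by a circle of the same diameter, I = pi * D^4 / 64
    (a simplifying assumption, not the paper's numerical fit)."""
    second_moment = math.pi * diameter**4 / 64.0
    return force * length**3 / (3.0 * deflection * second_moment)

# Hypothetical inputs of the right order of magnitude: a 100 nm thick,
# 2 um long wire deflected by 50 nm under a 10 nN tip force.
E = youngs_modulus(force=10e-9, length=2e-6, deflection=50e-9,
                   diameter=100e-9)
```

These made-up numbers give E on the order of 100 GPa, consistent in magnitude with the ~90 GPa average reported above.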
Fei Yu
2016-01-01
A novel high-birefringence and nearly zero dispersion-flattened photonic crystal fiber (PCF) with elliptical defected core (E-DC) and equilateral pentagonal architecture is designed. By applying the full-vector finite element method (FEM), the characteristics of electric field distribution, birefringence, and chromatic dispersion of the proposed E-DC PCF are numerically investigated in detail. The simulation results reveal that the proposed PCF can realize high birefringence, ranging from 10^-3 to 10^-2 in order of magnitude, owing to the embedded elliptical air hole in the core center. However, the existence of the elliptical air hole gives rise to an extraordinary electric field distribution, where a V-shaped notch appears and the size of the V-shaped notch varies at different operating wavelengths. Also, the mode field diameter is estimated to be about 2 μm, which implies a small effective mode area and a high nonlinear coefficient. Furthermore, the investigation of the chromatic dispersion characteristic shows that the introduction of the elliptical air hole helps to control the chromatic dispersion to be negative or nearly zero flattened over a wide wavelength bandwidth.
Novel pentagonal silicon rings and nanowheels stabilized by flat pentacoordinate carbon(s).
Zdetsis, Aristides D
2011-03-07
It is predicted by accurate density functional and coupled-cluster theory that planar [Si(5)C](2-) and [Si(5)C](1-) rings can be stabilized by flat pentacoordinate carbon-silicon bonds. The energy difference of the [Si(5)C](2-) dianion from the lowest energy three-dimensional isomer is about 12.2 kcal/mol at the level of density functional theory using the Becke 3-parameter (exchange), Lee, Yang and Parr functional and triple-ζ doubly polarized basis sets. Stable composite [Si(5)C](2) structures are formed either as nanowheels with axial C-C bonds of 1.51 Å or as isoenergetic pentagonal graphitic-like layers with double the C-C distance (3.02 Å) and an almost double aromaticity index, based on nucleus-independent chemical shifts. Both of these structures are at least 12 kcal/mol lower in energy than the lowest energy Si(10)C(2) structure reported in the literature, but about 5 kcal/mol higher than the lowest energy structure found here.
Skarstrom, C.
1959-03-10
A centrifugal separator is described for separating gaseous mixtures where the temperature gradients both longitudinally and radially of the centrifuge may be controlled effectively to produce a maximum separation of the process gases flowing through. The invention provides for the balancing of increases and decreases in temperature in various zones of the centrifuge chamber as the result of compression and expansion, respectively, of process gases, and may be employed effectively both to neutralize harmful temperature gradients and to utilize beneficial temperature gradients within the centrifuge.
范天佑; 郭玉翠
1997-01-01
The mathematical theory of elasticity for planar pentagonal quasicrystals is developed and some analytic solutions for a class of mixed boundary-value problems (corresponding to a Griffith crack) of the theory are offered. An alternate procedure and a direct integral approach are proposed. Some analytical solutions are constructed and the stress and displacement fields of a Griffith crack in the quasicrystals are determined. A basis for further studying the mechanical behavior of the material related to planar defects is provided.
Uţă, M M; Cioloboc, D; King, R B
2012-09-13
One of the most significant recent developments (in 2009) is the discovery of the clusters M@Ge10(3-) (M = Fe, Co) in which the outer Ge10 polyhedron is a pentagonal prism rather than a deltahedral structure of the type predicted by the Wade-Mingos rules. Consistent with this experimental observation, density functional theory shows the lowest energy structures to be pentagonal prisms for the iron-centered clusters Fe@Ge10(z) in all nine charge states ranging from -5 to +3. This contrasts with the previously studied cobalt-centered germanium clusters Co@Ge10(z) for which the lowest energy structures are pentagonal prisms only for the electron richest systems where z ranges from -3 to -5. The C3v structures derived from the tetracapped trigonal prism found as lowest energy structures of the electron poorer Co@Ge10(z) (z = 0, -1, -2) systems are higher energy structures for the iron-centered germanium clusters Fe@Ge10(z) (z = 0, -1, -2). The strong energetic preference for pentagonal prismatic structures in the Fe@Ge10(z) clusters can be attributed to the need for the larger volume of the pentagonal prism relative to other 10-vertex closed polyhedra to accommodate the interstitial iron atom.
Ota, Norio
2015-01-01
Modeling a promising carrier of the astronomically observed polycyclic aromatic hydrocarbons (PAHs), infrared (IR) spectra of ionized molecules (C9H7)n+ were calculated based on density functional theory (DFT). In a previous study, it was found that void-induced coronene C23H12++ could reproduce the observed spectra from 3 to 15 micron; this molecule has two carbon pentagons connected with five hexagons. In this paper, we test the simplest model, one pentagon connected with one hexagon, which is the indene-like molecule (C9H7)n+ (n=0 to 4). DFT-based harmonic frequency analysis showed that the observed spectrum could be almost reproduced by a suitable sum of ionized (C9H7)n+ molecules. A typical example is C9H7++. Calculated peaks were 3.2, 7.4, 7.6, 8.4, and 12.7 micron, whereas the observed ones are 3.3, 7.6, 7.8, 8.6 and 12.7 micron. By a combination of different degrees of ionized molecules, we can expect to reproduce the total spectrum. For comparison, the hexagon-hexagon molecule naphthalene (C10H8)n+ was studied. Unfortu...
Tkachuk, Andriy V; Mar, Arthur
2010-08-14
In confirmation of its predicted existence in the Sr-Hg phase diagram, the mercury-rich intermetallic compound SrHg(8) has been prepared by reaction of the elements at 200 degrees C. Single-crystal X-ray diffraction analysis revealed that it adopts a new structure type (Pearson symbol oP72, space group Pnma, a = 13.328(1) A, b = 4.9128(5) A, c = 26.446(3) A). The Sr atoms are centred within two types of 18-vertex Hg polyhedra formed by augmenting pentagonal prisms with octagonal waists. The condensation of these Sr@Hg(18) clusters is associated with the formation of a complex anionic Hg-Hg bonding network, as supported by electronic structure calculations which reveal strong mixing of Hg 6s and 6p states in highly delocalized bands superimposed with a narrower 5d band below the Fermi level.
Ramirez-Solano, Maria
The appeal of the tiling is that all the tiles are conformally regular pentagons. But conformal maps are not allowable under finite local complexity, and therefore we cannot study it with the usual tiling theory. On the other hand, the tiling can be described completely by its combinatorial data, which rather automatically has finite local complexity. In this thesis we give a construction of the continuous and discrete hull just from the combinatorial data. For the discrete hull we construct a C*-algebra and a measure. Since this tiling possesses no natural R^2 action by translation, there is no a priori reason to expect that the K-theory of the C*-algebra of the tiling is the same as the K-theory or cohomology of the hull. So it would be very interesting to know the outcome. For the continuous hull, we compute its K-theory and an absolutely continuous invariant measure...
Cheng, D.; Hou, M.
2010-04-01
Classical molecular dynamics and Metropolis Monte Carlo simulations were carried out to investigate the thermal stability and melting behaviors of free-standing Pd-Pt bimetallic nanowires (NWs) with pentagonal multi-shell-type (PMS-type) structure over the whole composition range. Equilibrium configurations at 100 K are predicted in the semi-grand canonical ensemble. Pd-Pt PMS-type NWs are stable with a multishell structure of alternating Pd and Pt compositions, with Pd segregating systematically to the surface. On thermal heating, an interesting composition-dependent structural transformation from PMS-type to face-centred cubic (FCC), overcoming a high energy barrier, is observed for Pd-Pt bimetallic NWs before melting. Consequently, the system energy is decreased. The FCC structure is found to be more stable than the PMS-type over the whole range of composition. The melting of Pd-Pt bimetallic NWs is also studied. It is found to start at the edges, then propagate over the whole surface, and finally into the interior. It occurs in a composition-dependent range of temperature.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax-rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimina...
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Michael Saur
Nicotinic acetylcholine receptors (nAChR) play important neurophysiological roles and are of considerable medical relevance. They have been studied extensively, greatly facilitated by the gastropod acetylcholine-binding proteins (AChBP), which represent soluble structural and functional homologues of the ligand-binding domain of nAChR. All these proteins are ring-like pentamers. Here we report that AChBP exists in the hemolymph of the planorbid snail Biomphalaria glabrata (vector of the schistosomiasis parasite) as a regular pentagonal dodecahedron, 22 nm in diameter (12 pentamers, 60 active sites). We sequenced and recombinantly expressed two ∼25 kDa polypeptides (BgAChBP1 and BgAChBP2) with a specific active site, N-glycan site and disulfide bridge variation. We also provide the exon/intron structures. Recombinant BgAChBP1 formed pentamers and dodecahedra; recombinant BgAChBP2 formed pentamers and probably disulfide-bridged di-pentamers, but not dodecahedra. Three-dimensional electron cryo-microscopy (3D-EM) yielded a 3D reconstruction of the dodecahedron with a resolution of 6 Å. Homology models of the pentamers docked to the 6 Å structure revealed opportunities for chemical bonding at the inter-pentamer interfaces. Definition of the ligand-binding pocket and the gating C-loop in the 6 Å structure suggests that 3D-EM might lead to the identification of functional states in the BgAChBP dodecahedron.
2015-01-01
In the thermal infrared (TIR) waveband, solving for the target emissivity spectrum and temperature leads to an ill-posed problem in which the number of unknown parameters is larger than that of the available measurements. Generally, the approaches developed for solving this kind of problem are known jointly as TES (temperature and emissivity separation) algorithms. As the name shows, a TES algorithm is dedicated to separating the target temperature and emissivity in the calculation procedure. In this paper, a novel method called the new MaxEnt (maximum entropy) TES algorithm is proposed, which is considered a promotion of the MaxEnt TES algorithm proposed by Barducci. Maximum entropy estimation is utilized as the basic framework in the two preceding algorithms, so that both algorithms can perform temperature and emissivity separation independent of experiential information derived from special databases. As a result, the two algorithms can be applied to solve for the temperature and emissivity spectrum of targets which are absolutely unknown to us. However, what makes the two algorithms different is that the alpha spectrum derived by the ADE (alpha derived emissivity) method is used as prior information in the new MaxEnt TES algorithm. Based on the Wien approximation, the ADE method is dedicated to the calculation of the alpha spectrum, which has a distribution similar to the true emissivity spectrum. Based on the preceding promotion, the new MaxEnt TES algorithm keeps a simpler mathematical formalism. Without any doubt, the new MaxEnt TES algorithm provides a faster computation for large volumes of data (i.e. hyperspectral images of the Earth). Some numerical simulations have been performed; the data and results show that the maximum RMSE of the emissivity estimation is 0.017, and the maximum absolute error of the temperature estimation is 0.62 K. Added with Gaussian white noise in which the signal to noise ratio is measured
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
2007-09-01
simple function like unbuttoning each section of my blouse to get some relief from the heat. We would later discover that the blast had shifted the... carte blanche, purchasing agents culled from county offices secured a wide range of supplies: fencing, boots, bottles, hoses, airpacks, cranes, gloves
Bodlaender, H.L.; Koster, A.M.C.A.
2003-01-01
A set of vertices S ⊆ V is called a safe separator for treewidth if S is a separator of G, and the treewidth of G equals the maximum, over all connected components W of G - S, of the treewidth of the graph obtained by making S a clique in the subgraph of G induced by W ∪ S. We show that such safe s
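The construction in the definition can be made concrete: for each connected component W of G − S, form the subgraph induced by W ∪ S and complete S into a clique. A hedged pure-Python sketch (graph as a dict of neighbour sets; it assumes S is a subset of the vertices and does not check that S actually separates G):

```python
def safe_separator_parts(adj, S):
    """For graph `adj` (dict: vertex -> set of neighbours) and a candidate
    separator S, return one subgraph per connected component W of G - S:
    the subgraph induced by W | S, with S completed into a clique, as in
    the definition above. Sketch only; S's separator property is assumed."""
    S = set(S)
    rest = set(adj) - S
    comps, seen = [], set()
    for v in rest:                      # DFS over G - S to find components
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(w for w in adj[u] if w in rest and w not in seen)
        comps.append(comp)
    parts = []
    for W in comps:
        verts = W | S
        sub = {u: (adj[u] & verts) - {u} for u in verts}  # induced subgraph
        for a in S:                     # make S a clique
            sub[a] |= S - {a}
        parts.append(sub)
    return parts

# Example: the path 1-2-3-4-5 with separator {3} splits into two parts.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
parts = safe_separator_parts(adj, {3})
```

On the path example each part is a three-vertex path containing the separator vertex, matching the intuition that treewidth can then be computed per part.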
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
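The Mean Energy Model mentioned above has a well-known solution family: entropy maximization under a mean-energy constraint yields Gibbs distributions p_i ∝ exp(−λE_i), with the multiplier λ fixed by the constraint. A sketch that finds λ by bisection, relying on the standard fact that the mean energy is monotone in λ (the numbers are illustrative only):

```python
import math

def maxent_gibbs(energies, mean_target, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over finite states with a prescribed
    mean energy: p_i proportional to exp(-lam * E_i). The Lagrange
    multiplier lam is found by bisection, using the fact that the mean
    energy is monotonically decreasing in lam."""
    def mean_for(lam):
        w = [math.exp(-lam * e) for e in energies]
        Z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / Z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_target:  # mean too high -> need larger lam
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * e) for e in energies]
    Z = sum(w)
    return [wi / Z for wi in w]

# Illustrative cases on three states with energies 0, 1, 2:
p_uniform = maxent_gibbs([0.0, 1.0, 2.0], 1.0)  # mean 1 -> lam = 0 -> uniform
p_cold = maxent_gibbs([0.0, 1.0, 2.0], 0.5)     # lower mean favours low energies
```

The uniform outcome for the symmetric case is exactly the classic MaxEnt answer when the constraint carries no information.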
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
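The robustness argument above hinges on what correntropy measures: a Gaussian-kernel similarity that is bounded, so a single outlying label cannot dominate the objective the way it does under squared loss. A minimal sketch of the empirical correntropy (illustrative numbers, not the paper's experiments):

```python
import math

def correntropy(y_pred, y_true, sigma=1.0):
    """Empirical correntropy: mean Gaussian-kernel similarity between
    predictions and labels. Each term lies in (0, 1], so a large error
    saturates instead of dominating -- the source of robustness to label
    noise exploited by the MCC framework described above."""
    return sum(math.exp(-(p - t) ** 2 / (2 * sigma ** 2))
               for p, t in zip(y_pred, y_true)) / len(y_true)

clean = correntropy([1.0, -1.0, 1.0], [1.0, -1.0, 1.0])
# One grossly mislabelled sample: squared loss would explode, while
# correntropy loses at most 1/n of its maximum value.
noisy = correntropy([1.0, -1.0, 1.0], [1.0, -1.0, -100.0])
```

Maximizing this quantity over predictor parameters (with a regularizer, as in the abstract) is what distinguishes MCC learning from minimizing a transitional loss.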
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
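For context, the "near maximum likelihood" detector approximates exhaustive maximum-likelihood sequence detection, which for a short block can be written down directly. A hedged brute-force sketch for BPSK over a two-tap ISI channel (the channel taps and symbol sequence are made-up illustrative values, not from the paper):

```python
from itertools import product

def ml_detect(received, h):
    """Exhaustive ML sequence detection for BPSK symbols (+/-1) over an
    ISI channel with impulse response h: pick the symbol sequence whose
    noiseless channel output is closest (squared Euclidean distance) to
    the received samples. Cost is exponential in block length, which is
    why practical detectors only approximate this benchmark."""
    n = len(received)

    def output(sym):
        # Convolution of the candidate symbols with h, truncated to n.
        return [sum(h[k] * sym[i - k] for k in range(len(h)) if i - k >= 0)
                for i in range(n)]

    best, best_d = None, float('inf')
    for sym in product((-1, 1), repeat=n):
        d = sum((r - o) ** 2 for r, o in zip(received, output(sym)))
        if d < best_d:
            best, best_d = list(sym), d
    return best

# Two-tap channel with mild ISI and the noiseless output of a known sequence.
h = [1.0, 0.5]
tx = [1, -1, -1, 1]
rx = [1.0, -0.5, -1.5, 0.5]  # convolution of tx with h, truncated to len(tx)
```

With small additive noise on `rx` the detector still recovers `tx`, which is the behaviour the equalized near-ML detector trades off against complexity.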
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Eugenia V. Peresypkina
2016-08-01
The ligand exchange in (n-Bu4N)2[OsIVCl6] (n-Bu4N = tetra-n-butylammonium) leads to the formation of the osmium(IV) heptacyanide, the first fully inorganic homoleptic complex of heptacoordinated osmium. The single-crystal X-ray diffraction (SC-XRD) study reveals the pentagonal-bipyramidal molecular structure of the [Os(CN)7]3− anion. The latter, being a diamagnetic analogue of the highly anisotropic paramagnetic synthon [ReIV(CN)7]3−, can be used for the synthesis of model heterometallic coordination compounds for the detailed study and simulation of the magnetic properties of low-dimensional molecular nanomagnets involving 5d metal heptacyanides.
Reynolds, John C.
2002-01-01
…expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a "separating conjunction" that asserts that its sub-formulas hold for disjoint parts of the heap, and a closely related "separating implication". Coupled…, dynamically allocated arrays, and recursive procedures. We will also discuss promising future directions…
Learning Isometric Separation Maps
Vasiloglou, Nikolaos; Anderson, David V
2008-01-01
Maximum Variance Unfolding (MVU) and its variants have been very successful in embedding data-manifolds in lower dimensionality spaces, often revealing the true intrinsic dimensions. In this paper we show how to also incorporate supervised class information into an MVU-like method without breaking its convexity. We call this method the Isometric Separation Map and we show that the resulting kernel matrix can be used for a binary/multiclass Support Vector Machine in a semi-supervised (transductive) framework. We also show that the method always finds a kernel matrix that linearly separates the training data exactly without projecting them in infinite dimensional spaces.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been widely studied. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
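The kernel trick the abstract above relies on can be sketched directly: keep each cluster center implicitly as a convex combination of the mapped points, so every feature-space distance is computable from the kernel matrix alone, and apply the maximum-entropy (softmax) membership update. This is a hypothetical sketch of the idea, not the authors' KMEC code; the kernel, temperature, and initialization are assumptions.

```python
import numpy as np

def kernel_mec(K, k=2, beta=0.5, iters=50, seed=0):
    """Maximum Entropy Clustering in the kernel-induced feature space.
    K: (n, n) Mercer kernel matrix. Centers are convex combinations of the
    mapped points, so distances need only K (the 'kernel trick')."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    U = rng.dirichlet(np.ones(k), size=n)         # soft memberships, rows sum to 1
    for _ in range(iters):
        W = U / U.sum(axis=0)                     # (n, k) center coefficients
        # squared distance from phi(x_i) to center c_j in feature space:
        # K_ii - 2 sum_l W_lj K_il + c_j' K c_j
        d2 = (np.diag(K)[:, None]
              - 2 * K @ W
              + np.einsum('lj,lm,mj->j', W, K, W))
        logits = -d2 / beta
        logits -= logits.max(axis=1, keepdims=True)
        U = np.exp(logits)
        U /= U.sum(axis=1, keepdims=True)         # maximum-entropy (softmax) update
    return U
```

The softmax update is what distinguishes MEC from hard k-means: memberships stay probabilistic, with sharpness controlled by beta.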
2016-01-01
Footage of the 70 degree ISOLDE GPS separator magnet MAG70 as well as the switchyard for the Central Mass and GLM (GPS Low Mass) and GHM (GPS High Mass) beamlines in the GPS separator zone. In the GPS20 vacuum sector, equipment such as the long GPS scanner 482 / 483 unit, Faraday cup FC490, vacuum valves, wiregrid pistons WG210 and WG475, and radiation monitors can also be seen. The RILIS laser guidance and trajectory can be seen, along with the GPS main beamgate switch box, the actual GLM, GHM and Central Beamline beamgates in the beamlines, and the first electrostatic quadrupoles for the GPS lines. Close-up of the GHM deflector plates motor and connections and the inspection glass at the GHM side of the switchyard.
2016-01-01
Footage of the 90 and 60 degree ISOLDE HRS separator magnets in the HRS separator zone. In the two vacuum sectors HRS20 and HRS30, equipment such as the HRS slits SL240, the HRS Faraday cup FC300 and wiregrid WG210 can be spotted. Vacuum valves, turbo pumps, beamlines, quadrupoles, water and compressed air connections, and DC and signal cabling can be seen throughout the video. The HRS main and user beamgates in the beamline between MAG90 and MAG60 and their switchboxes, as well as all vacuum bellows and flanges, are shown. Instrumentation such as the HRS scanner unit 482 / 483 and the HRS WG470 wiregrid and slits piston can be seen. The different quadrupoles and supports are shown, as well as the RILIS guidance tubes and installation at the magnets and the different radiation monitors.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Nylon separators. [thermal degradation
Lim, H. S.
1977-01-01
A nylon separator was placed in a flooded condition in KOH solution and heated at various high temperatures ranging from 60 C to 110 C. The weight decrease was measured, and the molecular weight and decomposition products were analyzed to determine: (1) the effect of KOH concentration on the hydrolysis rate; (2) the effect of KOH concentration on nylon degradation; (3) the activation energy at different KOH concentrations; and (4) the effect of oxygen on nylon degradation. The nylon hydrolysis rate is shown to increase as KOH concentration is decreased from 34%, giving a maximum rate at about 16%. Separator hydrolysis is confirmed by the molecular weight decrease with battery age, and the reaction of nylon with molecular oxygen is probably negligible compared to hydrolysis. The extrapolated rate from the high-temperature experiments correlates well with experimental values at 35 C.
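The final step of the abstract above, extrapolating degradation rates measured at 60-110 C down to service temperature, is a standard Arrhenius fit: ln k = ln A - Ea/(RT), linear in 1/T. The sketch below uses made-up rate constants, not the paper's data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_extrapolate(T_C, k, T_target_C):
    """Fit ln k = ln A - Ea/(R T) to rates measured at high temperatures,
    then extrapolate the rate to a lower service temperature.
    Returns (Ea in J/mol, extrapolated rate at T_target_C)."""
    T = np.asarray(T_C) + 273.15
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)  # slope = -Ea/R
    Ea = -slope * R
    k_target = np.exp(intercept + slope / (T_target_C + 273.15))
    return Ea, k_target
```

Because the fit is linear in 1/T, a handful of high-temperature points determines both the activation energy and the low-temperature rate, which is exactly the correlation the abstract reports at 35 C.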
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Kuang, Xiaojun; Pan, Fengjuan; Cao, Jiang; Liang, Chaolun; Suchomel, Matthew R; Porcher, Florence; Allix, Mathieu
2013-11-18
New insight into the defect chemistry of the tetragonal tungsten bronze (TTB) Ba(0.5-x)TaO(3-x) is established here, which is shown to adapt to a continuous and extensive range of both cationic and anionic defect stoichiometries. The highly nonstoichiometric TTB Ba(0.5-x)TaO(3-x) (x = 0.25-0.325) compositions are stabilized via the interpolation of Ba(2+) cations and (TaO)(3+) groups into pentagonal tunnels, forming distinct Ba chains and alternating Ta-O rows in the pentagonal tunnels along the c axis. The slightly nonstoichiometric Ba(0.5-x)TaO(3-x) (x = 0-0.1) compositions incorporate framework oxygen and tunnel cation deficiencies in the TTB structure. These two mechanisms result in phase separation within the 0.1 < x < 0.25 nonstoichiometric range, yielding two closely related (TaO)(3+)-containing and (TaO)(3+)-free TTB phases. The highly nonstoichiometric (TaO)(3+)-containing phase exhibits Ba(2+) cationic migration. The incorporation of (TaO)(3+) units into the pentagonal tunnels and the local relaxation of the octahedral framework around the (TaO)(3+) units are revealed by diffraction data analysis and are shown to affect the transport and polarization properties of these compositions.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension Fmax = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as of duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
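The kind of distribution the authors derive can be illustrated in the simplest setting: if events above a threshold magnitude follow a Poisson process with Gutenberg-Richter magnitudes, the probability that the maximum magnitude in a future interval exceeds m is a closed form, and the sensitivity to an assumed truncation point M is already visible there. All parameter values below are illustrative assumptions, not the paper's.

```python
import math

def prob_exceed(m, m0=5.0, rate=2.0, b=1.0, T=30.0, m_max=None):
    """Probability that the maximum magnitude observed in T years exceeds m,
    assuming magnitudes above m0 occur as a Poisson process with the given
    annual rate and follow a (possibly truncated) Gutenberg-Richter law."""
    if m < m0:
        return 1.0 - math.exp(-rate * T)
    # survival function of a single event's magnitude
    sf = 10.0 ** (-b * (m - m0))
    if m_max is not None:                       # truncated G-R: hard cap at m_max
        if m >= m_max:
            return 0.0
        tail = 10.0 ** (-b * (m_max - m0))
        sf = (sf - tail) / (1.0 - tail)
    return 1.0 - math.exp(-rate * T * sf)
```

Comparing the truncated and untruncated curves near m_max shows why tests of an M estimate hinge on rare tail events: the two models differ only where data are scarcest.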
A Pentagon of Creative Economy
Rasa Levickaitė
2011-04-01
The article presents five concepts of the creative economy, based on creative economy theories and interpretations developed by five authors. John Howkins' interpretation is based on the theory that the creative economy consists of fifteen creative industries (classified by the author). Creativity and economics are nothing new; what is new is the nature and extent of the relationship between them. A broad interpretation of creativity led to the theory of the creative class developed by Richard Florida. The creative class is a group of professionals, scientists and artists whose existence creates economic, social and cultural dynamism, especially in urban areas. Richard Caves characterizes creative industries by seven economic properties and states that creative industries themselves are not unique, but that the sectors of the creative industries driven by creativity generate new approaches to business processes and new product supply and demand, in terms of both economic and socio-economic development indicators of countries. Charles Landry has developed the concept of the creative city. The author argues that cities have one single most important resource: their people. Creativity substitutes for location, natural resources and access to the market, becoming a key engine of the dynamic growth of the city. The term defines a city where varied cultural activities are an integral part of its economic and social functioning. The theory developed by John Hartley is based on the concept of creative identities. The main factors behind the rapid worldwide growth of the creative industries are connected to both technology and the economy. The digital revolution, and the economic environments in which it took place, caused technological and communicational changes that merged to create the conditions for the development of the creative economy.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randi\'c. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
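The quantity being maximized in the abstract above is computable directly from the graph spectrum: for a connected graph on n vertices, Kf(G) = n · Σ 1/μ over the nonzero Laplacian eigenvalues μ. A small sketch:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected graph: Kf(G) = n * sum(1/mu) over the
    nonzero eigenvalues mu of the graph Laplacian, which equals the sum of
    resistance distances over all vertex pairs."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
    mu = np.linalg.eigvalsh(L)                # ascending; first is ~0
    nonzero = mu[mu > 1e-9]
    assert len(nonzero) == n - 1, "graph must be connected"
    return n * float(np.sum(1.0 / nonzero))
```

For the triangle C3, each pair has resistance distance 2/3 (a 1-ohm edge in parallel with a 2-ohm path), so Kf = 3 · 2/3 = 2, matching the spectral formula.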
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus… on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders…
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
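The regularizer described above can be made concrete with a plug-in estimate: compute I(Y; Ŷ) from the empirical joint distribution of true labels and classification responses. The paper optimizes a differentiable entropy-estimation surrogate; the discrete estimator below is only for intuition about what is being maximized.

```python
import numpy as np

def empirical_mutual_information(y_true, y_pred):
    """Estimate I(Y; Y_hat) in nats from the empirical joint distribution of
    true labels and classification responses. Higher values mean the response
    reduces more of the uncertainty about the true label."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes_t = np.unique(y_true)
    classes_p = np.unique(y_pred)
    joint = np.zeros((len(classes_t), len(classes_p)))
    for i, t in enumerate(classes_t):
        for j, p in enumerate(classes_p):
            joint[i, j] = np.mean((y_true == t) & (y_pred == p))
    pt = joint.sum(axis=1, keepdims=True)          # marginal of true labels
    pp = joint.sum(axis=0, keepdims=True)          # marginal of responses
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (pt @ pp)[mask])))
```

A perfect predictor on balanced binary labels attains I = ln 2 (one bit), while a constant predictor attains 0, which is why maximizing this quantity pushes the classifier away from uninformative responses.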
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Bayesian Source Separation and Localization
Knuth, K H
1998-01-01
The problem of mixed signals occurs in many different contexts; one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work, it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes...
Facile biosynthesis, separation and conjugation of gold nanoparticles to doxorubicin
Kumar, S. Anil; Peter, Yves-Alain; Nadeau, Jay L.
2008-12-01
Particle shape and size determine the physicochemical and optoelectronic properties of nanoscale materials, including optical absorption, fluorescence, and electric and magnetic moments. It is thus desirable to be able to synthesize and separate various particle shapes and sizes. Biosynthesis using microorganisms has emerged as a more ecologically friendly, simpler, and more reproducible alternative to chemical synthesis of metal and semiconductor nanoparticles, allowing the generation of rare forms such as triangles. Here we show that the plant pathogenic fungus Helminthosporum solani, when incubated with an aqueous solution of chloroaurate ions, produces a diverse mixture of extracellular gold nanocrystals in the size range from 2 to 70 nm. A plurality are polydisperse spheres, but a significant number are homogeneously sized rods, triangles, pentagons, pyramids, and stars. The particles can be separated according to their size and shape by using a sucrose density gradient in a tabletop microcentrifuge, a novel and facile approach to nanocrystal purification. Conjugation to biomolecules can be performed without further processing, as illustrated with the smallest fraction of particles which were conjugated to the anti-cancer drug doxorubicin (Dox) and taken up readily into HEK293 cells. The cytotoxicity of the conjugates was comparable to that of an equivalent concentration of Dox.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
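The flavor of the chain can be sketched with the standard single-edge monomer-dimer dynamics: pick a uniform edge, add it with probability λ/(1+λ) if both endpoints are free, remove it with probability 1/(1+λ) if it is in the matching. This samples matchings from the Gibbs distribution π(M) ∝ λ^|M|, which concentrates on large matchings for large λ. The sketch is illustrative only; the paper's algorithm uses the chain with much more machinery to certify a maximum matching within the stated time bound.

```python
import random

def glauber_matching(edges, n_vertices, lam=4.0, steps=20000, seed=1):
    """Run single-edge Glauber dynamics on matchings with fugacity lam,
    sampling from pi(M) ~ lam^|M|, and return the largest matching seen."""
    rng = random.Random(seed)
    matching = set()
    matched = [False] * n_vertices
    best = set()
    for _ in range(steps):
        u, v = edges[rng.randrange(len(edges))]
        if (u, v) in matching:
            if rng.random() < 1.0 / (1.0 + lam):   # remove a matched edge
                matching.discard((u, v))
                matched[u] = matched[v] = False
        elif not matched[u] and not matched[v]:
            if rng.random() < lam / (1.0 + lam):   # add a free edge
                matching.add((u, v))
                matched[u] = matched[v] = True
        if len(matching) > len(best):
            best = set(matching)
    return best
```

On a path with four vertices the chain quickly finds the unique maximum matching of size 2; turning this heuristic into a worst-case guarantee is the content of the paper.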
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
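The core of the MLE statistic described above is a Poisson likelihood ratio between the background-only and background-plus-source hypotheses. Stripped of the PSF convolution and spatial fitting that Sherpa performs, it reduces to the comparison below; the counts and rates are invented for illustration.

```python
import math

def log_likelihood_ratio(counts, bkg_rate, src_rate):
    """Poisson log-likelihood ratio, 2*(lnL(bkg+src) - lnL(bkg)), for
    per-pixel counts in a candidate source region. Positive values favor
    the presence of a source; this omits the PSF and spatial model fitting
    that the actual CSC tool performs with Sherpa."""
    def loglike(data, model):
        # Poisson log-likelihood up to the data-only log(n!) terms,
        # which cancel in the ratio
        return sum(n * math.log(m) - m for n, m in zip(data, model))
    bkg_only = [bkg_rate] * len(counts)
    bkg_src = [bkg_rate + s for s in src_rate]
    return 2.0 * (loglike(counts, bkg_src) - loglike(counts, bkg_only))
```

Counts well above the background rate drive the statistic positive, while counts consistent with background alone drive it negative, which is how candidate detections are ranked as likely real or spurious.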
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
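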
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, buffer space of a minimum of 346 bits and a maximum of 433 bits is sufficient to hold the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
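The quantity being bounded is the total AC bit count of a block: for each run/level pair, the Huffman code bits for the (run, size) symbol plus the amplitude bits, plus an end-of-block marker. A toy tally of that count, using a HYPOTHETICAL code-length table (a real encoder reads lengths from the JPEG DHT segment):

```python
# Hypothetical (run, size) -> Huffman code length in bits; (0, 0) is EOB.
# These lengths are illustrative only, not the JPEG default table.
HYPOTHETICAL_AC_CODE_BITS = {
    (0, 1): 2, (0, 2): 2, (1, 1): 4, (2, 1): 5, (0, 0): 4,
}

def amplitude_size(level):
    """Bit category of a nonzero AC level (number of amplitude bits)."""
    return abs(level).bit_length()

def block_ac_bits(run_level_pairs):
    """Total bits = Huffman code bits + amplitude bits, plus the EOB code.

    Raises KeyError for (run, size) symbols missing from the toy table.
    """
    total = 0
    for run, level in run_level_pairs:
        size = amplitude_size(level)
        total += HYPOTHETICAL_AC_CODE_BITS[(run, size)] + size
    return total + HYPOTHETICAL_AC_CODE_BITS[(0, 0)]  # end-of-block

bits = block_ac_bits([(0, 3), (1, -1), (2, 1)])
```

The paper's search maximizes this count over all run-level sequences that can arise from valid DCT blocks, which is what the branch-and-bound prunes.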
Fictional separation logic
Jensen, Jonas Buhrkal; Birkedal, Lars
2012-01-01
... separation means physical separation. In this paper, we introduce fictional separation logic, which includes more general forms of fictional separating conjunctions P * Q, where "*" does not require physical separation, but may also be used in situations where the memory resources described by P and Q...
Separation Anxiety (For Parents)
... both of you get through it. About Separation Anxiety: Babies adapt pretty well to other caregivers. Parents ...
Xantheas, Sotiris S.
2012-08-01
We rely on a hierarchy of methods to identify the low-lying isomers for the pentagonal dodecahedron (H2O)20 and the H3O+(H2O)20 clusters. Initial screening of isomers was performed with classical potentials [TIP4P, TTM2-F, TTM2.1-F for (H2O)20 and ASP for H3O+(H2O)20] and the networks obtained with those potentials were subsequently reoptimized at the DFT (B3LYP) and MP2 levels of theory. For the pentagonal dodecahedron (H2O)20 it was found that DFT (B3LYP) and MP2 produced the same global minimum. However, this was not the case for the H3O+(H2O)20 cluster, for which MP2 produced a different network for the global minimum when compared to DFT (B3LYP). All low-lying minima of H3O+(H2O)20 correspond to hydrogen bonding networks having 9 "free" OH bonds and the hydronium ion on the surface of the cluster. The fact that DFT (B3LYP) and MP2 produce different results, together with issues related to the use of a smaller basis set, explains the discrepancy between the current results and the structure previously suggested [Science 304, 1137 (2004)] for the global minimum of the H3O+(H2O)20 cluster. Additionally, the IR spectra of the MP2 global minimum are closer to the experimentally measured ones than the spectra of the previously suggested DFT global minimum. The latter exhibit additional bands in the most red-shifted region of the OH stretching vibrations (corresponding to the "fingerprint" of the underlying hydrogen bonding network), which are absent from both the experimental spectra and the spectra of the new structure suggested for the global minimum of this cluster.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
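The Toeplitz system for the prediction-error filter is what the Levinson recursion solves; the reflection coefficient computed at each step having magnitude below 1 is exactly the stability property the abstract mentions. A minimal stdlib sketch of the Levinson-Durbin recursion (the autocorrelation input here is a made-up AR(1) example, not seismogram data):

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter.

    r: autocorrelation sequence r[0..order]. Returns (a, e) where a holds
    the forward prediction coefficients and e is the final prediction error.
    """
    a = [0.0] * order
    e = r[0]
    for m in range(order):
        # reflection coefficient; |k| < 1 keeps the recursion stable
        k = (r[m + 1] - sum(a[j] * r[m - j] for j in range(m))) / e
        new_a = a[:]
        new_a[m] = k
        for j in range(m):
            new_a[j] = a[j] - k * a[m - 1 - j]
        a = new_a
        e *= (1.0 - k * k)
    return a, e

# AR(1) process x_n = 0.5*x_{n-1} + w_n has autocorrelation [1, 0.5, 0.25]
coeffs, err = levinson_durbin([1.0, 0.5, 0.25], 2)
```

For this input the recursion recovers the generating coefficient 0.5 (second coefficient zero) with prediction error 0.75, which is the variance not explained by the past sample.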
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
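Setting dP/dV = 0 for P = V*I(V) is the differentiation step described above. A sketch under an assumed single-diode panel model with illustrative parameter values (photocurrent, saturation current, and thermal voltage below are hypothetical, not taken from the paper):

```python
import math

# Hypothetical single-diode panel parameters (illustrative values)
I_PH = 5.0   # photocurrent, A
I_0 = 1e-9   # diode saturation current, A
V_T = 1.5    # effective thermal voltage for the whole panel, V

def current(v):
    return I_PH - I_0 * (math.exp(v / V_T) - 1.0)

def power(v):
    return v * current(v)

def dP_dV(v):
    # analytic derivative of P = V*I(V); the maximum is where this is zero
    return current(v) - v * (I_0 / V_T) * math.exp(v / V_T)

def voltage_at_max_power(lo=0.0, hi=40.0, tol=1e-9):
    """Bisection on dP/dV, which falls from positive to negative."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dP_dV(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v_mp = voltage_at_max_power()
p_max = power(v_mp)
```

Bisection is used because dP/dV for this model crosses zero exactly once between short circuit and open circuit; any root finder would do.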
Controlling Separation in Turbomachines
Evans, Simon; Himmel, Christoph; Power, Bronwyn; Wakelam, Christian; Xu, Liping; Hynes, Tom; Hodson, Howard
2010-01-01
Four examples of flow control: 1) Passive control of LP turbine blades (Laminar separation control). 2) Aspiration of a conventional axial compressor blade (Turbulent separation control). 3) Compressor blade designed for aspiration (Turbulent separation control). 4) Control of intakes in crosswinds (Turbulent separation control).
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
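The algorithm above is built on maximum dynamic flow computations. As a building block, a standard static maximum flow (Edmonds-Karp, i.e. shortest augmenting paths by BFS) can be sketched as follows; this is the textbook primitive, not the paper's dynamic-flow algorithm, and the example network is made up:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacity graph."""
    # residual capacities, including zero-capacity reverse edges
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(capacity):
        for v in capacity[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        # bottleneck along the path, then augment
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Toy network: by max-flow/min-cut duality the answer equals the min cut
network = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
flow_value = max_flow(network, 's', 't')
```

Max-flow/min-cut duality is also what connects the IMDF problem to the constrained minimum dynamic cut problem mentioned in the abstract.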
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is changed and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of the CCs used in the design of an SFCL can be determined.
Jan Werner; Eva Maria Griebeler
2014-01-01
We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which...
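The analysis above is a linear regression of log-transformed maximum growth rate on log-transformed body mass, i.e. fitting the exponent of an allometric power law. A stdlib sketch on synthetic data that obeys rate = 2 * mass^0.75 exactly (the data and the 0.75 exponent are illustrative, not the paper's estimates):

```python
import math

def loglog_fit(mass, rate):
    """Least-squares fit of log10(rate) = a + b*log10(mass)."""
    xs = [math.log10(m) for m in mass]
    ys = [math.log10(r) for r in rate]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic allometric data obeying rate = 2 * mass^0.75 exactly
mass = [1.0, 10.0, 100.0, 1000.0]
rate = [2.0 * m ** 0.75 for m in mass]
a, b = loglog_fit(mass, rate)
```

On noiseless power-law data the fitted slope recovers the exponent exactly, which is why taxa with different regression lines (endotherms vs. ectotherms) separate cleanly in log-log space.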
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies.
Blasco, Francisco Lazaro
2011-01-01
A novel fountain coding scheme has been introduced. The scheme consists of a parallel concatenation of an MDS block code with an LRFC code, both constructed over the same field, F_q. The performance of the concatenated fountain coding scheme has been analyzed through derivation of tight bounds on the probability of decoding failure as a function of the overhead. It has been shown that the concatenated scheme performs as well as LRFC codes in channels characterized by high erasure probabilities, while providing failure probabilities lower by several orders of magnitude at moderate/low erasure probabilities.
Separation anxiety in children
... page: //medlineplus.gov/ency/article/001542.htm Separation anxiety in children is a developmental stage in which ...
Ionene membrane battery separator
Moacanin, J.; Tom, H. Y.
1969-01-01
Ionic transport characteristics of ionenes, insoluble membranes from soluble polyelectrolyte compositions, are studied for possible application in a battery separator. Effectiveness of the thin film of separator membrane essentially determines battery lifetime.
Nath, Pulak; Twary, Scott N.
2016-04-26
Described herein are methods and systems for harvesting, collecting, separating and/or dewatering algae using iron based salts combined with a magnetic field gradient to separate algae from an aqueous solution.
Separation and confirmation of showers
Neslušan, L.; Hajduková, M.
2017-01-01
Aims: Using IAU MDC photographic, IAU MDC CAMS video, SonotaCo video, and EDMOND video databases, we aim to separate all provable annual meteor showers from each of these databases. We intend to reveal the problems inherent in this procedure and answer the question whether the databases are complete and the methods of separation used are reliable. We aim to evaluate the statistical significance of each separated shower. In this respect, we intend to give a list of reliably separated showers rather than a list of the maximum possible number of showers. Methods: To separate the showers, we simultaneously used two methods. The use of two methods enables us to compare their results, and this can indicate the reliability of the methods. To evaluate the statistical significance, we suggest a new method based on the ideas of the break-point method. Results: We give a compilation of the showers from all four databases using both methods. Using the first (second) method, we separated 107 (133) showers, which are in at least one of the databases used. These relatively low numbers are a consequence of discarding any candidate shower with a poor statistical significance. Most of the separated showers were identified as meteor showers from the IAU MDC list of all showers. Many of them were identified as several of the showers in the list. This proves that many showers have been named multiple times with different names. Conclusions: At present, a prevailing share of existing annual showers can be found in the data and confirmed when we use a combination of results from large databases. However, to gain a complete list of showers, we need more-complete meteor databases than the most extensive databases currently are. We also still need a more sophisticated method to separate showers and evaluate their statistical significance. Tables A.1 and A.2 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc
Van Kooy, L.; Mooij, M.; Rem, P.
2004-01-01
Separations by density, such as the separation of non-ferrous scrap into light and heavy alloys, are often realized by means of heavy media. In principle, kinetic gravity separations in water can be faster and cheaper, because they do not rely on suspensions or salt solutions of which the density
Hierarchical Maximum Margin Learning for Multi-Class Classification
Yang, Jian-Bo
2012-01-01
Due to the myriad of classes, designing accurate and efficient classifiers becomes very challenging for multi-class classification. Recent research has shown that class structure learning can greatly facilitate multi-class learning. In this paper, we propose a novel method to learn the class structure for multi-class classification problems. The class structure is assumed to be a binary hierarchical tree. To learn such a tree, we propose a maximum separating margin method to determine the child nodes of any internal node. The proposed method ensures that the two class groups represented by any two sibling nodes are most separable. In the experiments, we evaluate the accuracy and efficiency of the proposed method over other multi-class classification methods on real world large-scale problems. The results show that the proposed method outperforms benchmark methods in terms of accuracy for most datasets and performs comparably with other class structure learning methods in terms of efficiency for all datasets.
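Each internal node of such a tree partitions the classes at that node into two groups that are as separable as possible. As a simplified stand-in for the paper's maximum-separating-margin criterion, one can split the class centroids with 2-means; this is an illustrative proxy (the toy centroids below are hypothetical), not the proposed method:

```python
import math
import random

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def two_means(points, iters=50, seed=0):
    """Split points (class centroids) into two groups with 2-means:
    a simplified stand-in for a maximum-separating-margin split."""
    rng = random.Random(seed)
    c1, c2 = rng.sample(points, 2)
    for _ in range(iters):
        g1 = [p for p in points if dist(p, c1) <= dist(p, c2)]
        g2 = [p for p in points if dist(p, c1) > dist(p, c2)]
        if not g1 or not g2:
            break
        c1, c2 = mean(g1), mean(g2)
    return g1, g2

# Four class centroids forming two well-separated groups
centroids = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
g1, g2 = two_means(centroids)
```

Applying the split recursively to each resulting group yields the binary hierarchical tree; a margin-based split would train a classifier (e.g. an SVM) at each node instead of using centroid distances.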
Trautmann, N
1976-01-01
A survey is given on the progress of fast chemical separation procedures during the last few years. Fast, discontinuous separation techniques are illustrated by a procedure for niobium. The use of such techniques for the chemical characterization of the heaviest known elements is described. Other rapid separation methods from aqueous solutions are summarized. The application of the high speed liquid chromatography to the separation of chemically similar elements is outlined. The use of the gas jet recoil transport method for nuclear reaction products and its combination with a continuous solvent extraction technique and with a thermochromatographic separation is presented. Different separation methods in the gas phase are briefly discussed and the attachment of a thermochromatographic technique to an on-line mass separator is shown. (45 refs).
Acoustofluidic bacteria separation
Li, Sixing; Ma, Fen; Bachman, Hunter; Cameron, Craig E.; Zeng, Xiangqun; Huang, Tony Jun
2017-01-01
Bacterial separation from human blood samples can help with the identification of pathogenic bacteria for sepsis diagnosis. In this work, we report an acoustofluidic device for label-free bacterial separation from human blood samples. In particular, we exploit the acoustic radiation force generated from a tilted-angle standing surface acoustic wave (taSSAW) field to separate Escherichia coli from human blood cells based on their size difference. Flow cytometry analysis of the E. coli separated from red blood cells shows a purity of more than 96%. Moreover, the label-free electrochemical detection of the separated E. coli displays reduced non-specific signals due to the removal of blood cells. Our acoustofluidic bacterial separation platform has advantages such as label-free separation, high biocompatibility, flexibility, low cost, miniaturization, automation, and ease of in-line integration. The platform can be incorporated with an on-chip sensor to realize a point-of-care sepsis diagnostic device.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
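Working in the dual means iterating on the Lagrange multipliers of the constraints rather than on the image pixels themselves, which is why the dual algorithm involves fewer parameters. A one-constraint toy (maximum-entropy distribution with a fixed mean, solved by gradient ascent on the single dual variable) that illustrates the idea, not the paper's Fourier-synthesis restoration:

```python
import math

def maxent_mean(values, target_mean, lr=0.5, iters=2000):
    """Maximum-entropy distribution on `values` with a fixed mean, found by
    gradient ascent on the single dual variable (Lagrange multiplier)
    instead of optimizing the full probability vector."""
    lam = 0.0
    for _ in range(iters):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        p = [wi / z for wi in w]
        mean = sum(pi * v for pi, v in zip(p, values))
        lam += lr * (target_mean - mean)  # dual gradient step
    return p

# Support {0, 1} with mean 0.3: the maxent answer is Bernoulli(0.3)
p = maxent_mean([0.0, 1.0], 0.3)
```

The primal solution is always of exponential form p_i proportional to exp(lam * x_i), so the dual iteration only has to tune one number per constraint; in image restoration the same structure holds with one multiplier per measured datum.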
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
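The MISO result above says that with no transmitter CSI, splitting power equally across the M antennas with uncorrelated signals maximizes throughput. A Monte Carlo sketch of the resulting ergodic rate E[log2(1 + (SNR/M) * sum|h_i|^2)] under Rayleigh fading (the SNR values used are arbitrary illustrations):

```python
import math
import random

def miso_ergodic_rate(num_antennas, snr, trials=20000, seed=1):
    """Monte Carlo estimate of E[log2(1 + (snr/M) * sum|h_i|^2)] for a
    Rayleigh-fading MISO link with equal power split across M antennas."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # |h_i|^2 for a unit-variance complex Gaussian tap is Exp(1)
        gain = sum(rng.expovariate(1.0) for _ in range(num_antennas))
        total += math.log2(1.0 + (snr / num_antennas) * gain)
    return total / trials

rate = miso_ergodic_rate(2, 10.0)
```

Using all antennas hardens the effective channel gain (a sum of M exponentials has relative variance 1/M), which is the intuition behind the equal-power optimality when the transmitter is oblivious to the channel.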
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical values. MENT naturally takes into consideration the demand of maximum entropy, the characteristics of the system, and the connection conditions. This allows MENT to be applied to the statistical description of both closed and open systems. Examples in which MENT has been used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium are considered.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
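Demarking-point analysis of this kind conventionally classifies a bone as definitely one sex when its measurement lies beyond the mean plus or minus 3 standard deviations of the opposite sex. A sketch with made-up femoral lengths (the samples below are illustrative, not the study's data):

```python
import statistics

def demarking_points(male_lengths, female_lengths, k=3.0):
    """Demarking-point analysis: values beyond mean +/- k*SD of the
    opposite sex are classified with certainty (k=3 is conventional)."""
    m_mean = statistics.mean(male_lengths)
    m_sd = statistics.stdev(male_lengths)
    f_mean = statistics.mean(female_lengths)
    f_sd = statistics.stdev(female_lengths)
    definitely_male_above = f_mean + k * f_sd    # beyond the female range
    definitely_female_below = m_mean - k * m_sd  # below the male range
    return definitely_male_above, definitely_female_below

# Illustrative (made-up) maximum femoral lengths in mm
males = [450.0, 455.0, 448.0, 460.0, 452.0]
females = [418.0, 420.0, 415.0, 422.0, 419.0]
dp_male, dp_female = demarking_points(males, females)
```

Only the measurements falling outside the opposite sex's 3-SD envelope are classified, which is why the study's demarking points identify only a small percentage of the femora.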
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
[Separation anxiety in children].
Purper-Ouakil, Diane; Franc, Nathalie
2010-06-20
Separation anxiety disorder can be differentiated from developmental anxiety because of its intensity, persistence and negative impact on adaptive functioning. This disorder is closely linked to other anxiety and mood disorders and can also be associated with externalizing psychopathology in children and adolescents. Severe separation anxiety can result in school refusal and intra-familial violence. Cognitive behavioral therapies have the best evidence-based support for the treatment of separation anxiety disorder in children and adolescents. In addition, it is important to detect factors associated with persistence of anxiety such as systematic avoidance of separation and parental overprotection. The role of pediatricians and general practitioners in recognizing clinical separation anxiety and encouraging appropriate care and positive parental attitudes is essential, as separation anxiety is often associated with a variety of somatic symptoms.
[Separation anxiety. Theoretical considerations].
Blandin, N; Parquet, P J; Bailly, D
1994-01-01
The interest in separation anxiety is nowadays increasing: this disorder appearing during childhood may predispose to the occurrence of anxiety disorders (such as panic disorder and agoraphobia) and major depression in adulthood. Psychoanalytic theories differ on the nature of separation anxiety and its place in child development. For some authors, separation anxiety must be understood as resulting from the unconscious internal conflicts inherent in the individuation process and the gradual attainment of autonomy. From this point of view, the fear of loss of the mother by separation is not regarded as resulting from a real danger. However, Freud considers the primary experience of separation from the protecting mother as the prototype situation of anxiety and compares the situations generating fear to separation experiences. For him, anxiety originates from two factors: the physiological fact is initiated at the time of birth, but the primary traumatic situation is the separation from the mother. This point of view may be compared with behavioral theories. Behavioral theories suggest that separation anxiety may be conditioned or learned from innate fears. In Freud's theory, the primary situation of anxiety resulting from the separation from the mother plays a role comparable to innate fears. Grappling with the problem of separation anxiety, Bowlby then emphasizes the importance of the child's attachment to one person (mother or primary caregiver) and the fact that this attachment is instinctive. This point of view, based on the observation of infants, is akin to ethological theories on the behaviour of non-human primates. Bowlby notably shows that the reactions of an infant separated from the mother evolve in three stages: the phase of protest, which may constitute the prototype of adulthood anxiety; the phase of despair, which may be the prototype of depression; and the phase of detachment. He thus emphasizes the role of early separations in the development of vulnerability to depression
Roof separation characteristics of laminated weak roof strata of longwall roadway
LU Ting-kan; LIU Yu-zhou
2004-01-01
The roof separation was investigated in a coal mine as part of the site characterization of roof strata deterioration in a longwall roadway. The separation of laminated, weak roof strata was initially characterized in terms of the maximum separation, the effect of geological setting on separation, and the effect of mining activities (heading development, time-dependent behaviour and longwall extraction) on separation. The separation process was then studied, so as to answer the questions of when the separation occurs; where the separation is located and what geological setting it relates to; how large the separation is; and how the separation propagates.
Chang, Paul K
2014-01-01
Interdisciplinary and Advanced Topics in Science and Engineering, Volume 3: Separation of Flow presents the problem of the separation of fluid flow. This book provides information covering the fields of basic physical processes, analyses, and experiments concerning flow separation. Organized into 12 chapters, this volume begins with an overview of the flow separation on the body surface as discussed in various classical examples. This text then examines the analytical and experimental results of the laminar boundary layer of steady, two-dimensional flows in the subsonic speed range. Other chapt
The separation of adult separation anxiety disorder.
Baldwin, David S; Gordon, Robert; Abelli, Marianna; Pini, Stefano
2016-08-01
The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) categorization of mental disorders places "separation anxiety disorder" within the broad group of anxiety disorders, and its diagnosis no longer rests on establishing an onset during childhood or adolescence. In previous editions of DSM, it was included within the disorders usually first diagnosed in infancy, childhood, or adolescence, with the requirement for an onset of symptoms before the age of 18 years: symptomatic adults could only receive a retrospective diagnosis, based on establishing this early onset. The new position of separation anxiety disorder is based upon the findings of epidemiological studies that revealed the unexpectedly high prevalence of the condition in adults, often in individuals with an onset of symptoms after the teenage years; its prominent place within the DSM-5 group of anxiety disorders should encourage further research into its epidemiology, etiology, and treatment. This review examines the clinical features and boundaries of the condition, and offers guidance on how it can be distinguished from other anxiety disorders and other mental disorders in which "separation anxiety" may be apparent.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. For that reason a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functional form is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h…
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCLs) can reduce short-circuit currents in electrical power systems. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CCs) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
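The sizing step mentioned in the closing sentence can be illustrated with a toy calculation. The per-cm figures are those quoted above for a 100 ms quench; the 1000 V target voltage and the function name are hypothetical assumptions:

```python
# Hypothetical sizing sketch: given the maximum permissible voltage per unit
# length (V/cm) quoted above, estimate the conductor length needed to absorb
# a target limiting voltage. The 1000 V target is an illustrative assumption.

PERMISSIBLE_V_PER_CM = {
    "SJTU CC": 0.72,
    "12 mm AMSC CC": 0.52,
    "4 mm AMSC CC": 1.20,
}

def required_length_m(target_voltage_v, v_per_cm):
    """Minimum conductor length (m) keeping the per-cm voltage permissible."""
    return target_voltage_v / v_per_cm / 100.0

for name, v in PERMISSIBLE_V_PER_CM.items():
    print(f"{name}: {required_length_m(1000.0, v):.1f} m")
```

The conductor with the lowest permissible voltage (here the 12 mm AMSC CC) dominates the length requirement of the limiter.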
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input consists of triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
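For readers unfamiliar with triplet inputs, a minimal sketch of what it means for a rooted triplet ab|c to be consistent with a tree (this is not the paper's algorithm; the parent-map encoding and helper names are illustrative):

```python
# A rooted triplet ab|c is consistent with a tree iff a and b join strictly
# below the point where either joins c. Trees are encoded as child -> parent
# maps (an illustrative choice, not from the paper).

def ancestors(parent, v):
    """Path from v up to the root, v first."""
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def lca(parent, a, b):
    """Lowest common ancestor via ancestor paths."""
    anc_b = set(ancestors(parent, b))
    return next(x for x in ancestors(parent, a) if x in anc_b)

def triplet_consistent(parent, a, b, c):
    """True iff the triplet ab|c holds in the tree."""
    lab, lac = lca(parent, a, b), lca(parent, a, c)
    return lab != lac and lac == lca(parent, b, c)

# Tree ((a,b),c): internal node 'x' joins a and b; root 'r' joins x and c.
parent = {"a": "x", "b": "x", "x": "r", "c": "r"}
print(triplet_consistent(parent, "a", "b", "c"))  # ab|c holds -> True
print(triplet_consistent(parent, "a", "c", "b"))  # ac|b fails -> False
```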
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
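The stated bound, maximum seismic moment equal to the injected volume times the modulus of rigidity, can be checked numerically. The modulus and volume below are assumed illustrative values, and the moment-to-magnitude conversion is the standard relation for moment magnitude, not something from this abstract:

```python
import math

# Sketch of the bound stated above: M0_max = G * dV (modulus of rigidity
# times total injected fluid volume). Moment magnitude uses the standard
# relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N*m.

def max_moment(shear_modulus_pa, injected_volume_m3):
    """Upper bound on seismic moment per the volume-based argument."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_newton_m):
    """Standard moment-to-magnitude conversion."""
    return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)

G = 3e10   # Pa, a typical crustal modulus of rigidity (assumption)
dV = 1e6   # m^3 of injected fluid (assumption)
m0 = max_moment(G, dV)
print(f"M0_max = {m0:.1e} N*m, Mw_max = {moment_magnitude(m0):.1f}")
```

With these inputs the bound gives a maximum magnitude near 5, consistent with the largest injection-induced events the abstract mentions for wastewater disposal.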
Krugman, Dorothy C.
1971-01-01
Discusses the role of the caseworker in providing support to children experiencing separation from their families and emphasizes the need to recognize that there are differences between those separation experiences dictated by the needs of children and those dictated by arbitrary or noncasework factors. (AJ)
Nauta, M.H.; Emmelkamp, P.M.G.; Sturmey, P.; Hersen, M.
2012-01-01
Separation anxiety disorder (SAD) is the only anxiety disorder that is specific to childhood; however, SAD has hardly ever been addressed as a separate disorder in clinical trials investigating treatment outcome. So far, only parent training has been developed specifically for SAD. This particular t
Mineka, Susan; Suomi, Stephen J.
1978-01-01
Reviews phenomena associated with social separation from attachment objects in nonhuman primates. Evaluates four theoretical treatments of separation in light of existing data: Bowlby's attachment-object-loss theory, Kaufman's conservation-withdrawal theory, Seligman's learned helplessness theory, and Solomon and Corbit's opponent-process theory.…
Nonterminal Separating Macro Grammars
Hogendorp, Jan Anne; Asveld, P.R.J.; Nijholt, A.; Verbeek, Leo A.M.
1987-01-01
We extend the concept of nonterminal separating (or NTS) context-free grammar to nonterminal separating $m$-macro grammar, where the mode of derivation $m$ is equal to "unrestricted", "outside-in", or "inside-out". Then we show some (partial) characterization results for these NTS $m$-macro grammars.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that achievable in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
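As a crude illustration of why relaxing conflict relations raises the optimum of such a program (a toy model, not the paper's formulation; all names and capacities are hypothetical):

```python
# Toy sketch: two flows whose links conflict must share a joint capacity
# (f1 + f2 <= c_shared), modeling interference. Network coding relaxing the
# conflict is modeled here simply by dropping the joint constraint.

def max_throughput(c1, c2, c_shared=None):
    """Maximize f1 + f2 s.t. f1 <= c1, f2 <= c2 and, if given, f1 + f2 <= c_shared."""
    total = c1 + c2
    return total if c_shared is None else min(total, c_shared)

without_coding = max_throughput(3.0, 3.0, c_shared=4.0)  # interference binds
with_coding = max_throughput(3.0, 3.0)                   # conflict removed
print(without_coding, with_coding)  # 4.0 6.0
```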
Spiral microfluidic nanoparticle separators
Bhagat, Ali Asgar S.; Kuntaegowdanahalli, Sathyakumar S.; Dionysiou, Dionysios D.; Papautsky, Ian
2008-02-01
Nanoparticles have potential applications in many areas such as consumer products, health care, electronics, energy and other industries. As the use of nanoparticles in manufacturing increases, we anticipate a growing need to detect and measure particles of nanometer scale dimensions in fluids to control emissions of possible toxic nanoparticles. At present most particle separation techniques are based on membrane assisted filtering schemes. Unfortunately their efficiency is limited by the membrane pore size, making them inefficient for separating a wide range of sizes. In this paper, we propose a passive spiral microfluidic geometry for momentum-based particle separations. The proposed design is versatile and is capable of separating particulate mixtures over a wide dynamic range and we expect it will enable a variety of environmental, medical, or manufacturing applications that involve rapid separation of nanoparticles in real-world samples with a wide range of particle components.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of the set. An explicit transformation attaining the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain-boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced…
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used… algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
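The named algorithms build on the classical First-Fit rule applied to a sorted item list; here is a minimal sketch under assumed unit bin capacity (the maximum resource scoring itself follows the paper's definitions and is not reproduced here):

```python
# First-Fit over sorted items: each item goes into the first bin with room,
# opening a new bin if none fits. Sorting increasing (FFI) tends to open
# more bins than sorting decreasing (FFD), which matters when the objective
# is to maximize, rather than minimize, the bins used.

def first_fit(items, increasing=True):
    """Pack items of size <= 1.0 into unit-capacity bins with First-Fit."""
    bins = []
    for item in sorted(items, reverse=not increasing):
        for b in bins:
            if sum(b) + item <= 1.0:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

items = [0.6, 0.3, 0.5, 0.2, 0.4]
print(len(first_fit(items, increasing=True)))   # FFI opens 3 bins here
print(len(first_fit(items, increasing=False)))  # FFD packs into 2 bins
```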
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method in a search for point sources in excess of a model for the background radiation. This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions.
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics, with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
Supercritical fluid reverse micelle separation
Fulton, John L.; Smith, Richard D.
1993-01-01
A method of separating solute material from a polar fluid in a first polar fluid phase is provided. The method comprises combining a polar fluid, a second fluid that is a gas at standard temperature and pressure and has a critical density, and a surfactant. The solute material is dissolved in the polar fluid to define the first polar fluid phase. The combined polar and second fluids, surfactant, and solute material dissolved in the polar fluid is maintained under near critical or supercritical temperature and pressure conditions such that the density of the second fluid exceeds the critical density thereof. In this way, a reverse micelle system defining a reverse micelle solvent is formed which comprises a continuous phase in the second fluid and a plurality of reverse micelles dispersed in the continuous phase. The solute material is dissolved in the polar fluid and is in chemical equilibrium with the reverse micelles. The first polar fluid phase and the continuous phase are immiscible. The reverse micelles each comprise a dynamic aggregate of surfactant molecules surrounding a core of the polar fluid. The reverse micelle solvent has a polar fluid-to-surfactant molar ratio W, which can vary over a range having a maximum ratio W.sub.o that determines the maximum size of the reverse micelles. The maximum ratio W.sub.o of the reverse micelle solvent is then varied, and the solute material from the first polar fluid phase is transported into the reverse micelles in the continuous phase at an extraction efficiency determined by the critical or supercritical conditions.
Supercritical fluid reverse micelle separation
Fulton, J.L.; Smith, R.D.
1993-11-30
A method of separating solute material from a polar fluid in a first polar fluid phase is provided. The method comprises combining a polar fluid, a second fluid that is a gas at standard temperature and pressure and has a critical density, and a surfactant. The solute material is dissolved in the polar fluid to define the first polar fluid phase. The combined polar and second fluids, surfactant, and solute material dissolved in the polar fluid is maintained under near critical or supercritical temperature and pressure conditions such that the density of the second fluid exceeds the critical density thereof. In this way, a reverse micelle system defining a reverse micelle solvent is formed which comprises a continuous phase in the second fluid and a plurality of reverse micelles dispersed in the continuous phase. The solute material is dissolved in the polar fluid and is in chemical equilibrium with the reverse micelles. The first polar fluid phase and the continuous phase are immiscible. The reverse micelles each comprise a dynamic aggregate of surfactant molecules surrounding a core of the polar fluid. The reverse micelle solvent has a polar fluid-to-surfactant molar ratio W, which can vary over a range having a maximum ratio W[sub o] that determines the maximum size of the reverse micelles. The maximum ratio W[sub o] of the reverse micelle solvent is then varied, and the solute material from the first polar fluid phase is transported into the reverse micelles in the continuous phase at an extraction efficiency determined by the critical or supercritical conditions. 27 figures.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic, and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost, and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
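The MPP tracking that this converter analysis supports is commonly realized by a perturb-and-observe loop; a minimal sketch with a toy PV curve follows (the PV model, step size, and names are illustrative assumptions, not from the paper, whose focus is the converter topology and load range):

```python
# Minimal perturb-and-observe MPPT sketch against a toy power-voltage curve.

def pv_power(v):
    """Toy PV array power curve peaking at v = 10 V (not a real i-v model)."""
    return max(0.0, v * (10.0 - 0.5 * v))

def perturb_and_observe(v0=2.0, step=0.1, iters=500):
    """Step the operating voltage; reverse direction whenever power falls."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:              # power fell: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))  # settles into a small oscillation near the 10 V peak
```

The steady-state oscillation around the MPP is the classic drawback of perturb-and-observe, and it is the converter duty-ratio analysis above that determines whether the tracked operating point is actually reachable for a given load.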
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by \begin{equation} v_{h}\sim\frac{T_{BBN}^{2}}{M_{pl}y_{e}^{5}}, \end{equation} where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\text{ GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl}\,y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
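The closing estimate can be checked at order-of-magnitude level. The numerical inputs below are conventional physics values assumed here (not taken from the abstract): T_BBN ~ 1 MeV, M_pl = 1.22e19 GeV, and y_e computed from the electron mass and v_h = 246 GeV:

```python
import math

# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
T_BBN = 1e-3                                # GeV, rough BBN temperature
M_pl = 1.22e19                              # GeV, Planck mass
y_e = math.sqrt(2) * 0.511e-3 / 246.0       # electron Yukawa coupling

v_h_estimate = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h_estimate:.0f} GeV")      # within a factor of a few of 300 GeV
```

The result lands within a factor of a few of the quoted O(300 GeV), as expected for a rough scaling relation; a different choice of T_BBN shifts it by the square of that choice.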
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of the subjects' five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The intraclass correlation coefficient (ICC) averaged over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
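The reported reliabilities scale with the number of trials and days roughly as the Spearman-Brown prophecy formula predicts; here is a quick consistency sketch (the paper's own estimates may come from a variance-components model, so agreement is approximate rather than exact):

```python
# Spearman-Brown prophecy: reliability of the mean of k parallel measurements.
def spearman_brown(r_single, k):
    return k * r_single / (1 + (k - 1) * r_single)

# Reported: one trial per day r = 0.939 -> five trials 0.987
print(round(spearman_brown(0.939, 5), 3))   # 0.987
# Reported: one day r = 0.836 -> two days 0.911, three days 0.935
print(round(spearman_brown(0.836, 2), 3))   # 0.911
print(round(spearman_brown(0.836, 3), 3))   # 0.939 (paper reports 0.935)
```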
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment…
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
… EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works…
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Radiochemical separation of Cobalt
Erkelens, P.C. van
1961-01-01
A method is described for the radiochemical separation of cobalt based on the extraordinary stability of cobalt diethyldithiocarbamate. Interferences are few; only very small amounts of zinc and iron accompany cobalt, which is important in neutron-activation analysis.
Dai, Liang; Schmidt, Fabian
2015-01-01
The separate universe conjecture states that in General Relativity a density perturbation behaves locally (i.e. on scales much smaller than the wavelength of the mode) as a separate universe with different background density and curvature. We prove this conjecture for a spherical compensated tophat density perturbation of arbitrary amplitude and radius in $\Lambda$CDM. We then use Conformal Fermi Coordinates to generalize this result to scalar perturbations of arbitrary configuration and scale in a general cosmology with a mixture of fluids, but to linear order in perturbations. In this case, the separate universe conjecture holds for the isotropic part of the perturbations. The anisotropic part on the other hand is exactly captured by a tidal field in the Newtonian form. We show that the separate universe picture is restricted to scales larger than the sound horizons of all fluid components. We then derive an expression for the locally measured matter bispectrum induced by a long-wavelength mode of arbitrary…
Electroextraction separation of dyestuffs
Luo, G.S.; Yu, M.J.; Jiang, W.B.; Zhu, S.L.; Dai, Y.Y. [Tsinghua Univ., Beijing (China). Dept. of Chemical Engineering
1999-03-01
Electroseparation technologies have prospects for significant growth well into the next century. Electroextraction, a separation technique coupling solvent extraction with electrophoresis, was used to remove dyestuffs from their aqueous stream. A study of the characteristics of the separation technique was carried out with n-butanol/acid-chrom blue K/water and n-butanol/methyl blue/water as working systems. A continuous separation apparatus was designed and used in this work. The influences of two-phase flow, field strength, and feed concentration on the recovery of solute were studied. The results showed that a much higher recovery of solute with less solvent consumption can be achieved by using this technique to remove dyes from aqueous streams, especially for the separation of dilute solutions. When the field strength is increased, the recovery and mass flux increase. When the feed flow rate and the initial solute concentration are increased, the recovery decreases and the mass flux increases.
Shoulder separation - aftercare
... and top of your shoulder blade. A severe shoulder separation: you may need surgery right away if you have numbness in your fingers, cold fingers, muscle weakness in your arm, or severe deformity of the joint.
Radiochemical separation of Cobalt
Erkelens, P.C. van
1961-01-01
A method is described for the radiochemical separation of cobalt based on the extraordinary stability of cobalt diethyldithiocarbamate. Interferences are few; only very small amounts of zinc and iron accompany cobalt, which is important in neutron-activation analysis.
Mundschau, Michael [Longmont, CO; Xie, Xiaobing [Foster City, CA; Evenson, IV, Carl; Grimmer, Paul [Longmont, CO; Wright, Harold [Longmont, CO
2011-05-24
A method for separating a hydrogen-rich product stream from a feed stream comprising hydrogen and at least one carbon-containing gas, comprising feeding the feed stream, at an inlet pressure greater than atmospheric pressure and a temperature greater than 200.degree. C., to a hydrogen separation membrane system comprising a membrane that is selectively permeable to hydrogen, and producing a hydrogen-rich permeate product stream on the permeate side of the membrane and a carbon dioxide-rich product raffinate stream on the raffinate side of the membrane. A method for separating a hydrogen-rich product stream from a feed stream comprising hydrogen and at least one carbon-containing gas, comprising feeding the feed stream, at an inlet pressure greater than atmospheric pressure and a temperature greater than 200.degree. C., to an integrated water gas shift/hydrogen separation membrane system wherein the hydrogen separation membrane system comprises a membrane that is selectively permeable to hydrogen, and producing a hydrogen-rich permeate product stream on the permeate side of the membrane and a carbon dioxide-rich product raffinate stream on the raffinate side of the membrane. A method for pretreating a membrane, comprising: heating the membrane to a desired operating temperature and desired feed pressure in a flow of inert gas for a sufficient time to cause the membrane to mechanically deform; decreasing the feed pressure to approximately ambient pressure; and optionally, flowing an oxidizing agent across the membrane before, during, or after deformation of the membrane. A method of supporting a hydrogen separation membrane system comprising selecting a hydrogen separation membrane system comprising one or more catalyst outer layers deposited on a hydrogen transport membrane layer and sealing the hydrogen separation membrane system to a porous support.
Separation techniques: Chromatography
Coskun, Ozlem
2016-01-01
Chromatography is an important biophysical technique that enables the separation, identification, and purification of the components of a mixture for qualitative and quantitative analysis. Proteins can be purified based on characteristics such as size and shape, total charge, hydrophobic groups present on the surface, and binding capacity with the stationary phase. Four separation techniques based on molecular characteristics and interaction type use mechanisms of ion exchange, surface adsorp...
Distal humeral epiphyseal separation.
Moucha, Calin S; Mason, Dan E
2003-10-01
Distal humeral epiphyseal separation is an uncommon injury that is often misdiagnosed upon initial presentation. To make a timely, correct diagnosis, the treating physician must have a thorough understanding of basic anatomical relationships and an awareness of the existence of this injury. This is a case of a child who sustained a separation of the distal humeral epiphysis, as well as multiple other bony injuries, secondary to child abuse.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
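The l1-penalized maximum likelihood problem described in this abstract is the graphical lasso. The paper's own block-coordinate and Nesterov-based solvers are not in standard libraries, but as a minimal sketch, scikit-learn's `GraphicalLasso` solves the same penalized problem (the dimensions, penalty, and data below are illustrative choices, not from the paper):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground truth: a 5-dimensional Gaussian with a sparse (tridiagonal) precision matrix
prec = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=500)

# l1-penalized maximum likelihood estimate of the precision matrix;
# the penalty shrinks off-band entries toward zero, recovering sparsity
model = GraphicalLasso(alpha=0.1).fit(X)
print(np.round(model.precision_, 2))
```

The l1 penalty trades a small bias on the true nonzero entries for exact zeros elsewhere, which is what makes the estimated graphical model sparse.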
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
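Several of the processes in this class, such as Ornstein-Uhlenbeck motion, are easy to simulate. A minimal sketch using an Euler-Maruyama discretization (the parameter values and function name are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

def simulate_ou(x0=0.0, theta=1.0, sigma=0.5, dt=0.01, n=1000, seed=0):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
    dx = -theta * x dt + sigma dW (stationary variance sigma^2 / (2*theta))."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_ou()
```

The drift term pulls the process back toward zero, which is what distinguishes OU motion from plain Brownian motion in movement models.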
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We find the approximation ratios of two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
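The First-Fit-Decreasing heuristic the paper analyzes can be sketched in a few lines. This is the generic classical-bin-packing heuristic, not the paper's maximum-resource analysis; the function name and tolerance are our own:

```python
def first_fit_decreasing(items, capacity=1.0):
    """First-Fit-Decreasing: sort items largest first, then place each item
    into the first open bin with enough room, opening a new bin if none fits."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:  # small tolerance for float sums
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

# Example: items [0.6, 0.5, 0.5, 0.4] pack into 2 unit bins: [0.6, 0.4] and [0.5, 0.5]
packing = first_fit_decreasing([0.6, 0.5, 0.5, 0.4])
print(len(packing))  # 2
```

In the maximum-resource setting the objective flips (more bins is "better" for the adversary), which is why natural heuristics like this one need fresh analysis there.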
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
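The constrained maximization described here can be written out explicitly. A sketch of the standard Lagrange-multiplier argument, consistent with (but not copied from) the paper:

```latex
% Maximize the Shannon entropy subject to normalization and a fixed mean log:
\mathcal{L} = -\sum_x p(x)\ln p(x)
  - \mu\Big(\sum_x p(x) - 1\Big)
  - \alpha\Big(\sum_x p(x)\ln x - \chi\Big)

% Stationarity in p(x):
\frac{\partial\mathcal{L}}{\partial p(x)} = -\ln p(x) - 1 - \mu - \alpha\ln x = 0
\quad\Longrightarrow\quad
p(x) = e^{-1-\mu}\, x^{-\alpha} \;\propto\; x^{-\alpha}
```

The single constraint $\langle \ln x \rangle = \chi$ thus already forces a pure power law, with the exponent $\alpha$ fixed by the value of $\chi$.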
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean that would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
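The Estrada index as defined above is straightforward to evaluate numerically from the adjacency spectrum. A minimal sketch with NumPy (the function name is ours):

```python
import numpy as np

def estrada_index(A):
    """Estrada index EE(G) = sum_i exp(lambda_i), where lambda_i are the
    eigenvalues of the (symmetric) adjacency matrix A."""
    eig = np.linalg.eigvalsh(np.asarray(A, dtype=float))
    return float(np.exp(eig).sum())

# Single edge (path P2): eigenvalues are +1 and -1, so EE = e + 1/e = 2*cosh(1)
ee = estrada_index([[0, 1], [1, 0]])
print(ee)  # ≈ 3.0862
```

Using `eigvalsh` (for symmetric matrices) rather than `eig` keeps the eigenvalues exactly real, which the definition assumes.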
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with the limited knowledge we have about the processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to about half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar--Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
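The fitted prediction equation can be applied directly. A small sketch with made-up illustrative numbers (the values of X0, Y_X/P, and C below are placeholders, not data from the paper):

```python
def predict_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """Predicted maximum biomass from the paper's fitted relation
    Xmax - X0 = k * Y_{X/P} * C, with k = 0.59 ± 0.02."""
    return x0 + k * y_xp * mic_lactate

# Hypothetical inputs: X0 = 0.1 g/L inoculum, Y_X/P = 0.2 g biomass per g lactate,
# MIC of lactate C = 50 g/L
xmax = predict_max_biomass(0.1, 0.2, 50.0)
print(xmax)  # ≈ 6.0 g/L
```

The equation encodes the paper's finding that growth stops once accumulated lactate reaches the MIC, so the attainable biomass is set by the yield per unit lactate times that ceiling.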
Organic Separation Test Results
Russell, Renee L.; Rinehart, Donald E.; Peterson, Reid A.
2014-09-22
Separable organics have been defined as “those organic compounds of very limited solubility in the bulk waste and that can form a separate liquid phase or layer” (Smalley and Nguyen 2013), and result from three main solvent extraction processes: U Plant Uranium Recovery Process, B Plant Waste Fractionation Process, and Plutonium Uranium Extraction (PUREX) Process. The primary organic solvents associated with tank solids are TBP, D2EHPA, and NPH. There is concern that, while this organic material is bound to the sludge particles as it is stored in the tanks, waste feed delivery activities, specifically transfer pump and mixer pump operations, could cause the organics to form a separated layer in the tank farms feed tank. Therefore, Washington River Protection Solutions (WRPS) is experimentally evaluating the potential of organic solvents separating from the tank solids (sludge) during waste feed delivery activities, specifically the waste mixing and transfer processes. Given the Hanford Tank Waste Treatment and Immobilization Plant (WTP) waste acceptance criteria per the Waste Feed Acceptance Criteria document (24590-WTP-RPT-MGT-11-014) that there is to be “no visible layer” of separable organics in the waste feed, this would result in the batch being unacceptable to transfer to WTP. This study is of particular importance to WRPS because of these WTP requirements.
Gulf stream separation dynamics
Schoonover, Joseph
Climate models currently struggle with the more traditional, coarse ( O(100 km) ) representation of the ocean. In these coarse ocean simulations, western boundary currents are notoriously difficult to model accurately. The modeled Gulf Stream is typically seen exhibiting a mean pathway that is north of observations, and is linked to a warm sea-surface temperature bias in the Mid-Atlantic Bight. Although increased resolution ( O(10 km) ) improves the modeled Gulf Stream position, there is no clean recipe for obtaining the proper pathway. The 70 year history of literature on the Gulf Stream separation suggests that we have not reached a resolution on the dynamics that control the current's pathway just south of the Mid-Atlantic Bight. Without a concrete knowledge on the separation dynamics, we cannot provide a clean recipe for accurately modeling the Gulf Stream at increased resolutions. Further, any reliable parameterization that yields a realistic Gulf Stream path must express the proper physics of separation. The goal of this dissertation is to determine what controls the Gulf Stream separation. To do so, we examine the results of a model intercomparison study and a set of numerical regional terraforming experiments. It is argued that the separation is governed by local dynamics that are most sensitive to the steepening of the continental shelf, consistent with the topographic wave arrest hypothesis of Stern (1998). A linear extension of Stern's theory is provided, which illustrates that wave arrest is possible for a continuously stratified fluid.
Separably injective Banach spaces
Avilés, Antonio; Castillo, Jesús M F; González, Manuel; Moreno, Yolanda
2016-01-01
This monograph contains a detailed exposition of the up-to-date theory of separably injective spaces: new and old results are put into perspective with concrete examples (such as l∞/c0 and C(K) spaces, where K is a finite height compact space or an F-space, ultrapowers of L∞ spaces and spaces of universal disposition). It is no exaggeration to say that the theory of separably injective Banach spaces is strikingly different from that of injective spaces. For instance, separably injective Banach spaces are not necessarily isometric to, or complemented subspaces of, spaces of continuous functions on a compact space. Moreover, in contrast to the scarcity of examples and general results concerning injective spaces, we know of many different types of separably injective spaces and there is a rich theory around them. The monograph is completed with a preparatory chapter on injective spaces, a chapter on higher cardinal versions of separable injectivity and a lively discussion of open problems and further lines o...
Maximum Bipartite Matching Size And Application to Cuckoo Hashing
Kanizo, Yossi; Keslassy, Isaac
2010-01-01
Cuckoo hashing with a stash is a robust high-performance hashing scheme that can be used in many real-life applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stash and the tradeoff between its size and the memory over-provisioning of the hash table are still unknown. We settle this question by investigating the equivalent maximum matching size of a random bipartite graph, with a constant left-side vertex degree $d=2$. Specifically, we provide an exact expression for the expected maximum matching size and show that its actual size is close to its mean, with high probability. This result relies on decomposing the bipartite graph into connected components, and then separately evaluating the distribution of the matching size in each of these components. In particular, we provide an exact expression for any finite bipartite graph size and also deduce asymptotic results as the nu...
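The reduction behind this analysis can be made concrete: each element is a left-side vertex with d=2 candidate buckets, and the number of elements placed without the stash equals the maximum matching size. A minimal augmenting-path sketch (Kuhn's algorithm; the example graph is illustrative, not from the paper):

```python
def max_bipartite_matching(adj, n_right):
    """Maximum bipartite matching size via augmenting paths (Kuhn's algorithm).

    adj[u] lists the right-side vertices (buckets) available to left
    vertex u (element); returns the number of matched left vertices.
    """
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to bucket v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # Bucket free, or its occupant can be rehoused elsewhere.
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Four elements, each with two candidate buckets (d = 2), four buckets.
adj = [[0, 1], [0, 1], [1, 2], [3, 0]]
print(max_bipartite_matching(adj, 4))  # 4: all elements fit

# Three elements all hashing to buckets {0, 1}: only 2 fit, 1 overflows to the stash.
print(max_bipartite_matching([[0, 1], [0, 1], [0, 1]], 4))  # 2
```

In the paper's setting, the expected stash occupancy is the number of elements minus the expected maximum matching size of the random graph.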
Mass Separation by Metamaterials.
Restrepo-Flórez, Juan Manuel; Maldovan, Martin
2016-02-25
Being able to manipulate mass flow is critically important in a variety of physical processes in chemical and biomolecular science. For example, separation and catalytic systems, which require precise control of mass diffusion, are crucial in the manufacturing of chemicals, crystal growth of semiconductors, waste recovery of biological solutes or chemicals, and production of artificial kidneys. Coordinate transformations and metamaterials are powerful methods to achieve precise manipulation of molecular diffusion. Here, we introduce a novel approach to obtain mass separation based on metamaterials that can sort chemical and biomolecular species by cloaking one compound while concentrating the other. A design strategy to realize such a metamaterial using homogeneous isotropic materials is proposed. We present a practical case where a mixture of oxygen and nitrogen is manipulated using a metamaterial that cloaks nitrogen and concentrates oxygen. This work lays the foundation for molecular mass separation in biophysical and chemical systems through metamaterial devices.
Arshinoff, S A
1999-04-01
Phaco slice and separate retains the advantages of the chopping techniques of Nagahara, Koch, and Fukasaku but replaces chopping or snapping with slicing across the center of the phaco-tip-stabilized nucleus using a Nagahara chopper and then repositioning the chopper to optimally separate the divided lens halves. As the lens is rotated in the capsular bag, small pieces of the nuclear pie are sliced off, separated, emulsified, and aspirated. Emulsification and aspiration can alternatively be left until most or all the slices have been made. This technique works with a broader range of lens densities than other chopping techniques and uses no sculpting and very little phaco time. The phaco time required for this technique is relatively independent of nuclear density compared with a sculpting technique.
Membrane separation of hydrocarbons
Chang, Y. Alice; Kulkarni, Sudhir S.; Funk, Edward W.
1986-01-01
Mixtures of heavy oils and light hydrocarbons may be separated by passing the mixture through a polymeric membrane. The membrane which is utilized to effect the separation comprises a polymer which is capable of maintaining its integrity in the presence of hydrocarbon compounds and which has been modified by being subjected to the action of a sulfonating agent. Sulfonating agents which may be employed include fuming sulfuric acid, chlorosulfonic acid, sulfur trioxide, etc.; the surface- or bulk-modified polymer will contain a degree of sulfonation ranging from about 15% to about 50%. The separation process is effected at temperatures ranging from about ambient to about 100 degrees C and pressures ranging from about 50 to about 1000 psig.
Schell, William J.
1979-01-01
A dry, fabric-supported, polymeric gas separation membrane, such as cellulose acetate, is prepared by casting a solution of the polymer onto a shrinkable fabric preferably formed of synthetic polymers such as polyester or polyamide filaments before washing, stretching or calendering (so-called greige goods). The supported membrane is then subjected to gelling, annealing, and drying by solvent exchange. During the processing steps, both the fabric support and the membrane shrink a preselected, controlled amount which prevents curling, wrinkling or cracking of the membrane in flat form or when spirally wound into a gas separation element.
Separation membrane development
Lee, M.W. [Savannah River Technology Center, Aiken, SC (United States)]
1998-08-01
A ceramic membrane has been developed to separate hydrogen from other gases. The method used is a sol-gel process. A thin layer of dense ceramic material is coated on a coarse ceramic filter substrate. The pore size distribution in the thin layer is controlled by a densification of the coating materials by heat treatment. The membrane has been tested by permeation measurements of hydrogen and other gases. Selectivity of the membrane has been achieved to separate hydrogen from carbon monoxide. The permeation rate of hydrogen through the ceramic membrane was about 20 times larger than that through a Pd-Ag membrane.
Separation techniques: Chromatography
Coskun, Ozlem
2016-01-01
Chromatography is an important biophysical technique that enables the separation, identification, and purification of the components of a mixture for qualitative and quantitative analysis. Proteins can be purified based on characteristics such as size and shape, total charge, hydrophobic groups present on the surface, and binding capacity with the stationary phase. Four separation techniques based on molecular characteristics and interaction type use mechanisms of ion exchange, surface adsorption, partition, and size exclusion. Other chromatography techniques are based on the stationary bed, including column, thin layer, and paper chromatography. Column chromatography is one of the most common methods of protein purification. PMID:28058406
Separators for electrochemical cells
Carlson, Steven Allen; Anakor, Ifenna Kingsley
2014-11-11
Provided are separators for use in an electrochemical cell comprising (a) an inorganic oxide and (b) an organic polymer, wherein the inorganic oxide comprises organic substituents. Preferably, the inorganic oxide comprises a hydrated aluminum oxide of the formula Al2O3.xH2O, wherein x is less than 1.0, and wherein the hydrated aluminum oxide comprises organic substituents, preferably comprising a reaction product of a multifunctional monomer and/or organic carbonate with an aluminum oxide, such as pseudo-boehmite and an aluminum oxide. Also provided are electrochemical cells comprising such separators.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
Comparison between formulas of maximum ship squat
Petru Sergiu Serban
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature. Among those most commonly used are those of Barrass, Millward, Eryuzlu, and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
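A rough single-pitch illustration of the idea (an approximate NLS grid search, not the paper's exact estimator; all signal parameters below are invented): for each candidate fundamental shared across channels, sum the energy of each channel's projection onto the candidate's harmonics, and take the maximizer.

```python
import cmath
import math

def nls_pitch(channels, f0_grid, n_harm=2):
    """Approximate multi-channel ML (NLS) pitch estimate.

    Channels share a fundamental f0 (cycles/sample) but may differ in
    amplitudes, phases, and noise; under white noise, ML amounts to
    maximizing the summed harmonic projection energy over channels.
    """
    def score(f0):
        total = 0.0
        for x in channels:
            for l in range(1, n_harm + 1):
                # Projection of the channel onto harmonic l of f0.
                c = sum(x[n] * cmath.exp(-2j * math.pi * f0 * l * n)
                        for n in range(len(x)))
                total += abs(c) ** 2 / len(x)
        return total

    return max(f0_grid, key=score)

# Two channels, shared f0 = 0.1 cycles/sample, different amplitudes/phases.
N = 200
ch1 = [math.cos(2 * math.pi * 0.1 * n)
       + 0.5 * math.cos(2 * math.pi * 0.2 * n + 1.0) for n in range(N)]
ch2 = [0.7 * math.cos(2 * math.pi * 0.1 * n + 0.3) for n in range(N)]
grid = [0.05 + 0.001 * k for k in range(101)]
est = nls_pitch([ch1, ch2], grid)  # close to 0.1
```

Because the second channel has a different amplitude and phase, a single-channel estimator applied to either channel alone would use only part of the available information; summing the projection energies is what makes the estimator multi-channel.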
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
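The MaxEnt selection step itself is easy to demonstrate in a toy discrete setting (a generic maximum-entropy sketch under a single mean constraint, not the PDF-projection construction): the entropy-maximizing pmf is an exponential family whose multiplier is fixed by the constraint.

```python
import math

def maxent_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy pmf on `values` subject to a fixed mean.

    The MaxEnt solution has the form p(v) proportional to exp(lam * v);
    lam is found by bisection, since the mean is monotone in lam.
    """
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Symmetric constraint: mean 2.5 on {0..5} gives lam = 0, i.e. uniform.
p = maxent_mean([0, 1, 2, 3, 4, 5], 2.5)
```

With the symmetric target the uniform distribution is recovered; pulling the target mean down (e.g. to 1.0) tilts the pmf exponentially toward small values, which is the same mechanism that shapes the MaxEnt p(x) consistent with a feature distribution p(z).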
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
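The flavor of the approach (maximize a Poisson likelihood rather than least squares) can be sketched for the simplest case of one free line amplitude over a known background. This is an illustration only, not CORA's actual implementation; the profile and background values are invented.

```python
def poisson_ml_amplitude(counts, profile, background, a_max=1e6):
    """ML amplitude a for counts n_i ~ Poisson(a * s_i + b_i), b_i > 0.

    d(log L)/da = sum(n_i * s_i / (a*s_i + b_i)) - sum(s_i) is monotone
    decreasing in a, so the root is bracketed and found by bisection.
    """
    def dlogl(a):
        return sum(n * s / (a * s + b)
                   for n, s, b in zip(counts, profile, background)) - sum(profile)

    if dlogl(0.0) <= 0:          # likelihood peaks at a = 0: no line
        return 0.0
    lo, hi = 0.0, a_max
    for _ in range(200):         # bisection to high precision
        mid = 0.5 * (lo + hi)
        if dlogl(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

profile = [1, 4, 9, 4, 1]        # line profile s_i across 5 spectral bins
background = [2, 2, 2, 2, 2]     # known background b_i
counts = [5.0 * s + b for s, b in zip(profile, background)]  # noiseless check
a_hat = poisson_ml_amplitude(counts, profile, background)    # close to 5.0
```

With real low-count spectra the counts are integers and the estimate scatters around the true flux, but the same monotone root-finding (a fixed-point equation in CORA's formulation) applies.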
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Nobuyuki Kenmochi
1996-01-01
w is constrained to have double obstacles σ_* ≤ w ≤ σ^* (i.e., σ_* and σ^* are the threshold values of w). The objective of this paper is to discuss the semigroup {S(t)} associated with the phase separation model, and construct its global attractor.
Phase separation micro molding
Vogelaar, Laura
2005-01-01
The research described in this thesis concerns the development of a new microfabrication method, Phase Separation Micro Molding (PSμM). While microfabrication is still best known from the semiconductor industry, where it is used to integrate electrical components on a chip, the scope has immensely expanded
Fathering After Marital Separation
Keshet, Harry Finkelstein; Rosenthal, Kristine M.
1978-01-01
Deals with experiences of a group of separated or divorced fathers who chose to remain fully involved in the upbringing of their children. As they underwent transition from married parenthood to single fatherhood, these men learned that meeting demands of child care contributed to personal stability and growth. (Author)
Fritz, P. [UFZ-Umweltforschungszentrum, Centre of Environmental Research Leipzig-Halle, Leipzig (Germany)
2000-07-01
Storm-runoff thus reflects the complex hydraulic behaviour of drainage basins and water-links of such systems. Water of different origin may participate in the events and in this lecture, the application of isotope techniques to separate storm hydrographs into different components will be presented.
Dabelsteen, Hans B.
This PhD thesis asks how we can conceptualize the current separation doctrine of religion and politics in a country like Denmark, where the structure of the established church and peoplehood overlap. In order to answer this question, Hans Bruun Dabelsteen maps the current discussion of secularism...
Acromioclavicular Joint Separations
2013-01-01
Published online: 16 December 2012, Springer Science+Business Media New York. Acromioclavicular (AC) joint separations are common injuries. The sports most likely to cause AC joint dislocations are football, soccer, hockey, rugby, and skiing, among others [9, 28, 29]. The major cause
1992-03-04
SEPARATEES: Defense Outplacement Referral System (DORS). Since most of us are not independently wealthy, we will need a job after separation. DORS is... Job Assistance. SPOUSES OF ALL SEPARATEES: As a spouse you may take advantage of the outplacement services... preparing Standard Form 171's and resumes
Bill of Rights in Action, 1987
1987-01-01
The dimensions of the separation of powers principle are explored through three lessons in the subject areas of U.S. history, U.S. government, and world history. In 1748, a French nobleman, Baron de Montesquieu, wrote a book called "The Spirit of the Laws," in which he argued that there could be no liberty when all government power was…
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d'Ornon Cedex (France)]
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^{-1}(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and maximum width are calculated from the lake polygon...
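For a convex basin polygon, maximum length reduces to the polygon diameter, the largest vertex-to-vertex distance, which a brute-force scan computes directly (a geometric sketch only, not the DNR's actual procedure; fetch additionally depends on a wind direction):

```python
import math

def polygon_diameter(vertices):
    """Maximum distance between any two vertices (polygon diameter).

    For a convex lake polygon this equals the basin's maximum length;
    the O(n^2) scan is fine for typical vertex counts.
    """
    best = 0.0
    for i, (x1, y1) in enumerate(vertices):
        for x2, y2 in vertices[i + 1:]:
            best = max(best, math.hypot(x2 - x1, y2 - y1))
    return best

square = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(polygon_diameter(square))  # diagonal of a 4x3 rectangle: 5.0
```

For non-convex shorelines the longest interior line need not join two vertices, so production tools clip candidate lines against the polygon; the vertex scan above is the convex-case core of that computation.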
Bush favours research at Pentagon and NIH
MacIlwain, C
2001-01-01
The first budget from George W. Bush increases funding for military research and the NIH, while environmental work is drastically cut. The rest of civilian science funding is essentially frozen (1 page).
The Pentagon is confident Saddam will be caught / Aadu Hiietamm
Hiietamm, Aadu, 1954-
2003-01-01
According to former Iraqi information minister Saud al-Sahaf, the death of Saddam's sons will not put an end to attacks on US soldiers. In the assessment of US Deputy Secretary of Defense Paul Wolfowitz, the US underestimated the danger in post-war Iraq.
The Pentagon seeks goods from Estonia / Kadri Paas
Paas, Kadri, 1982-
2010-01-01
The Pentagon's logistics arm, the Defence Logistic Agency (DLA), has approached the Estonian Defence Industry Association (Kaitsetööstuse Liit) to find out whether vegetables, drinking water, and construction materials for the troops fighting in Afghanistan could be purchased from Estonian companies.
Microgravity Passive Phase Separator
Paragano, Matthew; Indoe, William; Darmetko, Jeffrey
2012-01-01
A new invention disclosure discusses a structure and process for separating gas from liquids in microgravity. The Microgravity Passive Phase Separator consists of two concentric, pleated, woven stainless-steel screens (25-micrometer nominal pore) with an axial inlet and an annular outlet between both screens (see figure). Water enters at one end of the center screen at high velocity, eventually passing through the inner screen and out through the annular exit. As gas is introduced into the flow stream, the drag force exerted on a bubble pushes it downstream until flow stagnation, or until it reaches an equilibrium point between the surface tension holding the bubble to the screen and the drag force. Gas bubbles of a given size will form a front that is moved further down the length of the inner screen with increasing velocity. As more bubbles are added, the front location will remain fixed, but additional bubbles will move to the end of the unit, eventually coming to rest in the large cavity between the unit housing and the outer screen (storage area). Owing to the small size of the pores and the hydrophilic nature of the screen material, gas does not pass through the screen and is retained within the unit for emptying during ground processing. If debris is picked up on the screen, the area closest to the inlet will become clogged, so high-velocity flow will persist farther down the length of the center screen, pushing the bubble front further from the inlet of the inner screen. It is desired to keep the velocity high enough so that, for any bubble size, an area of clean screen exists between the bubbles and the debris. The primary benefits of this innovation are the lack of any need for additional power, strip gas, or a location for venting the separated gas. As the unit contains no membrane, the transport fluid will not be lost to evaporation in the process of gas separation. Separation is performed with relatively low pressure drop based on the large surface
Maximum entropy principle and texture formation
Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state (market equilibrium) is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
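The linear-time solution mentioned above is the classic left fold (Kadane's algorithm); a Python rendering of the specification, with the empty segment (sum 0) allowed as in the usual program-calculation treatment:

```python
def max_segment_sum(xs):
    """Linear-time maximum segment (contiguous sublist) sum.

    best_ending is the largest sum of a segment ending at the current
    element; the fold maintains it alongside the overall maximum.
    The empty segment contributes sum 0, so the result is never negative.
    """
    best = best_ending = 0
    for x in xs:
        best_ending = max(0, best_ending + x)  # extend or restart the segment
        best = max(best, best_ending)
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```

The cubic-time specification (maximize the sum over all O(n^2) segments, each summed in O(n)) and this fold compute the same function; the derivation of one from the other is exactly the calculational exercise the paper generalizes.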
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ0, μ1] for all t up to τ = inf{t > 0 : X_t ∉ (ℓ0, ℓ1)}. The optimal control switches between μ0 and μ1 across a switching curve s ↦ g∗(s) that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
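A pairwise maximum entropy model of this kind is fitted by matching the model's means and pairwise correlations to those of the data. The sketch below is a generic illustration on a 4-unit toy system, not the G7 data: the "true" parameters standing in for empirical statistics are invented. It uses exact enumeration of the 2^4 states and plain gradient ascent on the (concave) log-likelihood.

```python
import itertools, math, random

N = 4
STATES = list(itertools.product([-1, 1], repeat=N))

def moments(h, J):
    # exact means <s_i> and correlations <s_i s_j> under P ∝ exp(h·s + s·J·s)
    Z = 0.0
    m = [0.0] * N
    c = [[0.0] * N for _ in range(N)]
    for s in STATES:
        E = sum(h[i] * s[i] for i in range(N)) \
          + sum(J[i][j] * s[i] * s[j]
                for i in range(N) for j in range(i + 1, N))
        w = math.exp(E)
        Z += w
        for i in range(N):
            m[i] += w * s[i]
            for j in range(i + 1, N):
                c[i][j] += w * s[i] * s[j]
    return [x / Z for x in m], [[x / Z for x in row] for row in c]

# target statistics from an arbitrary "true" model (stand-in for real data)
random.seed(1)
h_true = [random.uniform(-0.5, 0.5) for _ in range(N)]
J_true = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J_true[i][j] = random.uniform(-0.3, 0.3)
m_t, c_t = moments(h_true, J_true)

# gradient ascent: the log-likelihood gradient is (data moment - model moment)
h = [0.0] * N
J = [[0.0] * N for _ in range(N)]
for _ in range(2000):
    m, c = moments(h, J)
    for i in range(N):
        h[i] += 0.5 * (m_t[i] - m[i])
        for j in range(i + 1, N):
            J[i][j] += 0.5 * (c_t[i][j] - c[i][j])
```

For seven binary units (the G7 case) the same exact enumeration over 2^7 states is still cheap; it is for large systems that sampling methods, and the higher-order terms the abstract mentions, become necessary.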
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_{μν} and a curvature tensor P^λ_{μνρ}. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though the objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features collected by constructing a codebook with a set of codewords for each pixel. We also describe how the trained models are used to discriminate target objects in surveillance video. Our experimental results are presented in terms of success rate and segmentation precision.
2016-01-01
Footage of the 70 degree ISOLDE GPS separator magnet MAG70 as well as the switchyard for the Central Mass and GLM (GPS Low Mass) and GHM (GPS High Mass) beamlines in the GPS separator zone. In the GPS20 vacuum sector, equipment such as the long GPS scanner 482/483 unit, Faraday cup FC490, vacuum valves, wiregrid pistons WG210 and WG475, and radiation monitors can also be seen. The RILIS laser guidance and trajectory are shown, along with the GPS main beamgate switch box, the actual GLM, GHM and Central Beamline beamgates in the beamlines, and the first electrostatic quadrupoles for the GPS lines. Close-ups show the GHM deflector plates motor and connections and the inspection glass at the GHM side of the switchyard.
2016-01-01
Footage of the 90 and 60 degree ISOLDE HRS separator magnets in the HRS separator zone. In the two vacuum sectors HRS20 and HRS30, equipment such as the HRS slits SL240, the HRS Faraday cup FC300 and wiregrid WG210 can be spotted. Vacuum valves, turbo pumps, beamlines, quadrupoles, water and compressed air connections, and DC and signal cabling can be seen throughout the video. The HRS main and user beamgate in the beamline between MAG90 and MAG60 and its switchboxes, as well as all vacuum bellows and flanges, are shown. Instrumentation such as the HRS scanner unit 482/483 and the HRS WG470 wiregrid and slits piston can be seen. The different quadrupoles and supports are shown, as well as the RILIS guidance tubes and installation at the magnets and the different radiation monitors.
Battery separator manufacturing process
Palmer, N.I.; Sugarman, N.
1974-12-27
A battery is described with a positive plate, a negative plate, and, positioned between the plates, a separator of polymeric resin having a degree of undesirable hydrophobicity, solid below 180 °F, extrudable as a hot melt, and resistant to degradation by at least one of acids and alkalies. The separator comprises a nonwoven mat of fibers, the fibers being composed of the polymeric resin and a wetting agent in an amount of 0.5 to 20 percent by weight based on the weight of the resin. The wetting agent is incompatible with the resin below the melting point of the resin, so that it will bloom over a period of time at ambient temperatures in a battery, yet is compatible with the resin at the extrusion temperature, blooming to the surface of the fibers when the fibers are subjected to heat and pressure.
Dabelsteen, Hans B.
Methodologically positioned between interpretive realism and policy analysis, Dabelsteen studies Danish secularism as an ideological concept. He finds that the conceptual structure of Danish secularism holds separation-as-principled distance at its core. Institutionally this particularly pertains to the establishment arrangement, and in practice it translates into the principle of treating everybody equally (with religious freedom, equality and Danish peoplehood as the most important principles adjacent to secularism). In a study of the historical roots of the separation doctrine and two current policy cases (same-sex marriage and reforms of church governance)… he proposes two conceptual expansions: the first is to include modest establishment in a framework of secularism defensible by political liberalism, and the second is to consider secularism in close connection to a theory of peoplehood.
Acoustophoresis separation method
Heyman, Joseph S. (Inventor)
1993-01-01
A method and apparatus are provided for acoustophoresis, i.e., the separation of species via acoustic waves. An ultrasonic transducer applies an acoustic wave to one end of a sample container containing at least two species having different acoustic absorptions. The wave has a frequency tuned to or harmonized with the point of resonance of the species to be separated. This wave causes the species to be driven to an opposite end of the sample container for removal. A second ultrasonic transducer may be provided to apply a second, oppositely directed acoustic wave to prevent undesired streaming. In addition, a radio frequency tuned to the mechanical resonance and coupled with a magnetic field can serve to identify a species in a medium comprising species with similar absorption coefficients, whereby an acoustic wave having a frequency corresponding to this gyrational rate can then be applied to sweep the identified species to one end of the container for removal.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
Twenty-five males participated in an investigation of the effects of plier grip spans on total grip force, individual finger forces and muscle activities in a maximum gripping task and in wire-cutting tasks. In the maximum gripping task, the 50-mm grip span yielded significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities were higher at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: 30.3%, 31.3% and 41.3% for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, the 50-mm grip span might be recommended for pliers, as it provides maximum exertion in gripping tasks as well as the lowest cutting-force-to-maximum-grip ratio in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Todd , Matthew H
2014-01-01
In one handy volume this handbook summarizes the most common synthetic methods for the separation of racemic mixtures, allowing an easy comparison of the different strategies described in the literature. Alongside classical methods, the authors also consider kinetic resolutions, dynamic kinetic resolutions, divergent reactions of a racemic mixture, and a number of "neglected" cases not covered elsewhere, such as the use of circularly polarized light, polymerizations, "ripening" processes, dynamic combinatorial chemistry, and several thermodynamic processes. The result is a thorough introdu…
Separation Logic and Concurrency
Bornat, Richard
Concurrent separation logic is a development of Hoare logic adapted to deal with pointers and concurrency. Since its inception, it has been enhanced with a treatment of permissions to enable sharing of data between threads, and a treatment of variables as resource alongside heap cells as resource. An introduction to the logic is given with several examples of proofs, culminating in a treatment of Simpson's 4-slot algorithm, an instance of racy non-blocking concurrency.
Innovative Separations Technologies
J. Tripp; N. Soelberg; R. Wigeland
2011-05-01
Reprocessing used nuclear fuel (UNF) is a multi-faceted problem involving chemistry, material properties, and engineering. Technology options are available to meet a variety of processing goals. A decision about which reprocessing method is best depends significantly on the process attributes considered to be a priority. New methods of reprocessing that could provide advantages over the aqueous Plutonium Uranium Reduction Extraction (PUREX) and Uranium Extraction + (UREX+) processes, electrochemical, and other approaches are under investigation in the Fuel Cycle Research and Development (FCR&D) Separations Campaign. In an attempt to develop a revolutionary approach to UNF recycle that may have more favorable characteristics than existing technologies, five innovative separations projects have been initiated. These include: (1) Nitrogen Trifluoride for UNF Processing; (2) Reactive Fluoride Gas (SF6) for UNF Processing; (3) Dry Head-end Nitration Processing; (4) Chlorination Processing of UNF; and (5) Enhanced Oxidation/Chlorination Processing of UNF. This report provides a description of the proposed processes, explores how they fit into the Modified Open Cycle (MOC) and Full Recycle (FR) fuel cycles, and identifies performance differences when compared to 'reference' advanced aqueous and fluoride volatility separations cases. To be able to highlight the key changes to the reference case, general background on advanced aqueous solvent extraction, advanced oxidative processes (e.g., volumetric oxidation, or 'voloxidation,' which is high temperature reaction of oxide UNF with oxygen, or modified using other oxidizing and reducing gases), and fluorination and chlorination processes is provided.
Colour Separation and Aversion
Sarah M Haigh
2012-05-01
Aversion to achromatic patterns is well documented but relatively little is known about discomfort from chromatic patterns. Large colour differences are uncommon in the natural environment and deviation from natural statistics makes images uncomfortable (Fernandez and Wilkins 2008, Perception, 37(7), 1098–1113; Juricevic et al 2010, Perception, 39(7), 884–899). We report twelve studies documenting a linear increase in aversion to chromatic square-wave gratings as a function of the separation in UCS chromaticity between the component bars, independent of their luminance contrast. Two possible explanations for the aversion were investigated: (1) accommodative response, or (2) cortical metabolic demand. We found no correlation between chromaticity separation and accommodative lag or variance in lag, measured using an open-field autorefractor. However, near-infrared spectroscopy of the occipital cortex revealed a larger oxyhaemoglobin response to patterns with large chromaticity separation. The aversion may be cortical in origin and does not appear to be due to accommodation.
Rem, P.C.; Bakker, M.C.M.; Berkhout, S.P.M.; Rahman, M.A.
2012-01-01
Eddy current separation apparatus (1) for separating particles (20) from a particle stream (w), wherein the apparatus (1) comprises a separator drum (4) adapted to create a first particle fraction (21) and a second particle fraction (23), a feeding device (2) upstream of the separator drum (4) for s
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 false Maximum creditable compensation. 211.14 ... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. ... The Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24 ... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
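The idea of scoring sequences against a model trained on typical data can be illustrated with a much simpler stand-in for MALCOM: a smoothed bigram (Markov) model over categorical sequences, in the spirit of the N-gram analysis the abstract compares against. The procedure codes below are entirely hypothetical.

```python
import math
from collections import defaultdict

# toy "medical history" sequences (hypothetical procedure codes)
train = [
    ["exam", "xray", "cast", "checkup"],
    ["exam", "labs", "rx", "checkup"],
    ["exam", "xray", "cast", "checkup"],
    ["exam", "labs", "rx", "checkup"],
]

# bigram counts with add-one smoothing over the observed vocabulary
vocab = {tok for seq in train for tok in seq} | {"<s>"}
big = defaultdict(int)
uni = defaultdict(int)
for seq in train:
    prev = "<s>"
    for tok in seq:
        big[(prev, tok)] += 1
        uni[prev] += 1
        prev = tok

def avg_log_lik(seq):
    # average per-token log-likelihood; low values flag anomalous sequences
    prev, total = "<s>", 0.0
    for tok in seq:
        p = (big[(prev, tok)] + 1) / (uni[prev] + len(vocab))
        total += math.log(p)
        prev = tok
    return total / len(seq)

typical = avg_log_lik(["exam", "xray", "cast", "checkup"])
odd = avg_log_lik(["cast", "cast", "cast", "cast"])
print(typical, odd)
```

A history made of transitions never seen in training scores far lower than a typical one; ranking providers by such scores is the anomaly-detection step described in the abstract.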
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
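A Poisson-likelihood line-flux fit with a fixed-point update can be sketched generically. The sketch below is not CORA itself: the spectrum, background level and line profile are invented, and the multiplicative update is the standard EM-style iteration for a Poisson "background plus scaled profile" model, whose fixed point is the maximum likelihood flux.

```python
import math, random

random.seed(7)

def poisson(lam):
    # Knuth's sampler; adequate for the small per-bin rates used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Hypothetical spectrum: flat background plus a Gaussian emission line
BINS = range(-10, 11)
profile = [math.exp(-x * x / 8.0) for x in BINS]   # line profile, sigma = 2 bins
norm = sum(profile)
profile = [g / norm for g in profile]              # normalized to sum to 1
BKG, FLUX = 5.0, 200.0                             # background per bin / total line counts
counts = [poisson(BKG + FLUX * g) for g in profile]

# Fixed-point (EM) iteration for the ML line flux A; since sum(g) = 1:
#   A <- sum_i n_i * A*g_i / (BKG + A*g_i)
# whose fixed point satisfies the Poisson likelihood equation dL/dA = 0.
A = 50.0
for _ in range(500):
    A = sum(n * A * g / (BKG + A * g) for n, g in zip(counts, profile))
print(round(A, 1))
```

Each iteration provably does not decrease the Poisson likelihood, so the update converges to the ML flux without any step-size tuning, which is the practical appeal of the fixed-point form.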
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. A second aspect of this work tackles the problem of accounting for the symmetries of circular arrangements. While generally a frame of reference is locked and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Maddock, A.G.; Smith, F.
1959-08-25
A method is described for separating plutonium from uranium and fission products by treating a nitrate solution of fission products, uranium, and hexavalent plutonium with a relatively water-insoluble fluoride to adsorb fission products on the fluoride, treating the residual solution with a reducing agent for plutonium to reduce its valence to four and less, treating the reduced plutonium solution with a relatively insoluble fluoride to adsorb the plutonium on the fluoride, removing the solution, and subsequently treating the fluoride with its adsorbed plutonium with a concentrated aqueous solution of at least one of a group consisting of aluminum nitrate, ferric nitrate, and manganous nitrate to remove the plutonium from the fluoride.
Karraker, D.G.
1959-07-14
A liquid-liquid extraction process is presented for the recovery of polonium from lead and bismuth. According to the invention an acidic aqueous chloride phase containing the polonium, lead, and bismuth values is contacted with a tributyl phosphate ether phase. The polonium preferentially enters the organic phase which is then separated and washed with an aqueous hydrochloric solution to remove any lead or bismuth which may also have been extracted. The now highly purified polonium in the organic phase may be transferred to an aqueous solution by extraction with aqueous nitric acid.
Beaufait, L.J. Jr.; Stevenson, F.R.; Rollefson, G.K.
1958-11-18
The recovery of plutonium ions from neutron-irradiated uranium can be accomplished by buffering an aqueous solution of the irradiated materials containing tetravalent plutonium to a pH of 4 to 7, adding sufficient acetate to the solution to complex the uranyl present, adding ferric nitrate to form a colloid of ferric hydroxide, plutonium, and associated fission products, removing and dissolving the colloid in aqueous nitric acid, oxidizing the plutonium to the hexavalent state by adding permanganate or dichromate, treating the resultant solution with ferric nitrate to form a colloid of ferric hydroxide and associated fission products, and separating the colloid from the plutonium left in solution.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM, including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Printed Spacecraft Separation System
Holmans, Walter [Planetary Systems Corporation, Silver Springs, MD (United States); Dehoff, Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-10-01
In this project Planetary Systems Corporation proposed utilizing additive manufacturing (3D printing) to manufacture a titanium spacecraft separation system for commercial and US government customers to realize a 90% reduction in cost and energy. These savings were demonstrated by "printing-in" many of the parts and sub-assemblies into one part, thus greatly reducing the labor associated with design, procurement, assembly and calibration of mechanisms. Planetary Systems Corporation redesigned several of the components of the separation system based on additive manufacturing principles, including geometric flexibility, the ability to fabricate complex designs, the ability to combine multiple parts of an assembly into a single component, and the ability to optimize a design for specific mechanical property targets. Shock absorption was specifically targeted, and requirements were established to attenuate damage to the Lightband system from the shock of initiation. Planetary Systems Corporation redesigned components based on these requirements and sent the designs to Oak Ridge National Laboratory to be printed. ORNL printed the parts using Arcam electron beam melting technology, the parts being fabricated from Ti-6Al-4V for its weight and mechanical performance. A second set of components was fabricated from stainless steel on Renishaw laser powder bed technology for its improved geometric accuracy, surface finish, and wear resistance. Planetary Systems Corporation evaluated these components and determined that 3D printing is potentially a viable method for achieving significant cost and energy savings.
Virus separation using membranes.
Grein, Tanja A; Michalsky, Ronald; Czermak, Peter
2014-01-01
Industrial manufacturing of cell culture-derived viruses or virus-like particles for gene therapy or vaccine production are complex multistep processes. In addition to the bioreactor, such processes require a multitude of downstream unit operations for product separation, concentration, or purification. Similarly, before a biopharmaceutical product can enter the market, removal or inactivation of potential viral contamination has to be demonstrated. Given the complexity of biological solutions and the high standards on composition and purity of biopharmaceuticals, downstream processing is the bottleneck in many biotechnological production trains. Membrane-based filtration can be an economically attractive and efficient technology for virus separation. Viral clearance, for instance, of up to seven orders of magnitude has been reported for state-of-the-art polymeric membranes under best conditions. This chapter summarizes the fundamentals of virus ultrafiltration, diafiltration, or purification with adsorptive membranes. In lieu of an impractical universally applicable protocol for virus filtration, application of these principles is demonstrated with two examples. The chapter provides detailed methods for production, concentration, purification, and removal of a rod-shaped baculovirus (Autographa californica M nucleopolyhedrovirus, about 40 × 300 nm in size, a potential vector for gene therapy, and an industrially important protein expression system) or a spherical parvovirus (minute virus of mice, 22-26 nm in size, a model virus for virus clearance validation studies).
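The "orders of magnitude" figure quoted for viral clearance is conventionally expressed as a log10 reduction value (LRV). A one-line helper, with invented titers, makes the arithmetic explicit:

```python
import math

def log_reduction(titer_in, titer_out):
    """Log10 reduction value (LRV) achieved by a clearance step."""
    return math.log10(titer_in / titer_out)

# e.g. 1e9 infectious units/mL in the feed and 1e2 in the filtrate give
# LRV 7, the "seven orders of magnitude" scale quoted for the best membranes
lrv = log_reduction(1e9, 1e2)
print(lrv)
```

LRVs of sequential orthogonal clearance steps are additive in validation studies, which is why each unit operation's LRV is reported separately.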
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
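The algebraic-geometry toolkit mentioned above (Gröbner bases, then solving the simplified system) is available in general symbolic algebra software. A toy sketch with SymPy, on a deliberately simple polynomial system rather than the actual comb likelihood equations:

```python
from sympy import symbols, groebner, solve

x, y = symbols('x y')

# Toy polynomial system standing in for the (much larger) likelihood
# equations: a circle intersected with a line.
system = [x**2 + y**2 - 1, x - y]

# A lex-order Groebner basis triangularizes the system, eliminating x.
G = groebner(system, x, y, order='lex')
print(G.exprs)   # e.g. [x - y, 2*y**2 - 1]

# The triangular form can then be solved variable by variable.
sols = solve(list(G.exprs), [x, y])
print(sols)      # two real solutions on the line x = y
```

For the comb, the paper's point is that the analogous triangular system has no radical solutions, so the final solving step must stay numeric or implicit.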
Plasma separation: physical separation at the molecular level
Gueroult, Renaud; Rax, Jean-Marcel; Fisch, Nathaniel J.
2016-09-01
Separation techniques are usually divided into two categories depending on the nature of the discriminating property: chemical or physical. Further to this difference, physical and chemical techniques differ in that chemical separation typically occurs at the molecular level, while physical separation techniques commonly operate at the macroscopic scale. Separation based on physical properties can in principle be realized at the molecular or even atomic scale by ionizing the mixture. This is in essence plasma-based separation. Due to this fundamental difference, plasma-based separation stands out from other separation techniques and features unique properties. In particular, plasma separation allows separating different elements or chemical compounds based on physical properties. This could prove extremely valuable for separating macroscopically homogeneous mixtures made of substances of similar chemical formulation. Yet, realizing the full potential of plasma separation techniques requires identifying and controlling basic mechanisms in complex plasmas which exhibit suitable separation properties. In this paper, we uncover the potential of plasma separation for various applications, and identify the key physics mechanisms upon which the development of these techniques hinges.
Particle separator scroll vanes
Lastrina, F. A.; Mayer, J. C.; Pommer, L. M.
1985-07-09
An inlet particle separator for a gas turbine engine is provided with unique vanes distributed around an entrance to a particle collection chamber. The vanes are uniquely constructed to direct extraneous particles that enter the engine into the collection chamber and prevent the particles from rebounding back into the engine's air flow stream. The vanes are provided with several features to accomplish this function, including upstream faces that are sharply angled towards air flow stream direction to cause particles to bounce towards the collection chamber. In addition, throat regions between the vanes cause a localized air flow acceleration and a focusing of the particles that aid in directing the particles in a proper direction.
Nebulized therapy. SEPAR year.
Olveira, Casilda; Muñoz, Ana; Domenech, Adolfo
2014-12-01
Inhaled drugs are deposited directly in the respiratory tract. They therefore achieve higher concentrations with faster onset of action and fewer side effects than when used systemically. Nebulized drugs are mainly recommended for patients that require high doses of bronchodilators, when they need to inhale drugs that only exist in this form (antibiotics or dornase alfa) or when they are unable to use other inhalation devices. Technological development in recent years has led to new devices that optimize pulmonary deposits and reduce the time needed for treatment. In this review we focus solely on drugs currently used, or under investigation, for nebulization in adult patients; basically bronchodilators, inhaled steroids, antibiotics, antifungals, mucolytics and others such as anticoagulants, prostanoids and lidocaine. Copyright © 2014 SEPAR. Published by Elsevier Espana. All rights reserved.
Block copolymer battery separator
Wong, David; Balsara, Nitash Pervez
2016-04-26
The invention herein described is the use of a block copolymer/homopolymer blend for creating nanoporous materials for transport applications. Specifically, this is demonstrated by using the block copolymer poly(styrene-block-ethylene-block-styrene) (SES) and blending it with homopolymer polystyrene (PS). After blending the polymers, a film is cast, and the film is submerged in tetrahydrofuran, which removes the PS. This creates a nanoporous polymer film, whereby the holes are lined with PS. Control of morphology of the system is achieved by manipulating the amount of PS added and the relative size of the PS added. The porous nature of these films was demonstrated by measuring the ionic conductivity in a traditional battery electrolyte, 1 M LiPF6 in EC/DEC (1:1 v/v), using AC impedance spectroscopy and comparing these results to commercially available battery separators.
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
[No author listed]
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Gravity separation for oil wastewater treatment
Golomeova, Mirjana; Zendelska, Afrodita; Krstev, Boris; Krstev, Aleksandar
2010-01-01
In this paper, the applications of gravity separation for oil wastewater treatment are presented. Described is operation on conventional gravity separation and parallel plate separation. Key words: gravity separation, oil, conventional gravity separation, parallel plate separation.
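Both conventional and parallel-plate gravity separators are commonly sized from the Stokes-regime rise velocity of the smallest oil droplet to be removed. A minimal sketch of that calculation (the formula is standard Stokes' law; the droplet size and fluid properties below are illustrative assumptions, not values from the paper):

```python
def stokes_rise_velocity(d: float, rho_w: float, rho_o: float,
                         mu: float, g: float = 9.81) -> float:
    """Terminal rise velocity (m/s) of an oil droplet in water in the
    Stokes regime: v = g * d^2 * (rho_w - rho_o) / (18 * mu).

    d: droplet diameter (m); rho_w, rho_o: water and oil densities
    (kg/m^3); mu: dynamic viscosity of water (Pa·s).
    """
    return g * d**2 * (rho_w - rho_o) / (18.0 * mu)

# Illustrative case: 150 µm droplet, water 998 kg/m^3, oil 890 kg/m^3,
# viscosity 1.0 mPa·s.
v = stokes_rise_velocity(d=150e-6, rho_w=998.0, rho_o=890.0, mu=1.0e-3)
print(f"rise velocity: {v * 1000:.2f} mm/s")
```

The separator's plan area is then chosen so the surface overflow rate stays below this rise velocity; parallel plates reduce the settling distance and hence the required footprint.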
Noise removal in multichannel image data by a parametric maximum noise fraction estimator
Conradsen, Knut; Ersbøll, Bjarne Kjær; Nielsen, Allan Aasbjerg
1991-01-01
Some approaches to noise removal in multispectral imagery are presented. The primary contribution of the present work is the establishment of several ways of estimating the noise covariance matrix from image data and a comparison of the noise separation performances. A case study with Landsat MSS data demonstrates that the principal components are not sorted correctly in terms of visual image quality, whereas the minimum/maximum autocorrelation factors and the maximum noise fractions (MAFs) are. A case study with Landsat TM data shows an ordering which is consistent with the spatial wavelength in the components. The case studies indicate that a better noise separation is attained when using more complex noise models than the simple model implied by MAF analysis.
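The MAF/MNF transform described above reduces to a generalized eigenproblem: estimate the noise covariance from differences between neighbouring pixels, then solve Σ_N w = λ Σ w so the components sort by noise fraction. A minimal sketch of this standard construction (synthetic data; function names and the neighbour-difference noise estimate are a common convention, not code from the paper):

```python
import numpy as np
from scipy.linalg import eigh

def mnf(X: np.ndarray):
    """Minimum/maximum noise fraction transform.

    X: (pixels, bands) array with rows in spatial order, so that
    neighbour differences give a crude noise covariance estimate.
    Returns the transformed components and their noise fractions
    (eigenvalues), sorted least-noisy first.
    """
    Xc = X - X.mean(axis=0)
    sigma = np.cov(Xc, rowvar=False)           # total covariance
    dX = np.diff(Xc, axis=0)                   # neighbour differences
    sigma_n = np.cov(dX, rowvar=False) / 2.0   # noise covariance estimate
    # Generalized eigenproblem: sigma_n w = lambda * sigma w
    noise_fracs, vecs = eigh(sigma_n, sigma)   # ascending noise fraction
    return X @ vecs, noise_fracs

# Synthetic 4-band image line: one smooth signal plus white noise.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=(500, 1)), axis=0) @ rng.normal(size=(1, 4))
X = signal + 0.1 * rng.normal(size=(500, 4))
comps, noise_fracs = mnf(X)
print(noise_fracs)  # small values first = least noisy components first
```

Unlike PCA, this ordering is by signal-to-noise rather than variance, which is exactly the sorting-by-image-quality property the abstract reports.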
[Effect of trypsin on the rat keratinocyte separation and subculture].
Ouyang, An-Li; Zhou, Yan; Hua, Ping; Tan, Wen-Song
2002-01-01
The effect of trypsin on the separation and subculture of keratinocytes was investigated in this work. It was found that when 0.25% trypsin was employed for 5 minutes to separate keratinocytes, the number of active keratinocytes and of cells capable of forming colonies was higher than under the other experimental conditions. The maximum attachment ratio of primary keratinocytes was obtained when skin tissues were treated with trypsin at a concentration of 0.05%. With increasing trypsin concentration, the attachment ratio, attachment rate constant, and colony forming efficiency all increased. Thus, a trypsin concentration of 0.25% is recommended for separating and subculturing keratinocytes.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Selectivity in capillary electrokinetic separations
de Zeeuw, R.A; de Jong, G.J.; Ensing, K
1999-01-01
This review gives a survey of selectivity modes in capillary electrophoresis separations in pharmaceutical analysis and bioanalysis. Despite the high efficiencies of these separation techniques, good selectivity is required to allow quantitation or identification of a particular analyte.
Physical Separation in the Workplace
Stea, Diego; Foss, Nicolai Juul; Holdt Christensen, Peter
2015-01-01
Physical separation is pervasive in organizations, and has powerful effects on employee motivation and organizational behaviors. However, research shows that workplace separation is characterized by a variety of tradeoffs, tensions, and challenges that lead to both positive and negative outcomes.
Determine separations process strategy decision
Slaathaug, E.J.
1996-01-01
This study provides a summary level comparative analysis of selected, top-level, waste treatment strategies. These strategies include No Separations, Separations (high-level/low-level separations), and Deferred Separations of the tank waste. These three strategies encompass the full range of viable processing alternatives based upon full retrieval of the tank wastes. The assumption of full retrieval of the tank wastes is a predecessor decision and will not be revisited in this study.
Composite separators and redox flow batteries based on porous separators
Li, Bin; Wei, Xiaoliang; Luo, Qingtao; Nie, Zimin; Wang, Wei; Sprenkle, Vincent L.
2016-01-12
Composite separators having a porous structure and including acid-stable, hydrophilic, inorganic particles enmeshed in a substantially fully fluorinated polyolefin matrix can be utilized in a number of applications. The inorganic particles can provide hydrophilic characteristics. The pores of the separator result in good selectivity and electrical conductivity. The fluorinated polymeric backbone can result in high chemical stability. Accordingly, one application of the composite separators is in redox flow batteries as low cost membranes. In such applications, the composite separator can also enable additional property-enhancing features compared to ion-exchange membranes. For example, simple capacity control can be achieved through hydraulic pressure by balancing the volumes of electrolyte on each side of the separator. While a porous separator can also allow for volume and pressure regulation, in RFBs that utilize corrosive and/or oxidizing compounds, the composite separators described herein are preferable for their robustness in the presence of such compounds.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
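The core statistical fact behind moving from linear to polynomial regression is that, under Gaussian noise, the maximum likelihood polynomial fit coincides with the least-squares fit. A toy numerical sketch of that equivalence (this illustrates the regression principle only, not the HMM mean-adaptation transforms of the paper):

```python
import numpy as np

# Under zero-mean Gaussian noise, maximizing the likelihood of a
# polynomial model is the same as minimizing squared error, so
# np.polyfit returns the ML coefficient estimates.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)
true_coeffs = [0.5, -0.3, 0.1]                 # a2, a1, a0
y = np.polyval(true_coeffs, x) + 0.01 * rng.normal(size=x.size)

coeffs = np.polyfit(x, y, deg=2)               # ML estimate of [a2, a1, a0]
print(coeffs)                                  # close to [0.5, -0.3, 0.1]
```

In the adaptation setting, the same idea is applied to acoustic model parameters: the linear MLLR transform is replaced by a low-degree polynomial, fitted by maximizing the likelihood of the adaptation data.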
Separating Underdetermined Convolutive Speech Mixtures
Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan
2006-01-01
We present a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable to the separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation techniques.
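For the simplest (determined, instantaneous) case that the framework also covers, blind source separation can be sketched as whitening followed by choosing the rotation that maximizes non-Gaussianity, here a brute-force search over a kurtosis-based contrast. This is a generic ICA-style toy, not the paper's underdetermined convolutive algorithm, and all names are my own:

```python
import numpy as np

def separate_two_sources(X: np.ndarray) -> np.ndarray:
    """Toy blind separation of two instantaneously mixed signals:
    whiten the mixtures, then grid-search the rotation angle that
    maximizes the sum of squared kurtoses (a non-Gaussianity contrast)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc        # whitened mixtures
    best_score, best_S = -np.inf, None
    for theta in np.linspace(0.0, np.pi / 2, 500):
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        S = R @ Z
        kurt = (S ** 4).mean(axis=1) - 3.0          # unit variance after whitening
        score = np.sum(kurt ** 2)
        if score > best_score:
            best_score, best_S = score, S
    return best_S

rng = np.random.default_rng(2)
s1 = np.sign(np.sin(np.linspace(0.0, 40.0, 5000)))  # square wave source
s2 = rng.uniform(-1.0, 1.0, 5000)                   # uniform noise source
A = np.array([[0.7, 0.3], [0.4, 0.6]])              # mixing matrix
S_hat = separate_two_sources(A @ np.vstack([s1, s2]))

# Each recovered row should correlate strongly with one true source
# (up to sign and permutation, the usual BSS ambiguities).
c = np.abs(np.corrcoef(np.vstack([S_hat, s1, s2]))[:2, 2:])
print(c.max(axis=1))
```

The underdetermined convolutive case of the paper needs more machinery (more sources than sensors, filters instead of scalars), but the whiten-then-optimize-a-contrast structure is the common backbone.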
General Motors sidestream separator
Tessier, R.J.
1981-01-01
On February 15, 1980, the United States Environmental Protection Agency, acting pursuant to Paragraph 113(D)(4) of the Clean Air Act, issued to General Motors an innovative technology order covering fifteen coal-fired spreader-stoker boilers located at six General Motors plants in Ohio. The purpose and effect of this order was to give General Motors time to develop a new, innovative technique for controlling particulate emissions from the specified boilers before compliance with the federally approved Ohio particulate control regulation was required. This new technology was christened "The Sidestream Separator" by General Motors. It provides a highly cost-effective means of reducing particulate emissions below levels currently obtainable with conventionally used high-efficiency mechanical collectors. These improvements could prove to be of substantial benefit to many industrial facilities with spreader-stoker coal-fired boilers that cannot be brought into compliance with applicable air pollution regulations except by application of far more expensive and unwieldy electrostatic precipitators (ESPs) or fabric filters (baghouses).
PARAFFIN SEPARATION VACUUM DISTILLATION
Zaid A. Abdulrahman
2013-05-01
Simulated column performance curves were constructed for the existing paraffin separation vacuum distillation column in the LAB plant (Arab Detergent Company, Baiji, Iraq). The variables considered in this study are the thermodynamic model option, top vacuum pressure, top and bottom temperatures, feed temperature, feed composition, and reflux ratio. Simulated column profiles for temperature, vapor and liquid flow rates, and composition were also constructed. Four different thermodynamic model options (SRK, TSRK, PR, and ESSO) were used, affecting the results within 1-25% variation in most cases. The simulated results show that about 2% to 8% of paraffin (C10, C11, C12, and C13) is present in the bottom stream, which may cause a problem in the LAB plant. The major variations were noticed for the top temperature and the paraffin weight fractions in the bottom section with top vacuum pressure. A bottom temperature above 240 °C is not recommended, because the total bottom flow rate decreases sharply whereas the weight fraction of paraffins decreases only slightly. The study gives evidence of a successful simulation with CHEMCAD.
An Investigation into Separation of Impurity from Saffron Stigma Using an Electrostatic Separator
H Mortezapour
2015-03-01
In the present study, a laboratory electrostatic separator was constructed and its potential for separating white saffron impurities from stigma was investigated. The device comprised a nylon ribbon which moves in contact with a woolen brush and is charged by the triboelectric effect. The charged ribbon then moved over the material pan. Since electrostatic behavior varies among materials, their attraction to the ribbon differs. The separation tests were conducted at three levels of ribbon position (1.5, 2.5 and 3.5 cm from the material pan), three drum speeds (50, 60 and 70 rpm) and three working times (120, 180 and 240 seconds). The results showed that material absorption increased as working time increased and as the ribbon distance decreased. Meanwhile, raising the speed from 50 to 60 rpm improved material absorption, while increasing it further from 60 to 70 rpm reduced the absorption. A maximum impurity separation of 97% was observed with a ribbon distance of 1.5 cm, a ribbon speed of 60 rpm, and a working time of 240 seconds. The minimum stigma losses were found to be about 2% when the ribbon distance and speed were 3.5 cm and 70 rpm, respectively, and the separator worked for 120 seconds.
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10 to 100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
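For a finite-state Markov model like the one above, the Kolmogorov–Sinai entropy being maximized has a closed form: h = -Σ_i π_i Σ_j P_ij log P_ij, where π is the stationary distribution. A minimal sketch of that computation on a toy two-state chain (not the Zero Range Process itself; names are my own):

```python
import numpy as np

def ks_entropy(P: np.ndarray) -> float:
    """Kolmogorov–Sinai entropy rate of an ergodic Markov chain with
    transition matrix P: h = -sum_i pi_i sum_j P_ij log P_ij,
    where pi is the stationary distribution (left eigenvector of P)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvalue 1
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)       # 0*log(0) := 0
    return float(-np.sum(pi[:, None] * P * logs))

# A symmetric fair two-state chain attains the maximum rate log(2).
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
print(ks_entropy(P))  # → log(2) ≈ 0.6931
```

In the paper's setting, both this quantity and the entropy production are evaluated along a one-parameter family of dynamics indexed by f, and their maximizers are compared.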
Identification and characterization of icosahedral metallic nanowires
Pelaez, Samuel; Serena, Pedro A. [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas, c/Sor Juana Ines de la Cruz 3, Cantoblanco, 28049-Madrid (Spain); Guerrero, Carlo [Departamento de Fisica, Facultad Experimental de Ciencias, La Universidad del Zulia, Maracaibo (Venezuela); Paredes, Ricardo [Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas, Apto. 20632, Caracas 1020A (Venezuela); Garcia-Mochales, Pedro [Departamento de Fisica de la Materia Condensada, Facultad de Ciencias, Universidad Autonoma de Madrid, c/Tomas y Valiente 7, Cantoblanco, 28049-Madrid (Spain)
2009-10-15
We present and discuss an algorithm to identify and characterize the long icosahedral structures (staggered pentagonal nanowires with 1-5-1-5 atomic structure) that appear in Molecular Dynamics simulations of metallic nanowires of different species subjected to stretching. The algorithm allows the identification of the pentagonal rings forming the icosahedral structure as well as the determination of their number n_p and the maximum length of the pentagonal nanowire L_p^m. The algorithm is tested on some ideal structures to show its ability to discriminate between pentagonal rings and other ring structures. We applied the algorithm to Ni nanowires at temperatures ranging between 4 K and 865 K, stretched along the [100] direction. We studied statistically the formation of pentagonal nanowires, obtaining the distributions of the length L_p^m and the number of rings n_p as functions of temperature. The L_p^m distribution presents a peaked shape, with peaks located at fixed distances whose separation corresponds to the distance between two consecutive pentagonal rings.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
20 CFR 617.14 (Employees' Benefits, 2010-04-01), Trade Readjustment Allowances (TRA) for workers under the Trade Act of 1974 — § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
40 CFR 94.107 (Protection of Environment, 2010-07-01) — Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test speed. Data points specified in 40 CFR 1065.510 form the lug curve; it is not necessary to generate the...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
14 CFR 25.1505 (Aeronautics and Space, 2010-01-01), Operating Limitations — § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear-time algorithms to check the isomorphism, respectively compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
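The isomorphism check underlying these subroutines is classically done with AHU-style canonical forms: encode each rooted tree bottom-up as a string, sorting the children's encodings, so two trees are isomorphic iff their encodings are equal. A minimal sketch for unlabeled rooted trees (MAST works on leaf-labeled trees, where the same idea applies with leaf labels folded into the encoding; this toy and its names are my own):

```python
def canonical_form(tree: tuple) -> str:
    """AHU-style canonical form of a rooted tree given as nested tuples,
    with () denoting a leaf. Two rooted trees are isomorphic iff their
    canonical forms are equal, regardless of child ordering."""
    if not tree:                       # leaf
        return "()"
    # Sort the children's encodings so sibling order does not matter.
    return "(" + "".join(sorted(canonical_form(c) for c in tree)) + ")"

t1 = ((), ((), ()))    # root with a leaf and a cherry
t2 = (((), ()), ())    # same tree, children listed in the other order
t3 = ((), (), ())      # different tree: root with three leaves

print(canonical_form(t1) == canonical_form(t2))  # → True
print(canonical_form(t1) == canonical_form(t3))  # → False
```

Each node's encoding is built in time proportional to its subtree, which is what makes a linear-time isomorphism test (and hence fast conflict detection) possible.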
Separation of non-ferrous metals from ASR by corona electrostatic separation
Kim, Yang-soo; Choi, Jin-Young; Jeon, Ho-Seok; Han, Oh-Hyung; Park, Chul-Hyun
2016-04-01
Automotive shredder residue (ASR), the residual fraction of approximately 25% obtained after dismantling and shredding of end-of-life vehicles, consists of polymers (plastics and rubber), metals (ferrous and non-ferrous), wood, glass and fluff (textile and fiber). ASR cannot be effectively separated due to its heterogeneous materials and coated or laminated complexes, and is therefore largely deposited in landfill sites as waste. Thus, to reduce pollutant release before disposal, techniques are needed that can improve the liberation of coated (or laminated) complexes and the recovery of valuable metals from the shredder residue. ASR may be separated by a series of physical processing operations such as comminution and air, magnetic and electrostatic separations. This work deals with the characterization of the shredder residue coming from an industrial plant in Korea and focuses on estimating the optimal conditions of corona electrostatic separation for improving the separation efficiency of valuable non-ferrous metals such as aluminum and copper. From the test results, the maximum separation achievable for non-ferrous metals using corona electrostatic separation has been shown to be a recovery of 92.5% at a grade of 75.8%. The recommended values of the process variables particle size, electrode potential, drum speed, splitter position and relative humidity are -6 mm, 50 kV, 35 rpm, 20°, and less than 40%, respectively. Acknowledgments: This study was supported by the R&D Center for Valuable Recycling (Global-Top R&BD Program) of the Ministry of Environment. (Project No. GT-11-C-01-170-0)
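The reported figures (92.5% recovery at 75.8% grade) combine two standard metallurgical accounting quantities: recovery, the fraction of the feed metal that reports to the concentrate, and grade, the metal fraction of the concentrate stream. A minimal sketch, with illustrative masses chosen to reproduce those numbers (the function and argument names are my own):

```python
def recovery_and_grade(metal_in_concentrate: float,
                       metal_in_feed: float,
                       concentrate_mass: float):
    """Recovery (%) = share of feed metal reporting to the concentrate;
    grade (%) = metal content of the concentrate stream."""
    recovery = 100.0 * metal_in_concentrate / metal_in_feed
    grade = 100.0 * metal_in_concentrate / concentrate_mass
    return recovery, grade

# Illustrative masses (kg) consistent with the reported result:
rec, gr = recovery_and_grade(metal_in_concentrate=9.25,
                             metal_in_feed=10.0,
                             concentrate_mass=9.25 / 0.758)
print(f"recovery {rec:.1f}%, grade {gr:.1f}%")  # → recovery 92.5%, grade 75.8%
```

The two metrics trade off against each other (a wider splitter raises recovery but dilutes grade), which is why the paper optimizes both jointly over the listed process variables.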
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Separators - Technology review: Ceramic based separators for secondary batteries
Nestler, Tina; Schmid, Robert; Münchgesang, Wolfram; Bazhenov, Vasilii; Schilm, Jochen; Leisegang, Tilmann; Meyer, Dirk C.
2014-06-01
Besides a continuous increase of the worldwide use of electricity, the electric energy storage technology market is a growing sector. Especially since the German energy transition ("Energiewende") was announced, technological solutions for the storage of renewable energy have been intensively studied. Storage technologies in various forms are commercially available. A widespread technology is the electrochemical cell. Here the cost per kWh, determined e.g. by energy density, production process and cycle life, is of main interest. Commonly, an electrochemical cell consists of an anode and a cathode that are separated by an ion-permeable or ion-conductive membrane — the separator — as one of the main components. Many applications use polymeric separators whose pores are filled with liquid electrolyte, providing high power densities. However, problems arise from different failure mechanisms during cell operation, which can affect the integrity and functionality of these separators. In the case of excessive heating or mechanical damage, polymeric separators become an incalculable security risk. Furthermore, the growth of metallic dendrites between the electrodes leads to unwanted short circuits. In order to minimize these risks, temperature-stable and non-flammable ceramic particles can be added, forming so-called composite separators. Full ceramic separators, in turn, are currently commercially used only for high-temperature operation systems, due to their comparably low ion conductivity at room temperature. However, as security and lifetime demands increase, these materials come into focus also for future room-temperature applications. Hence, growing research effort is being spent on improving the ion conductivity of these ceramic solid electrolyte materials, which act as separator and electrolyte at the same time. Starting with a short overview of available separator technologies and the separator market, this review focuses on ceramic-based separators.
Mathematical modelling of membrane separation
Vinther, Frank
This thesis concerns mathematical modelling of membrane separation. The thesis consists of introductory theory on membrane separation, equations of motion, and properties of dextran, which will be the solute species throughout the thesis. Furthermore, the thesis consists of three separate mathemat....... It is found that the probability of entering the pore is highest when the largest of the radii in the ellipse is equal to half the radius of the pore, in the case of molecules with circular radius less than the pore radius. The results are directly related to the macroscopic distribution coefficient...
Capillary Separation: Micellar Electrokinetic Chromatography
Terabe, Shigeru
2009-07-01
Micellar electrokinetic chromatography (MEKC), a separation mode of capillary electrophoresis (CE), has enabled the separation of electrically neutral analytes. MEKC can be performed by adding an ionic micelle to the running solution of CE without modifying the instrument. Its separation principle is based on the differential migration of the ionic micelles and the bulk running buffer under electrophoresis conditions and on the interaction between the analyte and the micelle. Hence, MEKC's separation principle is similar to that of chromatography. MEKC is a useful technique particularly for the separation of small molecules, both neutral and charged, and yields high-efficiency separation in a short time with minimum amounts of sample and reagents. To improve the concentration sensitivity of detection, several on-line sample preconcentration techniques such as sweeping have been developed.
Separable programming theory and methods
Stefanov, Stefan M
2001-01-01
In this book, the author considers separable programming and, in particular, one of its important cases: convex separable programming. Some general results are presented, and techniques of approximating the separable problem by linear programming and dynamic programming are considered. Convex separable programs subject to inequality/equality constraint(s) and bounds on variables are also studied, and iterative algorithms of polynomial complexity are proposed. As an application, these algorithms are used in the implementation of stochastic quasigradient methods for some separable stochastic programs. Numerical approximation with respect to the l1 and l∞ norms, as a convex separable nonsmooth unconstrained minimization problem, is considered as well. Audience: advanced undergraduate and graduate students, and mathematical programming/operations research specialists.
Separation process using microchannel technology
Tonkovich, Anna Lee; Perry, Steven T.; Arora, Ravi; Qiu, Dongming; Lamont, Michael Jay; Burwell, Deanna; Dritz, Terence Andrew; McDaniel, Jeffrey S.; Rogers, Jr.; William A.; Silva, Laura J.; Weidert, Daniel J.; Simmons, Wayne W.; Chadwell, G. Bradley
2009-03-24
The disclosed invention relates to a process and apparatus for separating a first fluid from a fluid mixture comprising the first fluid. The process comprises: (A) flowing the fluid mixture into a microchannel separator in contact with a sorption medium, the fluid mixture being maintained in the microchannel separator until at least part of the first fluid is sorbed by the sorption medium, and removing non-sorbed parts of the fluid mixture from the microchannel separator; and (B) desorbing first fluid from the sorption medium and removing desorbed first fluid from the microchannel separator. The process and apparatus are suitable for separating nitrogen or methane from a fluid mixture comprising nitrogen and methane. The process and apparatus may be used for rejecting nitrogen in the upgrading of sub-quality methane.
Wastewater treatment with acoustic separator
Kambayashi, Takuya; Saeki, Tomonori; Buchanan, Ian
2017-07-01
Acoustic separation is a filter-free wastewater treatment method based on the forces generated in ultrasonic standing waves. In this report, a batch-system separator based on acoustic separation was demonstrated using a small-scale prototype acoustic separator to remove suspended solids from oil sand process-affected water (OSPW). By applying the acoustic separator to batch OSPW treatment, the required settling time, i.e. the time for the chemical oxygen demand (COD) to decrease to the environmental criterion (<200 mg/L), could be shortened from 10 min to 1 min. Moreover, for a 10 min settling time, the acoustic separator could reduce the FeCl3 dose as coagulant in OSPW treatment from 500 to 160 mg/L.
Hereditary separability in Hausdorff continua
D. Daniel
2012-04-01
Full Text Available We consider those Hausdorff continua S such that each separable subspace of S is hereditarily separable. Due to results of Ostaszewski and Rudin, respectively, all monotonically normal spaces and therefore all continuous Hausdorff images of ordered compacta also have this property. Our study focuses on the structure of such spaces that also possess one of various rim properties, with emphasis given to rim-separability. In so doing we obtain analogues of results of M. Tuncali and I. Loncar, respectively.
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of k-limited maximum base was specified into two special problems of k-limited maximum base; that is, the subset D of the problem of k-limited maximum base was taken to be an independent set and a circuit of the matroid, respectively. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid was thereby determined, and the problem of k-limited maximum base was transformed into the problem of finding a maximum base of this new matroid. For this problem, two algorithms, which in essence are greedy algorithms based on the former matroid, were presented for the two special problems of k-limited maximum base. They were proved to be correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
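The smoothing device underlying such maximum entropy methods can be illustrated compactly. A minimal sketch (the function name and penalty parameter p are illustrative choices, not taken from the paper):

```python
import numpy as np

def max_entropy_fn(g_values, p=200.0):
    """Smooth maximum: F_p = (1/p) * log(sum_i exp(p * g_i)).
    Approximates max_i g_i from above with error at most log(m)/p,
    and is differentiable, unlike the max itself."""
    g = np.asarray(g_values, dtype=float)
    m = g.max()                       # shift by the true max for numerical stability
    return m + np.log(np.exp(p * (g - m)).sum()) / p

g = [1.0, 2.5, -0.3]
smooth_max = max_entropy_fn(g)        # close to max(g) = 2.5
```

Larger p tightens the approximation at the cost of numerical conditioning, which is one motivation for the interval extension and region-deletion tests discussed in the paper.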
Gas Separation in the Ranque-Hilsch Vortex tube
Linderstrøm-Lang, C. U.
1964-01-01
The gas separation taking place in the vortex tube is studied in detail. Both enrichment and depletion of a given component in either of the two resultant streams may take place; the sign of this separation effect depends on certain parameters, notably the hot-to-cold flow ratio. A comparison...... of the data shows how the pattern of the effect curve, i.e. the separation effect as a function of hot flow fraction, varies with constructional parameters. Among these, the ratio of the diameters of the two orifices through which the gas escapes from the tube is of paramount importance. Also their magnitude...... relative to the tube diameter has a distinct modifying effect. The separation ability as a function of the tube length has a maximum at quite short lengths, dependent, however, on the inlet jet diameter in such a way that an increase in this causes an increase in the optimal length. The conclusion......
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier results, were designed in this paper. The programming model for the maximum independent set then follows as a corollary of the main results. These two models can be easily applied in computer algorithms and software, and are suitable for graphs of any scale. Finally, the models are presented as Lingo algorithms, verified and compared on several examples.
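The standard 0-1 formulation behind such models (maximize the number of selected vertices subject to x_u + x_v <= 1 for every non-adjacent pair) can be sketched by brute-force enumeration of the feasible set; this is for illustration only and is not the paper's algorithm:

```python
from itertools import combinations

def max_clique_bruteforce(n, edges):
    """Exhaustively solve the 0-1 program: maximize sum(x_v) subject to
    x_u + x_v <= 1 for every non-adjacent pair {u, v}, x_v in {0, 1}.
    Feasible 0-1 vectors correspond exactly to cliques of the graph."""
    adj = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):                     # try largest sets first
        for subset in combinations(range(n), size):
            if all(frozenset((u, v)) in adj for u, v in combinations(subset, 2)):
                return list(subset)
    return []

# hypothetical 5-vertex graph: triangle {0, 1, 2} plus a path 2-3-4
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
clique = max_clique_bruteforce(5, edges)             # the triangle {0, 1, 2}
```

A real integer programming model would hand the same constraints to a solver; the enumeration above only demonstrates what the constraints encode.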
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we surveyed the development of maximum-entropy clustering algorithm, pointed out that the maximum-entropy clustering algorithm is not new in essence, and constructed two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may not converge to a local minimum of its objective function, but a saddle point. Based on these results, our paper shows that the convergence theorem of maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general cases.
Jan Werner
Full Text Available We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule
Werner, Jan; Griebeler, Eva Maria
2014-01-01
We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of
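The fixed-slope regression described above reduces to estimating one intercept per group: with the slope pinned at 0.75, each group's intercept is the mean residual of log10(rate) minus 0.75 log10(mass). A sketch with hypothetical toy data (not the paper's measurements; units are illustrative):

```python
import math

def group_intercepts(data, slope=0.75):
    """Per-group intercepts of log10(growth rate) = intercept + slope * log10(mass),
    with the slope fixed at the value predicted by the Metabolic Theory of Ecology.
    `data` maps group name -> list of (body_mass, max_growth_rate) pairs."""
    out = {}
    for group, pairs in data.items():
        residuals = [math.log10(r) - slope * math.log10(m) for m, r in pairs]
        out[group] = sum(residuals) / len(residuals)
    return out

# hypothetical toy data, chosen only to mimic an endotherm/ectotherm offset
data = {"reptiles": [(10.0, 0.5), (1000.0, 15.0)],
        "mammals":  [(10.0, 6.0), (1000.0, 180.0)]}
ic = group_intercepts(data)
ratio = 10 ** (ic["mammals"] - ic["reptiles"])  # mammal/reptile rate ratio at equal mass
```

The intercept difference on the log scale translates directly into the "times lower" factors quoted in the abstract.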
Microfluidic immunomagnetic separation for enhanced bacterial detection
Hoyland, James; Kunstmann-Olsen, Casper; Ahmed, Shakil
A lab-on-a-chip approach combining immunomagnetic separation (IMS) and flow cytometry was developed for the enrichment and detection of salmonella contamination in food samples. Immunomagnetic beads were immobilized in chips consisting of long fractal meanders while contaminated samples were flowed over them. After incubation the beads can be released for detection into the flow-cytometry chip. Immunomagnetic beads were prepared using anti-Salmonella antibodies and magnetic beads (2.8 μm). Both the synthesized and commercially available anti-Salmonella beads were used to capture...... to obtain maximum capturing efficiency. The effects of channel volume, path length and number of bends of the microfluidic chip on IMS efficiency were also determined.
Phase separation during radiation crosslinking of unsaturated polyester resin
Pucić, Irina; Ranogajec, Franjo
2003-06-01
Phase separation during radiation-initiated crosslinking of unsaturated polyester resin was studied. Residual reactivity of the liquid phases and gels of partially cured samples was determined by DSC. Uncured resin and liquid phases showed a double reaction exotherm; gels had a single maximum that corresponded to the higher-temperature maximum of the liquid parts. The lower-temperature process was attributed to styrene-polyester copolymerization. At higher temperatures, polyester unsaturations that remained unreacted due to microgel formation homopolymerized. FTIR revealed different compositions of the phases. In thicker samples, reaction heat influenced microgel formation, causing delayed appearance of the gel and a faster increase in conversion.
Parental separation and pediatric cancer
Grant, Sally; Carlsen, Kathrine; Bidstrup, Pernille Envold Hansen
2012-01-01
The purpose of this study was to determine the risk for separation (ending cohabitation) of the parents of a child with a diagnosis of cancer.
Fast Monaural Separation of Speech
Pontoppidan, Niels Henrik; Dyrholm, Mads
2003-01-01
a Factorial Hidden Markov Model, with non-stationary assumptions on the source autocorrelations modelled through the Factorial Hidden Markov Model, leads to separation in the monaural case. By extending Hansen's work we find that Roweis' assumptions are necessary for monaural speech separation. Furthermore we...
Metals Separation by Liquid Extraction.
Malmary, G.; And Others
1984-01-01
As part of a project focusing on techniques in industrial chemistry, students carry out experiments on separating copper from cobalt in chloride-containing aqueous solution by liquid extraction with triisooctylamine solvent, and search the literature on the separation process of these metals. These experiments and the literature research are…
Vision 2020: 2000 Separations Roadmap
Adler, Stephen [Center for Waster Reduction Technologies; Beaver, Earl [Practical Sustainability; Bryan, Paul [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Robinson, Sharon [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Watson, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2000-01-01
This report documents the results of four workshops on the technology barriers, research needs, and priorities of the chemical, agricultural, petroleum, and pharmaceutical industries as they relate to separation technologies utilizing adsorbents, crystallization, distillation, extraction, membranes, separative reactors, ion exchange, bioseparations, and dilute solutions.
Electrostatically enhanced core separator system
Easom, B.H.; Smolensky, L.A.; Altman, R.F. [LSR Technologies, Inc., Acton, MA (United States)
1997-12-31
The Electrostatically Enhanced Core Separator (EECS) system employs the same design principles as the mechanical Core Separator system plus an electrostatic separation enhancing technique. The EECS system contains a special type of separator, the EECS element, a conventional solids collector and means for flow recirculation. In the EECS system solids separation and collection are accomplished in two different components. The EECS element acts as a separator, not as a collector so particles are not collected on its walls. This eliminates or at least mitigates the problems associated with reentrainment (due to high or low dust resistivity), seepage (due to gas flow below the precipitator plates and over the hoppers), sneakage (due to gas flow both above and below the precipitator plates), and rapping reentrainment. If the EECS separation efficiency is high enough, particles cannot leave the system with the process stream. They recirculate until they are extracted by the collector. As a result, the separation efficiency of the EECS element determines the efficiency of the system, even if the collector efficiency is relatively low. 8 refs., 3 figs.
Relational Parametricity and Separation Logic
Birkedal, Lars; Yang, Hongseok
2008-01-01
Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational...... parametricity, and it provides a formal connection between separation logic and data abstraction. Publication date: 2008...
Testing Orion's Fairing Separation System
Martinez, Henry; Cloutier, Chris; Lemmon, Heber; Rakes, Daniel; Oldham, Joe; Schlagel, Keith
2014-01-01
Traditional fairing systems are designed to fully encapsulate and protect their payload from the harsh ascent environment including acoustic vibrations, aerodynamic forces and heating. The Orion fairing separation system performs this function and more by also sharing approximately half of the vehicle structural load during ascent. This load-share condition through launch and during jettison allows for a substantial increase in mass to orbit. A series of component-level development tests were completed to evaluate and characterize each component within Orion's unique fairing separation system. Two full-scale separation tests were performed to verify system-level functionality and provide verification data. This paper summarizes the fairing spring, Pyramidal Separation Mechanism and forward seal system component-level development tests, system-level separation tests, and lessons learned.
Novel blind source separation algorithm using Gaussian mixture density function
孔薇; 杨杰; 周越
2004-01-01
The blind source separation (BSS) is an important task for numerous applications in signal processing, communications and array processing. But for many complex sources blind separation algorithms are not efficient because the probability distribution of the sources cannot be estimated accurately. So in this paper, to justify the ME (maximum entropy) approach, the relation between the ME and the MMI (minimum mutual information) is elucidated first. Then a novel algorithm that uses a Gaussian mixture density to approximate the probability distribution of the sources is presented based on the ME approach. The experiment on the BSS of ship-radiated noise demonstrates that the proposed algorithm is valid and efficient.
When do evolutionary algorithms optimize separable functions in parallel?
Doerr, Benjamin; Sudholt, Dirk; Witt, Carsten
2013-01-01
is that evolutionary algorithms make progress on all subfunctions in parallel, so that optimizing a separable function does not take much longer than optimizing the hardest subfunction; subfunctions are optimized "in parallel." We show that this is only partially true, already for the simple (1+1) evolutionary...... algorithm ((1+1) EA). For separable functions composed of k Boolean functions the optimization time is indeed the maximum optimization time of these functions times a small O(log k) overhead. More generally, for sums of weighted subfunctions that each attain non-negative integer values less than r = o(log1...
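The (1+1) EA referred to above is simple enough to sketch. The fitness function below is an assumed example of a separable function (k = 4 independent OneMax blocks of 5 bits), not one from the paper:

```python
import random

def one_plus_one_ea(n, fitness, max_iters=50000, seed=1):
    """(1+1) EA: flip each bit independently with probability 1/n and keep
    the offspring if it is at least as fit as the parent (elitism)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard bit mutation
        if fitness(y) >= fitness(x):
            x = y
    return x

def separable_onemax(bits):
    """Separable fitness: sum of k = 4 OneMax subfunctions on disjoint blocks."""
    return sum(sum(bits[i:i + 5]) for i in range(0, 20, 5))

best = one_plus_one_ea(20, separable_onemax)
```

Because the subfunctions act on disjoint bits, a mutation improving one block never hurts another, which is the intuition behind the "parallel optimization" claim the paper qualifies.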
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
Development studies of thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Full Text Available Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated and it is shown that tangential retrofire impulse at the apogee results in the maximum entry angle. Equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
[No author listed]
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
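A minimal sketch of maximum-entropy (soft C-means) clustering consistent with this description. The parameter names, the fixed inverse temperature beta, the initialization, and the toy data are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

def max_entropy_clustering(X, k, beta=10.0, iters=50, init=None, seed=0):
    """Maximum-entropy clustering: memberships are Gibbs weights
    exp(-beta * ||x - c||^2) normalised per point; centers are the
    membership-weighted means. As beta -> infinity this recovers hard C-means."""
    rng = np.random.default_rng(seed)
    centers = (np.asarray(init, dtype=float) if init is not None
               else X[rng.choice(len(X), size=k, replace=False)])
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))  # stable softmax
        w /= w.sum(axis=1, keepdims=True)
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return centers, w

# two well-separated 2-D blobs; initial centers are rough guesses
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, w = max_entropy_clustering(X, k=2, init=[[1.0, 1.0], [4.0, 4.0]])
```

Annealing beta from small to large is the usual practice; the fixed beta here keeps the sketch short.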
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
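For the boundary case H = 1/2 (standard Brownian motion), the maximum loss sup over s <= t of (B_s - B_t) is easy to estimate by simulation. A sketch with illustrative discretization and sample sizes (the function name and parameters are assumptions, not the paper's notation):

```python
import math
import random

def max_loss_bm(n_steps=500, T=1.0, rng=None):
    """Maximum loss sup_{0 <= s <= t <= T} (B_s - B_t) of one discretized
    standard Brownian path, i.e. the largest drawdown from a running maximum."""
    rng = rng or random.Random()
    dt = T / n_steps
    b, running_max, loss = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        b += rng.gauss(0.0, math.sqrt(dt))
        running_max = max(running_max, b)
        loss = max(loss, running_max - b)
    return loss

rng = random.Random(42)
mean_loss = sum(max_loss_bm(rng=rng) for _ in range(2000)) / 2000
# for standard BM on [0, 1] the expected maximum loss is sqrt(pi/2) ~ 1.25;
# the discretized estimate lands somewhat below this
```

Fractional paths with H > 1/2 would require a correlated-increment generator (e.g. Cholesky or circulant embedding) in place of the independent Gaussian steps.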
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
Pseudo-stationary separation materials for highly parallel separations.
Singh, Anup K.; Palmer, Christopher (University of Montana, Missoula, MT)
2005-05-01
The goal of this study was to develop and characterize novel polymeric materials as pseudostationary phases in electrokinetic chromatography. Fundamental studies have characterized the chromatographic selectivity of the materials as a function of chemical structure and molecular conformation. The selectivities of the polymers have been studied extensively, resulting in a large body of fundamental knowledge regarding the performance and selectivity of polymeric pseudostationary phases. Two polymers have also been used for amino acid and peptide separations, and with laser-induced fluorescence detection. The polymers performed well for the separation of derivatized amino acids, and provided some significant differences in selectivity relative to a commonly used micellar pseudostationary phase. The polymers did not perform well for peptide separations. The polymers were compatible with laser-induced fluorescence detection, indicating that they should also be compatible with chip-based separations.
WASTE PRINTED CIRCUIT BOARDS SEPARATION IN ELECTROSTATIC SEPARATOR
Branimir Fuk
2012-12-01
Full Text Available Printed circuit boards from electronic waste are a very important source of precious metals for recycling. The biggest challenge is the liberation and separation of the useful components (the thin film containing copper, zinc, tin, lead and precious metals such as silver, gold and palladium) from the non-useful components (polymers, ceramics and glass fibres). The paper presents results for the separation of shredded printed circuit boards from TV sets in an electrostatic separator. Tests were conducted with material classes 2/1 and 1/0.5 mm in the laboratory on mineral processing equipment. Results showed the influence of the independent variables (separation knife gradient, drum rotation speed and voltage) on concentrate quality and recovery (the paper is published in Croatian).
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase of the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
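The forward model described (chromatogram = peak shape matrix times concentration vector) can be sketched as follows. The Gaussian peak shapes, positions, and the plain least-squares inversion are illustrative stand-ins for the paper's entropy-regularised reconstruction:

```python
import numpy as np

def peak_shape_matrix(n, centers, width):
    """Each column is a unit-height Gaussian peak shape; the observed
    chromatogram is this matrix times the concentration vector."""
    t = np.arange(n, dtype=float)[:, None]
    return np.exp(-0.5 * ((t - np.asarray(centers, dtype=float)[None, :]) / width) ** 2)

n = 100
S = peak_shape_matrix(n, centers=[40.0, 55.0], width=6.0)  # two overlapping peaks
c_true = np.array([1.0, 0.6])                              # component concentrations
y = S @ c_true                                             # overlapped chromatogram
# noiseless least-squares inversion recovers the concentrations exactly; with
# noise, a maximum entropy prior on c is what stabilises the reconstruction
c_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
```

The trade-off noted in the abstract shows up here as ill-conditioning of S when the peak centers approach each other relative to the peak width.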
Hydrogen isotope separation; Separation isotopique de l'hydrogene
Leterq, D.; Guidon, H. [CEA Bruyeres-le-Chatel, 91 (France)
2001-12-01
CEA-DAM has been performing hydrogen isotope separation by batch chromatography, with palladium coated on alumina as the absorbing material, for more than 20 years. The efforts have been focused on the development of two new separation processes: TCAP (Thermal Cycling Absorption Process) and chromatography on molecular sieve at 77 K. First H2/D2 test results are promising. (authors)
Particle separations by electrophoretic techniques
Ballou, N.E.; Petersen, S.L.; Ducatte, G.R.; Remcho, V.T.
1996-03-01
A new method for particle separations based on capillary electrophoresis has been developed and characterized. It uniquely separates particles according to their chemical nature. Separations have been demonstrated with chemically modified latex particles and with inorganic oxide and silicate particles. Separations have been shown both experimentally and theoretically to be essentially independent of particle size in the range of about 0.2 {mu}m to 10 {mu}m. The method has been applied to separations of UO{sub 2} particles from environmental particulate material. For this, an integrated method was developed for capillary electrophoretic separation, collection of separated fractions, and determination of UO{sub 2} and environmental particles in each fraction. Experimental runs with the integrated method on mixtures of UO{sub 2} particles and environmental particulate material demonstrated enrichment factors of 20 for UO{sub 2} particles with respect to environmental particles in the UO{sub 2}-containing fractions. This enrichment factor reduces the costs and time for processing particulate samples by the Lexan process by a factor of about 20.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
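The invariance that the weak maximum/minimum principles express can be illustrated numerically (a sketch with assumed parameters, not the paper's analysis): an explicit-Euler simulation of the lattice Nagumo equation with a small enough time step keeps solutions started in [0, 1] inside [0, 1].

```python
import numpy as np

# Lattice Nagumo equation with bistable nonlinearity f(u) = u (1 - u) (u - a):
#   u_x' = k (u_{x-1} - 2 u_x + u_{x+1}) + f(u_x),  x on a periodic lattice.
k, a, dt, steps = 1.0, 0.3, 0.05, 2000
rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, size=101)                 # random initial data in [0, 1]

def f(u):
    return u * (1.0 - u) * (u - a)                  # bistable reaction term

for _ in range(steps):
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)  # discrete Laplacian
    u = u + dt * (k * lap + f(u))                   # explicit Euler step

u_min, u_max = u.min(), u.max()   # stays within [0, 1] for this dt and k
```

The paper's point is visible here: the time step matters. For dt * (2k + max|f'|) small enough the scheme is monotone and the bounds survive; a large dt breaks them.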
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to and slightly higher than the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
Maximum efficiency of state-space models of nanoscale energy conversion devices.
Einax, Mario; Nitzan, Abraham
2016-07-07
The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate these general results with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Sadjadi, Firooz A; Mahalanobis, Abhijit
2006-05-01
We report the development of a technique for adaptive selection of the polarization ellipse tilt and ellipticity angles such that the target separation from clutter is maximized. From the radar scattering matrix [S] and its complex components, in-phase and quadrature-phase, the elements of the Mueller matrix are obtained. Then, by means of polarization synthesis, the radar cross sections of the radar scatterers are obtained at different transmitting and receiving polarization states. By designing a maximum average correlation height filter, we derive a target-versus-clutter distance measure as a function of four transmit and receive polarization state angles. The results of applying this method to real synthetic aperture radar imagery indicate a set of four transmit and receive angles that lead to maximum target-versus-clutter discrimination. These optimum angles differ for different targets. Hence, by adaptive control of the state of polarization of a polarimetric radar, one can noticeably improve the discrimination of targets from clutter.
Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in
2016-09-07
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
Efficient separations & processing crosscutting program
NONE
1996-08-01
The Efficient Separations and Processing Crosscutting Program (ESP) was created in 1991 to identify, develop, and perfect chemical and physical separations technologies and chemical processes which treat wastes and address environmental problems throughout the DOE complex. The ESP funds several multiyear tasks that address high-priority waste remediation problems involving high-level, low-level, transuranic, hazardous, and mixed (radioactive and hazardous) wastes. The ESP supports applied research and development (R & D) leading to the demonstration or use of these separations technologies by other organizations within the Department of Energy (DOE), Office of Environmental Management.
Maximum entropy as a consequence of Bayes' theorem in differentiable manifolds
Davis, Sergio
2015-01-01
Bayesian inference and the principle of maximum entropy (PME) are usually presented as separate but complementary branches of inference, the latter playing a central role in the foundations of Statistical Mechanics. In this work it is shown that the PME can be derived from Bayes' theorem and the divergence theorem for systems whose states can be mapped to points in a differentiable manifold. In this view, entropy must be interpreted as the invariant measure (non-informative prior) on the space of probability densities.
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
Lunar Soil Particle Separator Project
National Aeronautics and Space Administration — The Lunar Soil Particle Separator (LSPS) is an innovative method to beneficiate soil prior to in-situ resource utilization (ISRU). The LSPS improves ISRU oxygen...
Selective Photoinitiated Electrophoretic Separator Project
National Aeronautics and Space Administration — To address NASA Johnson Space Center needs for gas separation and collection technology for lunar in-situ resource utilization, Physical Optics Corporation (POC)...
Magnetic separation in microfluidic systems
Smistrup, Kristian
2007-01-01
This Ph.D. thesis presents theory, modeling, design, fabrication, experiments and results for microfluidic magnetic separators. A model for magnetic bead movement in a microfluidic channel is presented, and the limits of the model are discussed. The effective magnetic field gradient is defined...... for fabrication of silicon based systems. This fabrication scheme is explained, and it is shown how, it is applied with variations for several designs of magnetic separators. An experimental setup for magnetic separation experiments has been developed. It has been coupled with an image analysis program....... It is shown conceptually how such a system can be applied for parallel biochemical processing in a microfluidic system. ’Passive’ magnetic separators are presented, where on-chip soft magnetic elements are magnetized by an external magnetic field and create strong magnetic fields and gradients inside...
Chiral separation of agricultural fungicides.
Pérez-Fernández, Virginia; García, Maria Ángeles; Marina, Maria Luisa
2011-09-23
Fungicides are a diverse group of species of great environmental and agricultural concern. Their determination in commercial formulations or environmental matrices requires highly efficient, selective and sensitive methods. A significant number of these chemicals are chiral, with the activity usually residing in one of the enantiomers. The different toxicological and degradation behavior observed in many cases for fungicide enantiomers results in the need to investigate them separately. For this purpose, separation techniques such as GC, HPLC, supercritical fluid chromatography (SFC) and CE have been widely employed, although, at present, HPLC still dominates chromatographic chiral analysis of fungicides. This review covers the literature concerning the enantiomeric separation of fungicides usually employed in agriculture, grouping the chiral separation methodologies developed for their analysis in environmental, biological, and food samples.
Lunar Soil Particle Separator Project
National Aeronautics and Space Administration — The Lunar Soil Particle Separator (LSPS) is an innovative method to beneficiate soil prior to in-situ resource utilization (ISRU). The LSPS can improve ISRU oxygen...
Separators for Lithium Ion Batteries
G.C.Li; H.P.Zhang; Y.P.Wu
2007-01-01
1 Results A separator for rechargeable batteries is a microporous membrane placed between electrodes of opposite polarity, keeping them apart to prevent electrical short circuits while allowing rapid transport of the lithium ions needed to complete the circuit during the passage of current in an electrochemical cell; it thus plays a key role in determining the performance of the lithium ion battery. Here we provide a comprehensive overview of various types of separators for lithium io...
Lithium isotope separation by laser
Arisawa, T.; Maruyama, Y.; Suzuki, Y.; Shiba, K.
1982-01-01
A lithium isotope separation was performed using a laser isotope separation method. It was found that lithium atoms with natural isotopic abundance were enriched in {sup 6}Li up to over 90% by tuning the laser wavelength to the {sup 2}P{sub 1/2} line of {sup 6}Li. Too high a power, however, leads to a loss of enrichment due to the power broadening effect, which was analysed via the equations of motion of the density matrices.
Burke, Kenneth Alan; Fisher, Caleb; Newman, Paul
2010-01-01
The main product of a typical fuel cell is water, and many fuel-cell configurations use the flow of excess gases (i.e., gases not consumed by the reaction) to drive the resultant water out of the cell. This two-phase mixture then exits through an exhaust port where the two fluids must again be separated to prevent the fuel cell from flooding and to facilitate the reutilization of both fluids. The Glenn Research Center (GRC) has designed, built, and tested an innovative fuel-cell water separator that not only removes liquid water from a fuel cell's exhaust ports, but does so with no moving parts or other power-consuming components. Instead it employs the potential and kinetic energies already present in the moving exhaust flow. In addition, the geometry of the separator is explicitly intended to be integrated into a fuel-cell stack, providing a direct mate with the fuel cell's existing flow ports. The separator is also fully scalable, allowing it to accommodate a wide range of water removal requirements. Multiple separators can simply be "stacked" in series or parallel to adapt to the water production/removal rate. GRC's separator accomplishes the task of water removal by coupling a high-aspect-ratio flow chamber with a highly hydrophilic, polyethersulfone membrane. The hydrophilic membrane readily absorbs and transports the liquid water away from the mixture while simultaneously resisting gas penetration. The expansive flow path maximizes the interaction of the water particles with the membrane while minimizing the overall gas flow restriction. In essence, each fluid takes its corresponding path of least resistance, and the two fluids are effectively separated. The GRC fuel-cell water separator has a broad range of applications, including commercial hydrogen-air fuel cells currently being considered for power generation in automobiles.
Development of Radiochemical Separation Technology
Lee, Eil Hee; Kim, K. W.; Yang, H. B. (and others)
2007-06-15
This project of the second phase was aimed at the development of basic unit technologies for advanced partitioning, and at application tests of pre-developed partitioning technologies for the separation of actinides using a simulated multi-component radioactive waste containing Am, Np, Tc, U and so on. The goals for the recovery yield of TRU and for the purity of Tc are higher than 99% and about 99%, respectively. The work scope and contents were as follows. 1). For the development of basic unit technologies for advanced partitioning: 1. Development of technologies for co-removal of TRU and for mutual separation of U and TRU with a reduction-complexation reaction. 2. Development of an extraction system for high-acidity co-separation of An(+3) and Ln(+3) and its radiolytic evaluation. 3. Synthesis of extractants for the selective separation of An(+3) and development of the relevant extraction system. 4. Development of a hybrid system for the recovery of noble metals and its continuous separation tests. 5. Development of an electrolytic system for the decomposition of N-NO3 and N-NH3 compounds to nitrogen gas. 2). For the application test of pre-developed partitioning technologies for the separation of actinide elements in a simulated multi-component solution equivalent to HLW level: 1. Co-separation of Tc, Np and U by a (TBP-TOA)/NDD system. 2. Mutual separation of Am, Cm and RE elements by a (Zr-DEHPA)/NDD system. All results will be used as fundamental data for the development of an advanced partitioning process in the future.
Balestrini, S.J.
1981-07-01
The ion optics of an existing mass separator are documented. The electrostatic and magnetic stages are analyzed theoretically, both separately and in combination, paying particular attention to the ion trajectories, the linear and angular magnifications, and the dispersion. The possibility of converting the magnet into a tunable unit by means of current-carrying elements in the gap is demonstrated. The feasibility of correction coils constructed from printed circuit board is shown.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
无
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least square methods.
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
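The cutoff-to-field inference in this abstract is a one-line computation: the electron cyclotron frequency is f_ce = eB/(2π m_e), so the quoted 38-40 MHz cutoff fixes the field strength at the emission site. A back-of-envelope sketch (not from the paper):

```python
import math

# Electron cyclotron frequency f_ce = e * B / (2 * pi * m_e); invert for B
# given the observed cutoff frequency. Constants in SI units.
e_charge = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31          # electron mass, kg

def field_for_cutoff(f_hz):
    """Magnetic field (tesla) whose electron gyrofrequency equals f_hz."""
    return 2.0 * math.pi * m_e * f_hz / e_charge

b_low = field_for_cutoff(38e6)   # cutoff at 38 MHz: about 1.36e-3 T (~13.6 G)
b_high = field_for_cutoff(40e6)  # cutoff at 40 MHz
```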
Separation mechanisms and fluid flow in oil/water separation
Celius, H.K.; Knudsen, B. [IKU Petroleumsforskning A/S, Trondheim (Norway); Hafskjold, B.; Hansen, E.W. [Selskapet for Industriell og Teknisk Forskning, Trondheim (Norway)
1996-12-31
This paper describes work aimed at physical and numerical modeling of separation rates of oil/water systems in order to establish better design and operation tools for offshore operators. The work aims to integrate the chemical and physical phenomena behind coalescence and settling with those of fluid flow in the system, in order to develop tools for design and operational analysis of separation equipment. The work includes the development of a high-pressure, bench-scale test rig to perform separation tests on live oil and water samples, and a rationale in the form of a computer code that can be used to interpret the test results and transform them to a form suitable for operational purposes. This involves formulating a mathematical description of the chemical and physical mechanisms behind the emulsification and separation process, and establishing a link to the hydrodynamic properties of the separator vessel. The Emucol computer program is used in the analysis. 12 refs., 5 figs.
2015-08-01
When the data are quasi-completely separated, the traditional maximum likelihood estimation (MLE) method generates infinite estimates. The bias-reduction (BR) method, which is a variant of the bias-correction method, removes the first-order bias term by applying a modified score function, and it always produces finite estimates.
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on IR N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for a maximum principle to hold for a cooperative elliptic system on the whole space IR{sup N}. Moreover, we prove the existence of solutions for the considered system by an approximation method.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
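The core MIP operation, and the max-commutation property that makes pyramid-based progressive refinement possible, can be sketched in a few lines. This is an illustrative sketch only: the paper's morphological adjunction pyramids are more general than the simple 2x2 max-pooling used here.

```python
import numpy as np

# Maximum intensity projection (MIP): keep the maximum voxel value along the
# view axis for each ray.
rng = np.random.default_rng(0)
volume = rng.random((4, 4, 4))                 # toy (z, y, x) volume

mip = volume.max(axis=0)                       # project along z

def maxpool2(img):
    """2x2 max-downsampling of a 2-D array with even side lengths."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# With max-based (adjunction-style) coarsening, projecting then coarsening
# equals coarsening each slice then projecting, because maxima commute.
coarse_of_projection = maxpool2(mip)
projection_of_coarse = np.stack(
    [maxpool2(volume[z]) for z in range(volume.shape[0])]
).max(axis=0)
```

This exact commutation is what lets a renderer display a coarse MIP first and refine it progressively without ever overestimating or underestimating the final image.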
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. We introduce the maximum entropy approach in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. Material Degrees C. Degrees F. Capacitors (1) (1) Class... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where, Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation. ..... data sets that differ considerably in the magnitude.
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik,
2014-01-01
"Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, it is a photovoltaic system that uses the photovoltaic array as a source of electrical power supply. Since every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage, a maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a photovoltaic (PV) array of 121.6 V DC voltage (6 cells, each 20 V, 100 W) and convert the DC voltage to single-phase 120 V, 50 Hz AC voltage by switch-mode power converters and inverters.
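A common MPPT strategy, and possibly the one used in this thesis although the abstract does not say, is perturb-and-observe. A minimal sketch on a toy power-voltage curve, with all parameter values assumed:

```python
# Toy P(V) curve with a single maximum, standing in for a PV array whose
# current collapses as the voltage approaches open circuit (V_oc ~ 22 V here).
def pv_power(v):
    i = 5.0 * (1.0 - (v / 22.0) ** 8)        # illustrative I(V) characteristic
    return max(v * max(i, 0.0), 0.0)

def mppt_perturb_observe(v=10.0, dv=0.2, steps=300):
    """Climb the P(V) curve: keep perturbing in the same direction while
    power rises, and reverse the perturbation when power falls."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv                  # perturb the operating voltage
        p = pv_power(v)                      # observe the resulting power
        if p < p_prev:
            direction = -direction           # overshot the peak: turn around
        p_prev = p
    return v

v_mpp = mppt_perturb_observe()               # settles near the true MPP
```

The controller never knows the curve analytically; it simply oscillates in a small band around the maximum power point, tracking it as insolation shifts the curve.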
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao;
2014-01-01
This paper is devoted to the study and analysis of maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR ... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
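A two-component normal mixture is typically fitted by maximum likelihood via the EM algorithm. A minimal sketch on synthetic data follows; the paper's actual data and fitting details are not reproduced here, so all values below are illustrative assumptions.

```python
import numpy as np

# Synthetic data from two well-separated normal components.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

# EM iterations for maximum likelihood estimation of a 2-component mixture.
w = np.array([0.5, 0.5])                      # mixing weights
mu = np.array([-1.0, 1.0])                    # component means (initial guess)
sig = np.array([1.0, 1.0])                    # component standard deviations
for _ in range(200):
    # E-step: responsibility of each component for each observation
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

mu_lo, mu_hi = np.sort(mu)   # recovered means, close to the true -2 and 3
```

Each EM iteration increases the likelihood, which is the sense in which this procedure delivers the maximum likelihood fit the abstract relies on.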
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system with respect to the single-motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
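The maximum likelihood baseline against which the note compares is simple in the exponential case: maximizing the log-likelihood n·log λ − λ·Σx_i gives λ̂ = n/Σx_i, the reciprocal of the sample mean. A sketch with synthetic data:

```python
import numpy as np

# Exponential density p(x) = lam * exp(-lam * x). The log-likelihood
# n * log(lam) - lam * sum(x) is maximized at lam_hat = n / sum(x).
rng = np.random.default_rng(42)
lam_true = 2.0
x = rng.exponential(scale=1.0 / lam_true, size=10_000)

lam_hat = 1.0 / x.mean()      # MLE: reciprocal of the sample mean
```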
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of applications of maximum entropy production to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise to date.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamical factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic, and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflow from Central Africa and South America peaks in May and is directly responsible for the O3 maximum over the western ESIO, while lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anticyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anticyclones and then transported northward to the ESIO.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
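The single-tone case gives a concrete picture of why the ML cost is multimodal. For one complex sinusoid in white Gaussian noise, the ML frequency estimate is the global peak of the periodogram, which local optimization cannot reliably find; a dense grid search is the simplest remedy. A minimal sketch (the tone, noise level, and grid are hypothetical, not taken from the paper):

```python
import cmath, math, random

def periodogram_peak(x, grid):
    """Grid search for the periodogram maximizer.  For a single complex
    sinusoid in white Gaussian noise, the ML frequency estimate is the
    global peak of |sum_n x[n] * e^{-j 2 pi f n}|^2, a multimodal cost
    that rules out simple local optimization."""
    best_f, best_p = grid[0], -1.0
    for f in grid:
        s = sum(x[n] * cmath.exp(-2j * math.pi * f * n) for n in range(len(x)))
        if abs(s) ** 2 > best_p:
            best_f, best_p = f, abs(s) ** 2
    return best_f

random.seed(1)
N, f0 = 64, 0.21                      # hypothetical tone, not from the paper
x = [cmath.exp(2j * math.pi * f0 * n)
     + 0.05 * complex(random.gauss(0, 1), random.gauss(0, 1))
     for n in range(N)]
grid = [k / 1000.0 for k in range(500)]   # frequencies in [0, 0.5)
f_hat = periodogram_peak(x, grid)
print(abs(f_hat - f0) < 1e-2)
```

The convex relaxation techniques studied in the paper aim to avoid exactly this kind of exhaustive search.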
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed based on the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum-entropy solution of the model is exactly the frequency distribution of a population at Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the maximum entropy distribution is equivalent to the Hardy-Weinberg equilibrium distribution at one locus. They further assumed that the maximum entropy distribution is equivalent to all genetic equilibrium distributions. This, however, is incorrect: the maximum entropy distribution is equivalent to the Hardy-Weinberg equilibrium distribution only with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
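The one-locus statement can be checked numerically. A minimal sketch, not the authors' proof: it treats the two gamete draws as ordered cells (AA, Aa, aA, aa), the convention under which the genotype entropy maximum reproduces Hardy-Weinberg proportions, and scans the heterozygote frequency at a fixed allele frequency:

```python
import math

def entropy(ps):
    # Shannon entropy, skipping zero cells
    return -sum(p * math.log(p) for p in ps if p > 0)

def max_entropy_heterozygosity(p, steps=20000):
    """Scan the heterozygote frequency h at fixed allele frequency p.
    Ordered genotype cells: AA = p - h/2, Aa = aA = h/2, aa = q - h/2.
    The entropy maximum should land at the Hardy-Weinberg value 2p(1-p)."""
    q = 1.0 - p
    h_cap = 2.0 * min(p, q)          # feasibility bound on h
    best_h, best_H = 0.0, -1.0
    for k in range(1, steps):
        h = h_cap * k / steps
        cells = (p - h / 2.0, h / 2.0, h / 2.0, q - h / 2.0)
        H = entropy(cells)
        if H > best_H:
            best_h, best_H = h, H
    return best_h

p = 0.3
h_star = max_entropy_heterozygosity(p)
print(abs(h_star - 2 * p * (1 - p)) < 1e-3)   # Hardy-Weinberg: 2pq = 0.42
```

Setting the derivative of the four-cell entropy to zero gives (h/2)^2 = p_AA * p_aa, which is satisfied exactly by the Hardy-Weinberg proportions p^2, 2pq, q^2.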
Membrane manufacture for peptide separations
Kim, Dooli
2016-06-07
Nanostructured polymeric membranes are key tools in biomedical applications such as hemodialysis and protein separations, in the food industry, and in drinking water supply from seawater. Despite this success in different separation processes, membrane manufacture itself is at risk, since the most widely used solvents are about to be banned in many countries due to environmental and health issues. We propose for the first time the preparation of polyethersulfone membranes based on dissolution in the ionic liquid 1-ethyl-3-methylimidazolium dimethylphosphate ([EMIM]DEP). We obtained a series of membranes tailored for separation of solutes with molecular weights of 30, 5, 1.3, and 1.25 kg mol-1, with respective water permeances of 140, 65, 30, and 20 L m-2 h-1 bar-1. We demonstrate their superior efficiency in the separation of complex mixtures of peptides with molecular weights in the range of 800 to 3500 g mol-1. Furthermore, the thermodynamics and kinetics of the phase separation leading to pore formation in the membranes were investigated. The rheology of the solutions and the morphology of the prepared membranes were examined and compared to those of polyethersulfone in the organic solvents currently used for membrane manufacture.
Linearly and Quadratically Separable Classifiers Using Adaptive Approach
Mohamed Abdel-Kawy Mohamed Ali Soliman; Rasha M. Abo-Bakr
2011-01-01
This paper presents a fast adaptive iterative algorithm to solve linearly separable classification problems in Rn. In each iteration, a subset of the sampling data (n points, where n is the number of features) is adaptively chosen and a hyperplane is constructed such that it separates the chosen n points at a margin e and best classifies the remaining points. The classification problem is formulated and the details of the algorithm are presented. Further, the algorithm is extended to solving quadratically separable classification problems. The basic idea is to map the physical space to another, larger one where the problem becomes linearly separable. Numerical illustrations show that few iteration steps are sufficient for convergence when classes are linearly separable. For nonlinearly separable data, given a specified maximum number of iteration steps, the algorithm returns the best hyperplane that minimizes the number of misclassified points occurring through these steps. Comparisons with other machine learning algorithms on practical and benchmark datasets are also presented, showing the performance of the proposed algorithm.
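For readers wanting a concrete baseline for linearly separable classification, here is a standard perceptron sketch. This is not the paper's adaptive n-point algorithm, only the classical method it improves on, and the 2-D data are hypothetical:

```python
import random

def perceptron(points, labels, epochs=1000):
    """Standard perceptron: on each mistake, add y*x to the weights.
    Converges in finitely many updates when the data are linearly
    separable with a positive margin."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(points, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:          # a clean pass: training has converged
            break
    return w, b

random.seed(2)
# hypothetical 2-D data, separable by the line x0 + x1 = 0 with margin 0.1
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
pts = [p for p in pts if abs(p[0] + p[1]) > 0.1]
labs = [1 if p[0] + p[1] > 0 else -1 for p in pts]
w, b = perceptron(pts, labs)
mis = sum(1 for p, y in zip(pts, labs)
          if y * (w[0] * p[0] + w[1] * p[1] + b) <= 0)
print(mis)
```

The paper's algorithm differs in choosing an n-point subset per iteration and fitting a margin hyperplane to it, rather than updating on one misclassified point at a time.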
Blind source separation based on generalized gaussian model
YANG Bin; KONG Wei; ZHOU Yue
2007-01-01
Since in most blind source separation (BSS) algorithms the estimates of the probability density functions (pdfs) of the sources are fixed, or can only switch between one super-Gaussian and one sub-Gaussian model, such algorithms may not separate sources with different distributions efficiently. To solve the problem of pdf mismatch and the separation of hybrid mixtures in BSS, the generalized Gaussian model (GGM) is introduced to model the pdfs of the sources, since it provides a general structure for univariate distributions. Its great advantage is that only one parameter needs to be determined when modeling the pdf of each source, so it is less complex than a Gaussian mixture model. By using a maximum likelihood (ML) approach, the convergence of the proposed algorithm is improved. Computer simulations show that it is more efficient and valid than conventional methods with fixed pdf estimation.
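The generalized Gaussian density is p(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x|/alpha)^beta): its single shape parameter beta spans Laplacian (beta = 1), Gaussian (beta = 2), and sub-Gaussian (beta > 2) shapes, which is what lets one model cover both source families. A short sketch that evaluates this density and checks its normalization numerically (the parameter values are illustrative only):

```python
import math

def ggm_pdf(x, alpha, beta):
    """Generalized Gaussian density
    p(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta):
    beta=1 is Laplacian, beta=2 Gaussian, beta>2 sub-Gaussian."""
    c = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return c * math.exp(-((abs(x) / alpha) ** beta))

# numeric check: each shape integrates to 1 (trapezoid rule on [-20, 20])
areas = []
for beta in (1.0, 2.0, 4.0):
    n = 40000
    xs = [-20.0 + 40.0 * k / n for k in range(n + 1)]
    ys = [ggm_pdf(x, 1.0, beta) for x in xs]
    areas.append(sum((ys[i] + ys[i + 1]) / 2.0 for i in range(n)) * (40.0 / n))
print([round(a, 4) for a in areas])
```

In the ML setting of the paper, beta is the one parameter estimated per source to adapt the score function to that source's distribution.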
The Signal Space Separation method
Taulu, S; Simola, J; Taulu, Samu; Kajola, Matti; Simola, Juha
2004-01-01
Multichannel measurement with hundreds of channels essentially covers all measurable degrees of freedom of a curl and source free vector field, like the magnetic field in a volume free of current sources (e.g. in magnetoencephalography, MEG). A functional expansion solution of Laplace's equation enables one to separate signals arising from the sphere enclosing the interesting sources, e.g. the currents in the brain, from the rest of the signals. The signal space separation (SSS) is accomplished by calculating individual basis vectors for each term of the functional expansion solution to create a signal basis covering all measurable signal vectors. Any signal vector has a unique SSS decomposition with separate coefficients for the interesting signals and signals coming from outside the interesting volume. Thus, SSS basis provides an elegant method to remove external disturbances, and to transform the interesting signals to virtual sensor configurations. SSS can also be used in compensating the movements of the...
Rare Earth Separation in China
Anonymous
2006-01-01
During the last decade, China rare earth (RE) industry has made significant progress and become one of the most important producers in the world. In this paper, the recent developments in both fundamental research and industrial application are briefly reviewed: (1) the development and application of Theory of Countercurrent Extraction, (2) the novel solvent extraction process and its application in industry for separating heavy rare earth elements (Tm, Yb, Lu), yttrium (Y), and scandium (Sc), (3) the on-line analysis and automatic control of countercurrent extraction, (4) the eco-friendly process for RE/Th separation of bastnasite in Sichuan Province and electrochemical process for Eu/RE separation, and (5) the optimized flowcharts for typical rare earth minerals in China.
FU Zhi-liang; WANG Su-hua; GAO Yan-fa
2008-01-01
A new experiment on the development of bed separations and mining subsidence at the Tangshan T2192 working face was made by equivalent-materials simulation. The overburden deformation and the development of bed separations with working face advance were simulated by a new model. The results show that the maximum value of bed separation moved forward gradually as the working face advanced, and that this maximum is 0.31-0.50 times the mining thickness. The key strata have a great influence on surface subsidence during the overburden movement process. The mechanics parameters of the new experiment fit the field results well.
Thermographic Detection of separated Flow
Dollinger, C.; Balaresque, N.; Schaffarczyk, A. P.; Fischer, A.
2016-09-01
Thermographic wind tunnel measurements, on both a cylinder and a 2D airfoil, were performed at various Reynolds numbers in order to evaluate the possibility of detecting and visualizing separated flow areas. A new approach of acquiring a series of thermographic images and applying a spatial-temporal statistical analysis improves both the resolution and the information content of the thermographic images. Separated flow regions become visible and laminar/turbulent transitions can be detected more accurately. Knowledge of possibly present stall cells can be used to confirm two-dimensional flow conditions and to support the development of more effective and quieter rotor blades.
Separation processes, I: Azeotropic rectification
Milojević Svetomir
2005-01-01
In a series of two articles, the problems of azeotrope separation (Part I) and the design of separation units (Part II) are analyzed. The basic definitions and equations of vapour-liquid equilibria for ideal and non-ideal systems, the importance of the activity coefficient calculation necessary for the analysis of non-ideal equilibrium systems, as well as theoretical aspects of azeotropic rectification and the determination of the optimal third component (modifier or azeotropic agent), are presented in this first part.
Separable metrics and radiating stars
G Z ABEBE; S D MAHARAJ
2017-01-01
We study the junction condition relating the pressure to heat flux at the boundary of an accelerating and expanding spherically symmetric radiating star. We transform the junction condition to an ordinary differential equation by making a separability assumption on the metric functions in the space–time variables. The condition of separability on the metric functions yields several new exact solutions. A class of shear-free models is found which contains a linear equation of state and generalizes a previously obtained model. Four new shearing models are obtained; all the gravitational potentials can be written explicitly. A brief physical analysis indicates that the matter variables are well behaved.
SOFC and Gas Separation Membranes
Hagen, Anke; Hendriksen, Peter Vang; Søgaard, Martin
2009-01-01
from air. Subsequent separation and sequestration of CO2 is therefore easier on an SOFC plant than on conventional power plants based on combustion. Oxide-ion-conducting materials may be used for gas separation purposes with close to 100% selectivity. They typically work in the same temperature range as SOFCs. Such membranes can potentially be used in Oxyfuel processes as well as in IGCC (Integrated Gasification Combined Cycle) power plants for the supply of process oxygen, which may reduce the cost of carbon capture and storage, as dilution of the flue gas with nitrogen is avoided. Both technologies are very...
Barriers in Concurrent Separation Logic
Hobor, Aquinas; Gherghina, Cristian
We develop and prove sound a concurrent separation logic for Pthreads-style barriers. Although Pthreads barriers are widely used in systems, and separation logic is widely used for verification, there has not been any effort to combine the two. Unlike locks and critical sections, Pthreads barriers enable simultaneous resource redistribution between multiple threads and are inherently stateful, leading to significant complications in the design of the logic and its soundness proof. We show how our logic can be applied to a specific example program in a modular way. Our proofs are machine-checked in Coq.
Convolutive Blind Source Separation Methods
Pedersen, Michael Syskind; Larsen, Jan; Kjems, Ulrik
2008-01-01
During the past decades, much attention has been given to the separation of mixed sources, in particular for the blind case where both the sources and the mixing process are unknown and only recordings of the mixtures are available. In several situations it is desirable to recover all sources from... This may help practitioners and researchers new to the area of convolutive source separation obtain a complete overview of the field. Hopefully those with more experience in the field can identify useful tools, or find inspiration for new algorithms.
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds - Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds - Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum pressure was reached at a concentration of 450 g/m3, with a value of 7.95 bar. The fastest rise of pressure was also observed at a concentration of 450 g/m3, with a value of 68 bar/s.
The maximum single dose of resistant maltodextrin that does not cause diarrhea in humans.
Kishimoto, Yuka; Kanahori, Sumiko; Sakano, Katsuhisa; Ebihara, Shukuko
2013-01-01
The objective of the present study was to determine the maximum dose of resistant maltodextrin (Fibersol-2, a non-viscous water-soluble dietary fiber) that does not induce transitory diarrhea. Ten healthy adult subjects (5 men and 5 women) ingested Fibersol-2 at increasing dose levels of 0.7, 0.8, 0.9, 1.0, and 1.1 g/kg body weight (bw). Each administration was separated from the previous dose by an interval of 1 wk. The highest dose level that did not cause diarrhea in any subject was regarded as the maximum non-effective level for a single dose. The results showed that no subject of either sex experienced diarrhea at dose levels of 0.7, 0.8, 0.9, or 1.0 g/kg bw. At the highest dose level of 1.1 g/kg bw, no female subject experienced diarrhea, whereas 1 male subject developed diarrhea with muddy stools 2 h after ingestion of the test substance. Consequently, the maximum non-effective level for a single dose of the resistant maltodextrin Fibersol-2 is 1.0 g/kg bw for men and >1.1 g/kg bw for women. Gastrointestinal symptoms were gurgling sounds in 4 subjects (7 events) and flatus in 5 subjects (9 events), although no association with dose level was observed. These symptoms were mild and transient and resolved without treatment.
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
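The series-versus-individual tracking gap can be illustrated with a toy linear TEG model, where each module obeys V_i = Voc_i - I * R_i. The module values below are hypothetical, not the paper's measurements, and the toy model ignores the temperature dependence a real TEG would have:

```python
def p_max_individual(modules):
    """Each module (Voc, R) at its own maximum power point:
    P_i = Voc_i^2 / (4 R_i), summed over modules."""
    return sum(voc ** 2 / (4.0 * r) for voc, r in modules)

def p_max_series(modules):
    """A series string forces one common current, so total power
    P(I) = I*sum(Voc) - I^2*sum(R) peaks at I = sum(Voc) / (2 sum(R))."""
    v_total = sum(voc for voc, _ in modules)
    r_total = sum(r for _, r in modules)
    return v_total ** 2 / (4.0 * r_total)

# hypothetical mismatched string (e.g. different hot-side temperatures)
modules = [(4.0, 1.0), (3.0, 1.2), (2.0, 0.8)]
p_ind = p_max_individual(modules)
p_ser = p_max_series(modules)
print(round(p_ind, 3), round(p_ser, 3))
```

With matched modules the two figures coincide; any mismatch makes individual tracking strictly better, which is the effect the paper quantifies at 8.4% over series connection for its application.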
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may in principle reach the Carnot efficiency.
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total, 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age of 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years.
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form, together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Magnetic separation apparatus and methods
Tibbe, Arjan; Scholtens, Tycho M.; Terstappen, Leon W.M.M.
2010-01-01
Apparatuses and methods for separating, immobilizing, and quantifying biological substances from within a fluid medium. Biological substances are observed by employing a vessel (6) having a chamber therein, the vessel comprising a transparent collection wall (5). A high internal gradient magnetic ca
Separating Device for solid Particles
De Jong, T.P.R.; Kattentidt, H.U.R.; Schokker, E.A.
2001-01-01
The invention relates to a separating device for solid fragments, comprising a conveyor belt for supplying the fragments, at least one sensor for detecting the fragments, and an ejector for dislodging the fragments from the belt. The ejector is embodied as mechanical impulse-transmitting organ opera
Operation of Electromagnetic Isotope Separator
MI; Ya-jing
2015-01-01
In 2015, we mainly completed the installation of the electromagnetic isotope separator comprehensive technical transformation project, including installation, debugging, commissioning, and acceptance work. On June 30, 2015, according to the schedule requirements, the project
33rd Actinide Separations Conference
McDonald, L M; Wilk, P A
2009-05-04
Welcome to the 33rd Actinide Separations Conference hosted this year by the Lawrence Livermore National Laboratory. This annual conference is centered on the idea of networking and communication with scientists from throughout the United States, Britain, France and Japan who have expertise in nuclear material processing. This conference forum provides an excellent opportunity for bringing together experts in the fields of chemistry, nuclear and chemical engineering, and actinide processing to present and discuss experiences, research results, testing and application of actinide separation processes. The exchange of information that will take place between you and other subject-matter experts from around the nation and across international boundaries is a critical tool to assist in solving both national and international problems associated with the processing of nuclear materials used for both defense and energy purposes, as well as for the safe disposition of excess nuclear material. Granlibakken is a dedicated conference facility and training campus that is set up to provide the venue that supports communication between scientists and engineers attending the 33rd Actinide Separations Conference. We believe that you will find that Granlibakken and the Lake Tahoe views provide an atmosphere that is stimulating for fruitful discussions between participants from both government and private industry. We thank the Lawrence Livermore National Laboratory and the United States Department of Energy for their support of this conference. We especially thank you, the participants and subject matter experts, for your involvement in the 33rd Actinide Separations Conference.
Working inside an electrostatic separator
1980-01-01
Separators of this type, with electrodes 2 m in length and a field of 100 kV/cm, were still in use for secondary beams in the East Hall at the PS. Michel Zahnd is in the foreground, left, and Pierre Simon in the background, right.
Development of Separator for Soybeans
Vries, de H.C.P.; Rijpma, P.J.; Owaa, J.S.E.
1997-01-01
A simple and effective separator for soybeans was developed for small-scale farmers in Uganda, to clean the seeds from foreign material, chaff, broken beans etc. as demanded by local and world markets. It will help to avoid losses during post-harvest time and to reduce human drudgery of cleaning the
Separation technology 2005; Separasjonsteknologi 2005
NONE
2005-07-01
The conference comprises 13 presentations on aspects of separation technology, with emphasis on technology assessment. Topics of particular interest include emulsion stabilization, sand technology and handling, water handling and reservoir injection, technical equipment, and compression and pressure aspects.
Separating proteins with activated carbon.
Stone, Matthew T; Kozlov, Mikhail
2014-07-15
Activated carbon is applied to separate proteins based on differences in their size and effective charge. Three guidelines are suggested for the efficient separation of proteins with activated carbon. (1) Activated carbon can be used to efficiently remove smaller proteinaceous impurities from larger proteins. (2) Smaller proteinaceous impurities are most efficiently removed at a solution pH close to the impurity's isoelectric point, where they have a minimal effective charge. (3) The most efficient recovery of a small protein from activated carbon occurs at a solution pH further away from the protein's isoelectric point, where it is strongly charged. Studies measuring the binding capacities of individual polymers and proteins were used to develop these three guidelines, and they were then applied to the separation of several different protein mixtures. The ability of activated carbon to separate proteins was demonstrated to be broadly applicable with three different types of activated carbon by both static treatment and by flowing through a packed column of activated carbon.
Gas Separations using Ceramic Membranes
Paul KT Liu
2005-01-13
This project has been oriented toward the development of a commercially viable ceramic membrane for high-temperature gas separations. A technically and commercially viable high-temperature gas separation membrane and process has been developed under this project. Lab and field tests have demonstrated the operational stability, in both performance and material, of the gas separation thin film deposited upon the ceramic membrane developed. This performance reliability is built upon the ceramic membrane developed under this project as a substrate for elevated-temperature operation. A comprehensive product development approach has been taken to produce an economically viable ceramic substrate, gas-selective thin film, and the module required to house the innovative membranes for elevated-temperature operation. Field tests have been performed to demonstrate the technical and commercial viability of (i) energy and water recovery from boiler flue gases, and (ii) hydrogen recovery from refinery waste streams using the membrane/module product developed under this project. An active commercialization effort, teaming with key industrial OEMs and end users, is currently underway for these applications. In addition, the gas separation membrane developed under this project has demonstrated its economic viability for CO2 removal from subquality natural gas and landfill gas, although performance stability at elevated temperature remains to be confirmed in the field.
Magnetic separation for environmental remediation
Schake, A.R.; Avens, L.R.; Hill, D.D.; Padilla, D.D.; Prenger, F.C.; Romero, D.A.; Worl, L.A. [Los Alamos National Lab., NM (United States); Tolt, T.L. [Lockheed Environmental Systems and Technologies Co., Las Vegas, NV (United States)
1994-11-01
High Gradient Magnetic Separation (HGMS) is a form of magnetic separation used to separate solids from other solids, liquids or gases. HGMS uses large magnetic field gradients to separate ferromagnetic and paramagnetic particles from diamagnetic host materials. The technology relies only on physical properties, and therefore separations can be achieved while producing a minimum of secondary waste. Actinide and fission product wastes within the DOE weapons complex pose challenging problems for environmental remediation. Because the majority of actinide complexes and many fission products are paramagnetic, while most host materials are diamagnetic, HGMS can be used to concentrate the contaminants into a low volume waste stream. The authors are currently developing HGMS for applications to soil decontamination, liquid waste treatment, underground storage tank waste treatment, and actinide chemical processing residue concentration. Application of HGMS usually involves passing a slurry of the contaminated mixture through a magnetized volume. Field gradients are produced in the magnetized volume by a ferromagnetic matrix material, such as steel wool, expanded metal, iron shot, or nickel foam. The matrix fibers become trapping sites for ferromagnetic and paramagnetic particles in the host material. The particles with a positive susceptibility are attracted toward an increasing magnetic field gradient and can be extracted from diamagnetic particles, which react in the opposite direction, moving away from the areas of high field gradients. The extracted paramagnetic contaminants are flushed from the matrix fibers when the magnetic field is reduced to zero or when the matrix canister is removed from the magnetic field. Results are discussed for the removal of uranium trioxide from water, PuO{sub 2}, U, and Pu from various soils (Fernald, Nevada Test Site), and the waste water treatment of Pu and Am isotopes using HGMS.
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique, which is found to be successful for forecasting solar activity. Considering the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of the annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than that of cycles 21–22. Further, we have estimated the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7, and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results for finding balanced Boolean functions with maximum algebraic immunity. By swapping the values of two bits of the symmetric Boolean function constructed by Dalai, and then generalizing the result to swapping several pairs of bits, a new class of Boolean functions with maximum algebraic immunity is constructed. An enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions of the form p(x)+q(x) which achieve maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
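The criterion used throughout this abstract can be made concrete with a small brute-force check: AI(f) is the least degree d such that f or f+1 admits a nonzero annihilator of degree at most d, which reduces to a rank computation over GF(2). A minimal sketch (function names are ours, not from the paper; only practical for small n):

```python
from itertools import combinations, product

def gf2_rank(rows, ncols):
    """Rank over GF(2) by Gaussian elimination on lists of bits."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def algebraic_immunity(f, n):
    """AI of a Boolean function f: {0,1}^n -> {0,1}.

    A nonzero g of degree <= d annihilates f iff g vanishes on the support
    of f, i.e. the evaluation matrix (support points x monomials) has a
    nontrivial nullspace over GF(2).
    """
    points = list(product([0, 1], repeat=n))
    for d in range(n + 1):
        # monomials of degree <= d, each encoded as a subset of variable indices
        monomials = [s for r in range(d + 1) for s in combinations(range(n), r)]
        for target in (1, 0):  # annihilators of f, then of f + 1
            support = [x for x in points if f(x) == target]
            rows = [[int(all(x[i] for i in s)) for s in monomials] for x in support]
            if gf2_rank(rows, len(monomials)) < len(monomials):
                return d  # nontrivial nullspace: an annihilator of degree d exists
    return n

# The majority function on 3 variables attains the maximum AI = ceil(3/2) = 2.
maj3 = lambda x: int(sum(x) >= 2)
print(algebraic_immunity(maj3, 3))  # -> 2
```

The symmetric majority-type functions discussed in the paper attain the maximum value ⌈n/2⌉, which this check confirms for toy sizes.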
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
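For readers who want to reproduce the flavor of this comparison, Burg's recursion is compact enough to sketch in NumPy. This is a generic illustration of the method, not the authors' code; function and variable names are ours:

```python
import numpy as np

def arburg(x, order):
    """Burg's method: fit an AR(order) model to x.

    Returns ([1, a1, ..., ap], E): the prediction polynomial coefficients
    and the final prediction-error power driving the maximum-entropy PSD.
    """
    x = np.asarray(x, dtype=float)
    a = np.zeros(order)
    E = np.dot(x, x) / x.size
    ef, eb = x[1:].copy(), x[:-1].copy()   # forward / backward prediction errors
    for m in range(order):
        # reflection coefficient minimizing the summed forward+backward error power
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        a_prev = a[:m].copy()
        a[:m] = a_prev + k * a_prev[::-1]  # Levinson-Durbin order update
        a[m] = k
        E *= 1.0 - k * k
        ef, eb = ef[1:] + k * eb[1:], eb[:-1] + k * ef[:-1]
    return np.concatenate(([1.0], a)), E

def mem_psd(x, order, nfft=1024):
    """Maximum-entropy PSD estimate: E / |A(e^{j 2 pi f})|^2 on an nfft grid."""
    a, E = arburg(x, order)
    A = np.fft.rfft(a, nfft)
    return np.fft.rfftfreq(nfft), E / np.abs(A) ** 2
```

The point the abstract makes shows up directly here: for a sharp spectral line, a low-order MEM fit to a short record places a peak as precisely as a much longer zero-filled FFT.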
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., that filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.
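The eigenvalue formulation described in the abstract can be sketched directly: with Hermitian covariance matrices for the two scattering classes, the contrast ratio is a generalized Rayleigh quotient, maximized by the top generalized eigenvector. A minimal sketch assuming SciPy; the 2x2 matrices are illustrative, not data from the paper:

```python
import numpy as np
from scipy.linalg import eigh

def optimal_contrast_filter(C_a, C_b):
    """Filter w maximizing the contrast ratio (w^H C_a w) / (w^H C_b w).

    C_a, C_b: Hermitian positive-definite polarimetric covariance matrices
    of the two scattering classes. The maximizer is the generalized
    eigenvector of (C_a, C_b) belonging to the largest eigenvalue.
    """
    vals, vecs = eigh(C_a, C_b)   # generalized Hermitian eigenproblem, ascending
    return vecs[:, -1], vals[-1]  # filter and the achieved contrast ratio

# Hypothetical 2x2 covariances for two scattering classes (e.g. HH, VV channels).
C_a = np.array([[4.0, 1.0], [1.0, 2.0]])
C_b = np.array([[1.0, 0.2], [0.2, 1.5]])
w, contrast = optimal_contrast_filter(C_a, C_b)
```

Any other filter vector achieves a strictly smaller contrast ratio, which is the sense in which this matched filter is optimal.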
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. First, the characteristics of the wavelet transform and wavelet packet analysis were described. Second, the blasting vibration signals were analyzed by wavelet packet decomposition in MATLAB, and the energy distribution curves at different frequency bands were obtained. Finally, the way the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with increasing decking charge, the ratio of high-frequency energy to total energy decreases, the dominant frequency bands of blasting vibration signals tend towards low frequency, and blasting vibration does not depend on the maximum decking charge.
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques to power wireless sensor networks. The motivation for this technology is the sensors' limited operation time, which results from the finite capacity of batteries, and the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multisource energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
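The most common of the tracking algorithms this review surveys, perturb and observe, fits in a few lines: keep stepping the operating voltage in the same direction while measured power rises, and reverse when it falls. A toy sketch with a made-up unimodal power curve (all values illustrative, not from the article):

```python
def perturb_and_observe(power, v0, step=0.05, iters=200):
    """Minimal perturb-and-observe MPPT sketch.

    power(v): measured panel power at operating voltage v (assumed unimodal).
    The tracker ends up oscillating within +/- step around the maximum.
    """
    v, p, direction = v0, power(v0), 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power(v_new)
        if p_new < p:
            direction = -direction  # power dropped: we overshot the peak
        v, p = v_new, p_new
    return v, p

# Hypothetical unimodal P-V curve with its maximum power point at v = 17.0.
pv_curve = lambda v: max(0.0, 100.0 - 0.8 * (v - 17.0) ** 2)
v_mpp, p_mpp = perturb_and_observe(pv_curve, v0=12.0)
```

The steady-state oscillation around the peak is the classic drawback of this scheme, which is why variants (incremental conductance, adaptive step sizes) exist.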
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotics systems, such as obstacle avoidance, are better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function could best be applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of maximum entropy in automatic learning for robot control systems. This work presents a preliminary study on automatic learning of proscriptive robot control using maximum entropy and Bayesian Programming. It is verified whether maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from Differential Geometry, for example the minimal submanifolds problem and the harmonic maps problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains where multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach to multitime variational calculus. Section 5 (Section 6) proves that minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that, of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime have proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
Remarks on the strong maximum principle for nonlocal operators
Jerome Coville
2008-05-01
In this note, we study the existence of a strong maximum principle for the nonlocal operator $$ \mathcal{M}[u](x) := \int_{G} J(g)\,u(x*g^{-1})\,d\mu(g) - u(x), $$ where $G$ is a topological group acting continuously on a Hausdorff space $X$ and $u \in C(X)$. First we investigate the general situation and derive a pre-maximum principle. Then we restrict our analysis to the case of homogeneous spaces (i.e., $X = G/H$). For such Hausdorff spaces, depending on the topology, we give a condition on $J$ such that a strong maximum principle holds for $\mathcal{M}$. We also revisit the classical case of the convolution operator (i.e., $G = (\mathbb{R}^n,+)$, $X = \mathbb{R}^n$, $d\mu = dy$).
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear program (MILP) is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
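As a toy version of the storage-constrained throughput calculation, consider a two-hop store-and-forward path whose contact capacities vary per time slot. Dropping the integrality of the paper's MILP, the continuous relaxation is a small linear program. The scenario and all numbers are invented for illustration; this assumes SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-hop DTN scenario over T slots:
# source -> relay (capacity c1 per slot), relay -> destination (c2 per slot),
# with the relay buffer limited to B units of data.
T, B = 4, 8.0
c1 = np.array([10.0, 10.0, 0.0, 0.0])  # uplink contact only open early
c2 = np.array([0.0, 5.0, 5.0, 5.0])    # downlink contact only open late

# Variables x = [f1_1..f1_T, f2_1..f2_T]; maximize delivered data sum(f2).
cost = np.concatenate([np.zeros(T), -np.ones(T)])  # linprog minimizes
L = np.tril(np.ones((T, T)))                       # cumulative-sum operator
# Buffer occupancy after slot t: sum_{s<=t} (f1_s - f2_s), kept within [0, B].
A_ub = np.block([[L, -L], [-L, L]])
b_ub = np.concatenate([B * np.ones(T), np.zeros(T)])
bounds = [(0.0, c) for c in c1] + [(0.0, c) for c in c2]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
delivered = -res.fun  # maximum end-to-end throughput
```

In this instance the buffer limit, not the link capacities, is what caps the throughput: only 8 units can be stored before the downlink opens, so the optimum delivers 13 rather than the 15 units the downlink could carry.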
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters where the maximum load supported by the column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Estimating the maximum potential revenue for grid connected electricity storage :
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
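The arbitrage-only piece of this calculation can be sketched as a small linear program over charge/discharge schedules with a state-of-charge constraint. Prices and device parameters below are invented for illustration, and the paper's full model also co-optimizes regulation offers; this assumes SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly prices, and a device with per-slot power limit P,
# energy capacity E, and one-way charging efficiency eta.
prices = np.array([20.0, 10.0, 15.0, 50.0, 45.0, 12.0])  # $/MWh, illustrative
T, P, E, eta = prices.size, 1.0, 2.0, 0.9

# Variables x = [charge_1..charge_T, discharge_1..discharge_T] (MWh per slot).
# Revenue = sum_t prices_t * (discharge_t - charge_t); linprog minimizes.
cost = np.concatenate([prices, -prices])
L = np.tril(np.ones((T, T)))
# State of charge after slot t: sum_{s<=t} (eta*charge_s - discharge_s) in [0, E].
A_ub = np.block([[eta * L, -L], [-eta * L, L]])
b_ub = np.concatenate([E * np.ones(T), np.zeros(T)])
bounds = [(0.0, P)] * (2 * T)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
max_revenue = -res.fun  # upper bound on arbitrage income for this price path
```

Because prices are taken as known in advance, the optimum is the kind of upper bound the paper uses as a benchmark for realizable trading strategies.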
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
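The open/short-circuit estimate mentioned here follows from the linear Thevenin model of the module; a minimal sketch of that ideal relation (which is exactly what the measured switch-mode difference perturbs):

```python
def estimated_max_power(v_oc, i_sc):
    """Maximum power of a linear (constant internal resistance) generator.

    With a Thevenin model, R_int = v_oc / i_sc, and the load power
    v_oc**2 * R / (R_int + R)**2 peaks at R = R_int, giving v_oc * i_sc / 4.
    Values are illustrative; per the paper, real modules deviate by ~10%
    depending on the open-to-short vs. short-to-open switching mode.
    """
    return v_oc * i_sc / 4.0

print(estimated_max_power(4.0, 2.0))  # -> 2.0
```

The paper's point is precisely that the nonlinearity of real thermoelectric modules makes this quarter-product estimate mode-dependent.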
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
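The maximum entropy route to the canonical distribution referred to above can be stated in two lines (standard derivation, our notation):

```latex
\max_{\{p_i\}} \; S = -\sum_i p_i \ln p_i
\quad \text{subject to} \quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle .
```

Stationarity of the Lagrangian $-\sum_i p_i \ln p_i - \alpha\!\left(\sum_i p_i - 1\right) - \beta\!\left(\sum_i p_i E_i - \langle E\rangle\right)$ gives $-\ln p_i - 1 - \alpha - \beta E_i = 0$, hence

```latex
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},
```

the Boltzmann distribution. The paper's contribution is showing that the same target function emerges from partial maximization of the entropy of system plus bath over the bath degrees of freedom alone.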
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects. Therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.