Research Progress on Dark Matter Model Based on Weakly Interacting Massive Particles
He, Yu; Lin, Wen-bin
2017-04-01
The cosmological model of cold dark matter (CDM) with dark energy and a scale-invariant adiabatic primordial power spectrum is considered the standard cosmological model, i.e. the ΛCDM model. Weakly interacting massive particles (WIMPs) have become a prominent candidate for the CDM. Many extensions of the Standard Model provide WIMPs naturally. Standard calculations of the dark matter relic abundance show that WIMPs agree well with the astronomical observation of Ω_DM h² ≈ 0.11. WIMPs have a relatively large mass and a relatively slow velocity, so they aggregate easily into clusters, and numerical simulations based on WIMPs agree well with observations of cosmic large-scale structures. On the experimental side, present accelerator and non-accelerator direct/indirect detections are mostly designed for WIMPs. Thus, wide attention has been paid to the CDM model based on WIMPs. However, the ΛCDM model has a serious problem explaining small-scale structures below one Mpc. Different dark matter models have been proposed to alleviate the small-scale problem, but so far there is no evidence strong enough to exclude the CDM model. We introduce the research progress of the dark matter model based on WIMPs, such as the WIMP miracle, numerical simulation, the small-scale problem, and direct/indirect detection, analyze the criteria for discriminating "cold", "hot", and "warm" dark matter, and present future prospects for study in this field.
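The quoted relic abundance reflects the "WIMP miracle": the standard textbook freeze-out estimate Ω_DM h² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩ gives roughly the observed value for a weak-scale annihilation cross-section. A back-of-envelope check (the numbers are generic textbook values, not taken from the abstract):

```python
# Back-of-envelope "WIMP miracle" estimate (standard freeze-out approximation):
#   Omega_DM * h^2  ~  3e-27 cm^3 s^-1 / <sigma v>
sigma_v = 3e-26           # cm^3/s, typical weak-scale annihilation cross-section
omega_h2 = 3e-27 / sigma_v
# omega_h2 ~ 0.1, close to the observed Omega_DM h^2 ≈ 0.11 quoted above
```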
Pedestrian detection algorithm in traffic scene based on weakly supervised hierarchical deep model
Directory of Open Access Journals (Sweden)
Yingfeng Cai
2016-02-01
Full Text Available The emergence and development of deep learning theory in the machine learning field provides a new method for visual pedestrian recognition technology. To achieve better performance in this application, an improved weakly supervised hierarchical deep learning pedestrian recognition algorithm with two-dimensional deep belief networks is proposed. The improvements take into consideration the weaknesses in the structure and training methods of existing classifiers. First, the traditional one-dimensional deep belief network is expanded to a two-dimensional one, which allows image matrices to be loaded directly so as to preserve more information of the sample space. Then, a determination regularization term with a small weight is added to the traditional unsupervised training objective function. This modification transforms the original unsupervised training into weakly supervised training and gives the extracted features discrimination ability. Multiple sets of comparative experiments show that the proposed algorithm achieves a better recognition rate than other deep learning algorithms and outperforms most existing state-of-the-art methods on a non-occlusion pedestrian data set, while performing fairly on weakly and heavily occluded data sets.
Fermion condensates and weak symmetry breaking in a superstring-based model
International Nuclear Information System (INIS)
Mahapatra, S.; Misra, S.P.
1986-01-01
We start with the gauge group SU(3)_C × SU(2)_L × U(1)_R × U(1)_N (≡ G_3211), which is a rank-five subgroup of E_6. We include chiral-fermion-condensate terms in the effective four-dimensional Lagrangian derived from superstrings and discuss how this condensation can be responsible for weak symmetry breaking at a scale of 100 GeV. One experimental consequence is the nonobservation of the light Higgs scalars of the Salam-Weinberg model, although its other predictions remain unchanged.
SIMULATION OF SUBGRADE EMBANKMENT ON WEAK BASE
Directory of Open Access Journals (Sweden)
V. D. Petrenko
2015-08-01
Full Text Available Purpose. The question of the stability of the subgrade on a weak base is considered, and the use of the jet grouting method is proposed. The goals are to investigate whether a weak base affects the overall deformation of the subgrade, and to identify and optimize the parameters of the subgrade based on studies using numerical simulation. Methodology. Theoretical studies of the stress-strain state of the base and subgrade embankment were conducted by modeling in the LIRA software package. Findings. After performing the necessary calculations, settlement fields, the boundaries of the compressed thickness, and the Pasternak and Winkler bed coefficients are constructed. Diagrams of vertical stress can be constructed at any point of load application. The software package also allows comparative assessment of the settlements and tilts of railway tracks on natural and consolidated bases. Originality. For weak soils the most appropriate model is a nonlinear base model with both elastic and limit-equilibrium zones, i.e. the mixed problem of the theory of elasticity and plasticity. Practical value. As the load on a weak base increases as a result of the construction of a second track, an added embankment, or an increased axle load from new rolling stock, the process of sedimentation and consolidation may resume. Therefore, one of the feasible and promising options for the design and reconstruction of embankments on weak bases is to strengthen the bases with jet grouting. With the expansion of railway infrastructure and the increasing speed and weight of rolling stock, it is necessary to ensure the stability of the subgrade on weak bases. The LIRA software package allows all the necessary calculations for selecting a proper way of strengthening weak bases.
DEFF Research Database (Denmark)
Khan, Jamal; Rades, Thomas; Boyd, Ben J
2016-01-01
The tendency for poorly water-soluble weakly basic drugs to precipitate in a noncrystalline form during the in vitro digestion of lipid-based formulations (LBFs) was linked to an ionic interaction between drug and fatty acid molecules produced upon lipid digestion. Cinnarizine was chosen as a model weakly basic drug and was dissolved in a medium-chain (MC) LBF, which was subjected to in vitro lipolysis experiments at various pH levels above and below the reported pKa value of cinnarizine (7.47). The solid-state form of the precipitated drug was analyzed using X-ray diffraction (XRD), Fourier... ... from the starting free base crystalline material to the hydrochloride salt, thus supporting the case that ionic interactions between weak bases and fatty acid molecules during digestion are responsible for producing amorphous salts upon precipitation. The conclusion has wide implications...
Anstey, Chris M
2005-06-01
Currently, three strong ion models exist for the determination of plasma pH. Mathematically, they vary in their treatment of weak acids, and this study was designed to determine whether any significant differences exist in the simulated performance of these models. The models were subjected to a "metabolic" stress, either in the form of a variable strong ion difference with a fixed weak acid effect, or vice versa, and compared over the range ... 25 titration curves. The results were analyzed for linearity by using ordinary least squares regression and for collinearity by using correlation. In every case, the results revealed a linear relationship between log(PCO2) and pH over the range 6.8 ... acid-base physiology and by the ease of measurement of the independent model parameters.
Processing of weak electric signals by the autoregressive model
Ding, Jinli; Zhao, Jiayin; Wang, Lanzhou; Li, Qiao
2008-10-01
An autoregressive (AR) model of the weak electric signals in two plants was set up for the first time. The AR model forecasts 10 values of the weak electric signals well. The work constructs a standard set of AR model coefficients relating the plant electric signal to environmental factors, which can be used as presets for an intelligent auto-control system based on the adaptive characteristics of plants, to achieve energy savings in agricultural production.
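The AR fit-and-forecast idea described above can be sketched minimally as follows; the order p, the synthetic "signal", and the helper names are illustrative assumptions, not the authors' actual data or method:

```python
import numpy as np

def fit_ar(x, p):
    """Fit AR(p) coefficients by ordinary least squares:
    x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack([x[p - 1 - k:len(x) - 1 - k] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def forecast(x, a, steps):
    """Iteratively predict `steps` future values from the fitted AR model."""
    hist, p, out = list(x), len(a), []
    for _ in range(steps):
        nxt = sum(a[k] * hist[-1 - k] for k in range(p))
        hist.append(nxt)
        out.append(nxt)
    return out

# Synthetic stand-in for a weak plant electric signal: slow oscillation + noise.
rng = np.random.default_rng(0)
t = np.arange(200)
x = np.sin(0.2 * t) + 0.01 * rng.standard_normal(200)
a = fit_ar(x, p=4)
preds = forecast(x, a, steps=10)   # 10-value forecast, as in the abstract
```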
Cang, Ji; Liu, Xu
2011-09-26
Based on the generalized spectral model for non-Kolmogorov atmospheric turbulence, analytic expressions of the scintillation index (SI) are derived for plane and spherical optical waves and a partially coherent Gaussian beam propagating through non-Kolmogorov turbulence horizontally in the weak fluctuation regime. The new expressions relate the SI to the finite turbulence inner and outer scales, the spatial coherence of the source, and the spectral power-law, and are then used to analyze the effects of atmospheric conditions and link length on the performance of wireless optical communication links. © 2011 Optical Society of America
Pre-relaxation in weakly interacting models
Bertini, Bruno; Fagotti, Maurizio
2015-07-01
We consider time evolution in models close to integrable points with hidden symmetries that generate infinitely many local conservation laws that do not commute with one another. The system is expected to (locally) relax to a thermal ensemble if integrability is broken, or to a so-called generalised Gibbs ensemble if unbroken. In some circumstances expectation values exhibit quasi-stationary behaviour long before their typical relaxation time. For integrability-breaking perturbations, these are also called pre-thermalisation plateaux, and emerge e.g. in the strong coupling limit of the Bose-Hubbard model. As a result of the hidden symmetries, quasi-stationarity appears also in integrable models, for example in the Ising limit of the XXZ model. We investigate a weak coupling limit, identify a time window in which the effects of the perturbations become significant and solve the time evolution through a mean-field mapping. As an explicit example we study the XYZ spin-1/2 chain with additional perturbations that break integrability. One of the most intriguing results of the analysis is the appearance of persistent oscillatory behaviour. To unravel its origin, we study in detail a toy model: the transverse-field Ising chain with an additional nonlocal interaction proportional to the square of the transverse spin per unit length (2013 Phys. Rev. Lett. 111 197203). Despite being nonlocal, this belongs to a class of models that emerge as intermediate steps of the mean-field mapping and shares many dynamical properties with the weakly interacting models under consideration.
Nonstationary weak signal detection based on normalization ...
Indian Academy of Sciences (India)
Haibin Zhang
Keywords: time-varying signal; weak signal detection; varying parameters; stochastic resonance. ... The discrete fourth-order Runge-Kutta method [27] is applied to obtain the numerical solution of the typical first-order differential equation of Eq. (2), x' = dx/dt ...
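The fourth-order Runge-Kutta scheme mentioned in the snippet can be sketched on the classic bistable stochastic-resonance drift dx/dt = ax - bx³ + s(t); the parameter values and the forcing term here are illustrative assumptions, not those of the paper:

```python
import numpy as np

def rk4_step(f, x, t, h):
    """One fourth-order Runge-Kutta step for dx/dt = f(x, t)."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Bistable stochastic-resonance drift with a weak periodic forcing.
a, b = 1.0, 1.0
def f(x, t):
    return a * x - b * x ** 3 + 0.1 * np.cos(0.05 * t)

x, h = 0.1, 0.01
for i in range(10000):
    x = rk4_step(f, x, i * h, h)
# x settles near a potential well at x ~ +sqrt(a/b) = 1
```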
Modelling, Measuring and Compensating Color Weak Vision.
Oshima, Satoshi; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui
2016-03-08
We use methods from Riemann geometry to investigate transformations between the color spaces of color-normal and color-weak observers. The two main applications are the simulation of the perception of a color-weak observer for a color-normal observer, and the compensation of color images in such a way that a color-weak observer has approximately the same perception as a color-normal observer. The metrics in the color spaces of interest are characterized with the help of ellipsoids defined by the just-noticeable differences between colors, measured in color-matching experiments. The constructed mappings are isometries of Riemann spaces that preserve the perceived color differences for both observers. Among the two approaches to build such an isometry, we introduce normal coordinates in Riemann spaces as a tool to construct a global color-weak compensation map. Compared to previously used methods, this method is free from approximation errors due to local linearizations and avoids the problem of shifting locations of the origin of the local coordinate system. We analyse the variations of the Riemann metrics for different observers obtained from new color-matching experiments and describe three variations of the basic method. The performance of the methods is evaluated with the help of semantic differential (SD) tests.
Human action recognition based on estimated weak poses
Gong, Wenjuan; Gonzàlez, Jordi; Roca, Francesc Xavier
2012-12-01
We present a novel method for human action recognition (HAR) based on estimated poses from image sequences. We use 3D human pose data as additional information and propose a compact human pose representation, called a weak pose, in a low-dimensional space, while still keeping the most discriminative information for a given pose. With predicted poses from image features, we map the problem from image feature space to pose space, where a Bag of Poses (BOP) model is learned for the final goal of HAR. The BOP model is a modified version of the classical bag-of-words pipeline, building the vocabulary from the most representative weak poses for a given action. Compared with standard k-means clustering, our vocabulary selection criterion is shown to be more efficient and robust against the inherent challenges of action recognition. Moreover, since for action recognition the ordering of the poses is discriminative, the BOP model incorporates temporal information: in essence, groups of consecutive poses are considered together when computing the vocabulary and assignment. We tested our method on two well-known datasets, HumanEva and IXMAS, to demonstrate that weak poses help improve action recognition accuracies. The proposed method is scene-independent and is comparable with the state-of-the-art methods.
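The classical bag-of-words assignment step that the abstract contrasts its BOP vocabulary with can be sketched as follows; the array shapes, names, and toy data are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def pose_histograms(pose_seqs, vocab):
    """Classical bag-of-words baseline: assign every pose vector in a sequence
    to its nearest vocabulary entry and return a normalized histogram."""
    hists = []
    for seq in pose_seqs:                        # seq: (n_frames, pose_dim)
        d = np.linalg.norm(seq[:, None, :] - vocab[None, :, :], axis=2)
        idx = d.argmin(axis=1)                   # nearest "weak pose" per frame
        h = np.bincount(idx, minlength=len(vocab)).astype(float)
        hists.append(h / h.sum())
    return np.array(hists)                       # (n_sequences, vocab_size)

# Toy example: 2-entry vocabulary, one 4-frame sequence.
vocab = np.array([[0.0, 0.0], [1.0, 1.0]])
seq = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.2]])
h = pose_histograms([seq], vocab)[0]             # half the frames per entry
```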
Nyasulu, Frazier; McMills, Lauren; Barlag, Rebecca
2013-01-01
A laboratory to determine the equilibrium constants of weak acid-weak base reactions is described. The equilibrium constants of the component reactions, when multiplied together, equal the numerical value of the equilibrium constant of the summative reaction. The component reactions are weak acid ionization reactions and weak base hydrolysis…
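The multiplicative rule stated above is the Hess's-law-style combination of equilibria: when component reactions add to give a summative reaction, their equilibrium constants multiply. A minimal numerical illustration, using generic textbook values (acetic acid + ammonia), not data from the article:

```python
# Component equilibria (generic textbook values at 25 C):
Ka = 1.8e-5    # HA <-> H+ + A-          (weak acid ionization)
Kb = 1.8e-5    # B + H2O <-> HB+ + OH-   (weak base hydrolysis)
Kw = 1.0e-14   # H2O <-> H+ + OH-        (autoionization of water)

# Summative reaction HA + B <-> A- + HB+ adds the first two reactions and the
# REVERSE of the autoionization, so its constant is the product Ka * Kb * (1/Kw).
K = Ka * Kb / Kw   # ~3.2e4: the reaction lies far to the right
```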
Directory of Open Access Journals (Sweden)
Hamid Farrokh Ghatte
2016-09-01
Full Text Available Although many theoretical and experimental studies are available on external confinement of columns using fiber-reinforced polymer (FRP jackets, as well as numerous models proposed for the axial stress-axial strain relation of concrete confined with FRP jackets, they have not been validated with a sufficient amount and variety of experimental data obtained through full-scale tests of reinforced concrete (RC columns with different geometrical and mechanical characteristics. Particularly, no systematical experimental data have been presented on full-scale rectangular substandard RC columns subjected to reversed cyclic lateral loads along either their strong or weak axes. In this study, firstly, test results of five full-scale rectangular substandard RC columns with a cross-sectional aspect ratio of two (300 mm × 600 mm are briefly summarized. The columns were tested under constant axial load and reversed cyclic lateral loads along their strong or weak axes before and after retrofitting with external FRP jackets. In the second stage, inelastic lateral force-displacement relationships of the columns are obtained analytically, making use of the plastic hinge assumption and different FRP confinement models available in the literature. Finally, the analytical findings are compared with the test results for both strong and weak directions of the columns. Comparisons showed that use of different models for the stress-strain relationship of FRP-confined concrete can yield significantly non-conservative or too conservative retrofit designs, particularly in terms of deformation capacity.
Nonstationary weak signal detection based on normalization ...
Indian Academy of Sciences (India)
Haibin Zhang
... Runge-Kutta numerical method as well as the normalized transformation of a bistable stochastic resonance system. The model performs well in the ... For the SNR in the fractional domain in the literature [25], it can only be used for the LFM signal ... the numerical solution for the typical first-order differential equation of Eq. (2). The discrete ...
Classical and Weak Solutions for Two Models in Mathematical Finance
Gyulov, Tihomir B.; Valkov, Radoslav L.
2011-12-01
We study two mathematical models arising in financial mathematics. These models are one-dimensional analogues of the famous Black-Scholes equation on a finite interval. The main difficulty is the degeneration at both ends of the space interval. First, classical solutions are studied, and positivity and convexity properties of the solutions are discussed. A variational formulation in weighted Sobolev spaces is then introduced, and existence and uniqueness of the weak solution are proved. A maximum principle for the weak solution is also discussed.
Weak Memory Models: Balancing Definitional Simplicity and Implementation Flexibility
Zhang, Sizhuo; Vijayaraghavan, Muralidaran; Arvind
2017-01-01
The memory model for RISC-V, a newly developed open-source ISA, has not been finalized yet and thus offers an opportunity to evaluate existing memory models. We believe RISC-V should not adopt the memory models of POWER or ARM, because their axiomatic and operational definitions are too complicated. We propose two new weak memory models, WMM and WMM-S, which balance definitional simplicity and implementation flexibility differently. Both allow all instruction reorderings except overtaking of...
Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations
Sung, Christopher Teh Boon
2011-01-01
Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…
Weak interaction physics: from its birth to the electroweak model
International Nuclear Information System (INIS)
Lopes, J.L.
1987-01-01
A review of the evolution of weak interaction physics from its beginning (Fermi-Majorana-Perrin) to the electroweak model (Glashow-Weinberg-Salam) is presented. Contributions from Brazilian physicists are specially mentioned, as well as the first prediction of electroweak unification, of the neutral intermediate vector boson Z0, and the first approximate value of the mass of the W bosons. (Author) [pt]
School-Based Sexuality Education in Portugal: Strengths and Weaknesses
Rocha, Ana Cristina; Leal, Cláudia; Duarte, Cidália
2016-01-01
Portugal, like many other countries, faces obstacles regarding school-based sexuality education. This paper explores Portuguese schools' approaches to implementing sexuality education at a local level, and provides a critical analysis of potential strengths and weaknesses. Documents related to sexuality education in a convenience sample of 89…
A Weak Value Based QKD Protocol Robust Against Detector Attacks
Troupe, James
2015-03-01
We propose a variation of the BB84 quantum key distribution protocol that utilizes the properties of weak values to ensure the validity of the quantum bit error rate estimates used to detect an eavesdropper. The protocol is shown theoretically to be secure against recently demonstrated attacks utilizing detector blinding and control, and should also be robust against all detector-based hacking. Importantly, the new protocol promises to achieve this additional security without negatively impacting the secure key generation rate as compared to that originally promised by the standard BB84 scheme. Implementation of the weak measurements needed by the protocol should be very feasible using standard quantum optical techniques.
Overview of DFIG-based Wind Power System Resonances under Weak Networks
DEFF Research Database (Denmark)
Song, Yipeng; Blaabjerg, Frede
2017-01-01
Wind power generation techniques continue to develop, and increasing numbers of Doubly Fed Induction Generator (DFIG)-based wind power systems are connecting to on-shore and off-shore grids, local standalone weak networks, and micro grid applications. The impedances of the weak networks are too large to be neglected and require careful attention. Due to the impedance interaction between the weak network and the DFIG system, both Sub-Synchronous Resonance (SSR) and High Frequency Resonance (HFR) may occur when the DFIG system is connected to series or parallel compensated weak networks, respectively. This paper discusses the SSR and HFR phenomena based on impedance modeling of the DFIG system and the weak networks, and the cause of these two resonances is explained in detail. Factors including 1) transformer configuration; 2) different power...
Statistical data processing of mobility curves of univalent weak bases
Czech Academy of Sciences Publication Activity Database
Šlampová, Andrea; Boček, Petr
2008-01-01
Roč. 29, č. 2 (2008), s. 538-541 ISSN 0173-0835 R&D Projects: GA AV ČR IAA400310609; GA ČR GA203/05/2106 Institutional research plan: CEZ:AV0Z40310501 Keywords : mobility curve * univalent weak bases * statistical evaluation Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 3.509, year: 2008
"Weak quantum chaos" and its resistor network modeling.
Stotland, Alexander; Pecora, Louis M; Cohen, Doron
2011-06-01
Weakly chaotic or weakly interacting systems have a wide regime where the common random matrix theory modeling does not apply. As an example we consider cold atoms in a nearly integrable optical billiard with a displaceable wall (piston). The motion is completely chaotic but with a small Lyapunov exponent. The Hamiltonian matrix does not look like one taken from a Gaussian ensemble, but rather it is very sparse and textured. This can be characterized by parameters s and g which reflect the percentage of large elements and their connectivity, respectively. For g we use a resistor network calculation that has a direct relation to the semilinear response characteristics of the system, hence leading to a prediction regarding the energy absorption rate of cold atoms in optical billiards with vibrating walls.
Constraining unified dark matter models with weak lensing
Energy Technology Data Exchange (ETDEWEB)
Camera, S. [Dipartimento di Fisica Generale Amedeo Avogadro, Universita degli Studi di Torino, Torino (Italy); Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Torino, Torino (Italy)
2010-04-15
Unified Dark Matter (UDM) models provide an intriguing alternative to Dark Matter (DM) and Dark Energy (DE) through only one exotic component, i.e. a classical scalar field φ(t,x). Thanks to a non-canonical kinetic term, this scalar field can mimic both the behaviour of the matter-dominated era at earlier times, as DM does, and the late-time acceleration, as a cosmological-constant DE does. Thus, it has been shown that these models can reproduce the same expansion history as the ΛCDM concordance model. In this work I review the first prediction of a physical observable, the power spectrum of the weak lensing cosmic convergence (shear). I present the weak lensing signal as predicted by the standard ΛCDM model and by a family of viable UDM models parameterized by the late-time sound speed c_∞ of the scalar field, considering the last-scattering surface and a series of background galaxies peaked at and spread over different redshifts, as described by a functional form of their source distribution. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
Krieg, Brian J; Taghavi, Seyed Mohammad; Amidon, Gordon L; Amidon, Gregory E
2015-09-01
Bicarbonate is the main buffer in the small intestine, and it is well known that buffer properties such as pKa can affect the dissolution rate of ionizable drugs. However, bicarbonate buffer is complicated to work with experimentally. Finding a suitable substitute for bicarbonate buffer may provide a way to perform more physiologically relevant dissolution tests. The dissolution of weak acid and weak base drugs was conducted in bicarbonate and phosphate buffer using rotating disk dissolution methodology. Experimental results were compared with the predictions of the film model approach of Mooney et al. (1981, J Pharm Sci 70(1):22-32), based on equilibrium assumptions, as well as a model accounting for the slow hydration reaction CO2 + H2O → H2CO3. Assuming carbonic acid is irreversible in the dehydration direction, CO2 + H2O ← H2CO3, the transport analysis can accurately predict rotating disk dissolution of weak acid and weak base drugs in bicarbonate buffer. The predictions show that matching the dissolution of weak acid and weak base drugs in phosphate and bicarbonate buffer is possible. The phosphate buffer concentration necessary to match physiologically relevant bicarbonate buffer [e.g., 10.5 mM HCO3(-), pH = 6.5] is typically in the range of 1-25 mM and is very dependent upon drug solubility and pKa. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
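The pH and pKa dependence described above follows the Henderson-Hasselbalch relation. A minimal sketch of the ionized fraction and the resulting solubility enhancement for a monoprotic weak acid; the function names and numbers are illustrative, not the authors' transport model:

```python
def fraction_ionized_acid(pH, pKa):
    """Fraction of a monoprotic weak acid present as A- at a given pH
    (Henderson-Hasselbalch): 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

def solubility_total(S0, pH, pKa):
    """Total solubility of a weak acid: intrinsic solubility S0 of the neutral
    form plus the ionized species, S_total = S0 * (1 + 10**(pH - pKa))."""
    return S0 * (1.0 + 10 ** (pH - pKa))

# At an intestinal pH of 6.5, an acid with pKa 4.5 is ~99% ionized,
# and its total solubility is ~101x its intrinsic solubility.
frac = fraction_ionized_acid(6.5, 4.5)
boost = solubility_total(1.0, 6.5, 4.5)
```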
A weak blind signature scheme based on quantum cryptography
Wen, Xiaojun; Niu, Xiamu; Ji, Liping; Tian, Yuan
2009-02-01
In this paper, we present a weak blind signature scheme based on the correlation of EPR (Einstein-Podolsky-Rosen) pairs. Different from classical blind signature schemes and current quantum signature schemes, our quantum blind signature scheme guarantees not only unconditional security but also the anonymity of the message owner. To achieve this, quantum key distribution and the one-time pad are adopted in our scheme. Experimental analysis proves that our scheme has the characteristics of non-counterfeiting, non-disavowal, blindness and traceability. It has wide application to E-payment systems, E-government, E-business, etc.
Multiple Discrete Endogenous Variables in Weakly-Separable Triangular Models
Directory of Open Access Journals (Sweden)
Sung Jae Jun
2016-02-01
Full Text Available We consider a model in which an outcome depends on two discrete treatment variables, where one treatment is given before the other. We formulate a three-equation triangular system with weak separability conditions. Without assuming assignment is random, we establish the identification of an average structural function using two-step matching. We also consider decomposing the effect of the first treatment into direct and indirect effects, which are shown to be identified by the proposed methodology. We allow for both of the treatment variables to be non-binary and do not appeal to an identification-at-infinity argument.
Santagati, C.; Lo Turco, M.; Bocconcino, M. M.; Donato, V.; Galizia, M.
2017-11-01
Nowadays, 3D digital imaging proposes effective solutions for preserving the expression of human creativity across the centuries, as well as is a great tool to guarantee global dissemination of knowledge and wide access to these invaluable resources of the past. Nevertheless, in several cases, a massive digitalisation of cultural heritage items (from the archaeological site up to the monument and museum collections) could be unworkable due to the still high costs in terms of equipment and human resources: 3D acquisition technologies and the need of skilled team within cultural institutions. Therefore, it is necessary to explore new possibilities offered by growing technologies: the lower costs of these technologies as well as their attractive visual quality constitute a challenge for researchers. Besides these possibilities, it is also important to consider how information is spread through graphic representation of knowledge. The focus of this study is to explore the potentialities and weaknesses of a newly released low cost device in the cultural heritage domain, trying to understand its effective usability in museum collections. The aim of the research is to test their usability, critically analysing the final outcomes of this entry level technology in relation to the other better assessed low cost technologies for 3D scanning, such as Structure from Motion (SfM) techniques (also produced by the same device) combined with dataset generated by a professional digital camera. The final outcomes were compared in terms of quality definition, time processing and file size. The specimens of the collections of the Civic Museum Castello Ursino in Catania have been chosen as the site of experimentation.
Weak diffusion limits of dynamic conditional correlation models
DEFF Research Database (Denmark)
Hafner, Christian M.; Laurent, Sebastien; Violante, Francesco
The properties of dynamic conditional correlation (DCC) models are still not entirely understood. This paper fills one of the gaps by deriving weak diffusion limits of a modified version of the classical DCC model. The limiting system of stochastic differential equations is characterized by a diffusion matrix of reduced rank. The degeneracy is due to perfect collinearity between the innovations of the volatility and correlation dynamics. For the special case of constant conditional correlations, a non-degenerate diffusion limit can be obtained. Alternative sets of conditions are considered for the rate of convergence of the parameters, obtaining time-varying but deterministic variances and/or correlations. A Monte Carlo experiment confirms that the quasi approximate maximum likelihood (QAML) method to estimate the diffusion parameters is inconsistent for any fixed frequency, but that it may...
A mathematical model for the Fermi weak interaction
Amour, L; Guillot, J C
2006-01-01
We consider a mathematical model of the Fermi theory of weak interactions, patterned according to the well-known current-current coupling of quantum electrodynamics. We focus on the example of the decay of muons into electrons, positrons and neutrinos, but other examples are treated in the same way. We prove that the Hamiltonian describing this model has a ground state in the fermionic Fock space for a sufficiently small coupling constant. Furthermore, we determine the absolutely continuous spectrum of the Hamiltonian, and by commutator estimates we prove that the spectrum is absolutely continuous away from a small neighborhood of the thresholds of the free Hamiltonian. For all these results we do not use any infrared cutoff or infrared regularization, even though fermions with zero mass are involved.
Glimpse: Sparsity based weak lensing mass-mapping tool
Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.
2018-02-01
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
The Role of a Weak Layer at the Base of an Oceanic Plate on Subduction Dynamics
Carluccio, R.; Moresi, L. N.; Kaus, B. J. P.
2017-12-01
Plate tectonics relies on the concept of an effectively rigid lithospheric lid moving over a weaker asthenosphere. In this model, the lithosphere-asthenosphere boundary (LAB) is a first-order discontinuity that accommodates differential motion between tectonic plates and the underlying mantle. Recent seismic studies have revealed the existence of a low-velocity, high-electrical-conductivity layer at the base of subducting tectonic plates. This thin layer has been interpreted as being weak and slightly buoyant, and it has the potential to influence the dynamics of subducting plates. However, geodynamically, the role of a weak layer at the base of the lithosphere remains poorly studied, especially at subduction zones. Here, we use numerical models to investigate the first-order effects of a weak buoyant layer at the LAB on subduction dynamics. We employ both 2-D and 3-D models in which the slab and the mantle are either linear viscous or have a more realistic temperature-dependent, visco-elastic-plastic rheology, and we vary the properties of the layer at the base of the oceanic lithosphere. Our results show that the presence of a weak layer affects the dynamics of plates, primarily by increasing the subduction speed, and also influences the morphology of the subducting slab. For large viscosity contrasts (>1000) it can also change the morphology of the subduction itself, while for thinner and more buoyant layers the overall effect is reduced. The overall impact of these effects may depend on the effective contrast between the properties of the slab and of the weak layer + mantle system, and thus on the modelled layer characteristics such as its viscosity, density, thickness and rheology. In this study, we show and summarise this impact consistently with recent seismological constraints and observations, for example a pile-up of weak material in the bending zone of the subducting plate.
Electron kinetics modeling in a weakly ionized gas
International Nuclear Information System (INIS)
Boeuf, Jean-Pierre
1985-01-01
This work presents some features of electron kinetics in a weakly ionized gas. After a summary of the basis and recent developments of the kinetic theory, and a review of the most efficient numerical techniques for solving the Boltzmann equation, several aspects of electron motion in gases are analysed. Relaxation phenomena toward equilibrium under a uniform electric field, and the question of the existence of the hydrodynamic regime, are first studied. The coupling between electron kinetics and chemical kinetics due to collisions of the second kind in nitrogen is then analysed; a quantitative description of the evolution of the energy balance, accounting for electron-molecule as well as molecule-molecule energy transfer, is also given. Finally, electron kinetics in space-charge-distorted, highly non-uniform electric fields (glow discharges, streamer propagation) is investigated with microscopic numerical methods based on the Boltzmann and Poisson equations. (author) [fr
Landau fluid model for weakly nonlinear dispersive magnetohydrodynamics
International Nuclear Information System (INIS)
Passot, T.; Sulem, P. L.
2005-01-01
In many astrophysical plasmas, such as the solar wind, the terrestrial magnetosphere, or the interstellar medium at small enough scales, collisions are negligible. When one is interested in the large-scale dynamics, a hydrodynamic approach is advantageous not only because its numerical simulation is easier than that of the full Vlasov-Maxwell equations, but also because it provides a deep understanding of cross-scale nonlinear couplings. It is thus of great interest to construct fluid models that extend the classical magnetohydrodynamic (MHD) equations to collisionless situations. Two ingredients need to be included in such a model to capture the main kinetic effects: finite Larmor radius (FLR) corrections and Landau damping, the only fluid-particle resonance that can affect large scales and can be modeled in a relatively simple way. Modeling Landau damping in a fluid formalism is hardly possible in the framework of a systematic asymptotic expansion and was addressed mainly by means of parameter fitting in a linearized setting. We introduce a similar Landau fluid model that has the advantage of taking dispersive effects into account. This model properly describes dispersive MHD waves in quasi-parallel propagation. Since, by construction, the system correctly reproduces their linear dynamics, appropriate tests should address the nonlinear regime. In a first case, we show analytically that the weakly nonlinear modulational dynamics of quasi-parallel propagating Alfven waves is well captured. As a second test we consider the parametric decay instability of parallel Alfven waves and show that numerical simulations of the dispersive Landau fluid model lead to results that closely match the outcome of hybrid simulations. (Author)
Failure Behavior and Constitutive Model of Weakly Consolidated Soft Rock
Directory of Open Access Journals (Sweden)
Wei-ming Wang
2013-01-01
Full Text Available Mining areas in western China are mainly located in soft rock strata with poor bearing capacity. In order to clarify the deformation failure mechanism and strength behavior of the weakly consolidated soft mudstone and coal rock hosted in the Ili No. 4 mine of the Xinjiang area, uniaxial and triaxial compression tests were carried out on rock samples gathered in the studied area. Meanwhile, a damage constitutive model which considers the initial damage was established by introducing a damage variable and a correction coefficient. A linearization process method was introduced according to the characteristics of the fitting curve and experimental data. The results showed that samples under different moisture contents and confining pressures presented completely different failure mechanisms. The given model could accurately describe the elastic and plastic yield characteristics as well as the strain-softening behavior of the collected samples at the post-peak stage. Moreover, the model could precisely reflect the relationship between the elastic modulus and confining pressure at the pre-peak stage.
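A minimal numerical sketch of a damage constitutive relation of the kind described in this abstract (stress = E·(1−D)·strain with an initial-damage term) is given below. The Weibull form of D and all parameter values are illustrative assumptions, not the paper's fitted model:

```python
import math

def damage_stress(strain, e_mod=2.0e9, eps0=0.004, m=3.0, d0=0.1):
    """Stress-strain curve of a damaged rock: sigma = E * (1 - D) * eps.

    D combines an initial damage d0 with a Weibull-type load-induced
    part, so the curve hardens, peaks, then strain-softens at the
    post-peak stage. Parameters are illustrative, not fitted to the
    Ili mudstone samples.
    """
    d = 1.0 - (1.0 - d0) * math.exp(-((strain / eps0) ** m))
    return e_mod * (1.0 - d) * strain
```

Sweeping the strain from zero upward traces the pre-peak hardening and post-peak strain-softening branches the abstract describes.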
A weakly compressible formulation for modelling liquid-gas sloshing
CSIR Research Space (South Africa)
Heyns, Johan A
2012-09-01
Full Text Available The implementation of a weakly compressible formulation which accounts for variations in the gas density is presented. With the aim of ensuring a computationally efficient implementation of the proposed formulation, an implicit iterative GMRES solver with LU...
μe universality problem in the unified models of weak and electromagnetic interactions
International Nuclear Information System (INIS)
Rekalo, M.P.; Koval'chuk, V.A.; Rekalo, A.P.
1979-01-01
The unified SU(2)xU(1) model of the weak and electromagnetic interactions of leptons and quarks is suggested. In this model μe universality is violated for the neutral-current weak interactions: the muonic neutral weak current is a pure vector, while the electronic neutral weak current is a sum of vector and axial parts. μe universality of the charged-current weak interaction and of the electromagnetic interaction is conserved in the suggested model. The model is generalized to hadronic processes.
Direct reactions of weakly-bound nuclei within a one dimensional model
Moschini, L.; Vitturi, A.; Moro, AM
2018-03-01
A line of research has been developed to describe the structure and dynamics of weakly-bound systems with one or more valence particles. To simplify the problem we assume particles moving in one dimension and, despite this drastic assumption, the model encompasses many characteristics observed in experiments. Within this model we can describe, for example, one- and two-particle breakup and one- and two-particle transfer processes. We concentrate here on models involving weakly-bound nuclei with just one valence particle. Exact solutions obtained by directly solving the time-dependent Schroedinger equation can be compared with the results obtained with different approximation schemes (coupled-channels formalism, continuum discretization, etc.). Our goal is to investigate the limitations of the models based on approximations, and in particular to understand the role of the continuum in the reaction mechanism.
Impedance-Based High Frequency Resonance Analysis of DFIG System in Weak Grids
Song, Yipeng; Wang, Xiongfei; Blaabjerg, Frede
2017-01-01
The impedance-based model of Doubly Fed Induction Generator (DFIG) systems, including the rotor part (Rotor Side Converter (RSC) and induction machine), and the grid part (Grid Side Converter (GSC) and its output filter), has been developed for analysis and mitigation of the Sub-Synchronous Resonance (SSR). However, the High Frequency Resonance (HFR) of DFIG systems due to the impedance interaction between DFIG system and parallel compensated weak network is often overlooked. This paper thus...
Nap, R J; Tagliazucchi, M; Szleifer, I
2014-01-14
This work addresses the effect of the Born self-energy contribution in the modeling of the structural and thermodynamical properties of weak polyelectrolytes confined to planar and curved surfaces. The theoretical framework is based on a theory that explicitly includes the conformations, size, shape, and charge distribution of all molecular species and considers the acid-base equilibrium of the weak polyelectrolyte. Namely, the degree of charge in the polymers is not imposed but it is a local varying property that results from the minimization of the total free energy. Inclusion of the dielectric properties of the polyelectrolyte is important as the environment of a polymer layer is very different from that in the adjacent aqueous solution. The main effect of the Born energy contribution on the molecular organization of an end-grafted weak polyacid layer is uncharging the weak acid (or basic) groups and consequently decreasing the concentration of mobile ions within the layer. The magnitude of the effect increases with polymer density and, in the case of the average degree of charge, it is qualitatively equivalent to a small shift in the equilibrium constant for the acid-base equilibrium of the weak polyelectrolyte monomers. The degree of charge is established by the competition between electrostatic interactions, the polymer conformational entropy, the excluded volume interactions, the translational entropy of the counterions and the acid-base chemical equilibrium. Consideration of the Born energy introduces an additional energetic penalty to the presence of charged groups in the polyelectrolyte layer, whose effect is mitigated by down-regulating the amount of charge, i.e., by shifting the local-acid base equilibrium towards its uncharged state. Shifting of the local acid-base equilibrium and its effect on the properties of the polyelectrolyte layer, without considering the Born energy, have been theoretically predicted previously. Account of the Born energy leads
Policy-based benchmarking of weak heaps and their relatives
DEFF Research Database (Denmark)
Bruun, Asger; Edelkamp, Stefan; Katajainen, Jyrki
2010-01-01
), and a run-relaxed weak queue that of both insert and decrease to O(1). As competitors to these structures, we considered a binary heap, a Fibonacci heap, and a pairing heap. Generic programming techniques were heavily used in the code development. For benchmarking purposes we developed several component...
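In the same spirit as the paper's benchmarks (though far simpler, and in Python rather than the C++ component frameworks the paper uses), a minimal harness timing push/pop on an array-backed binary heap looks like this:

```python
import heapq
import random
import time

def bench_binary_heap(n=10000, seed=0):
    """Time n pushes followed by n pops on Python's binary heap (heapq).

    Illustrative stand-in for the paper's policy-based C++ benchmarks;
    heapq has no decrease-key, so only insert/extract-min are timed.
    """
    rng = random.Random(seed)
    data = [rng.random() for _ in range(n)]
    t0 = time.perf_counter()
    heap = []
    for x in data:
        heapq.heappush(heap, x)
    out = [heapq.heappop(heap) for _ in range(n)]
    return time.perf_counter() - t0, out
```

Comparing such timings across heap variants and workloads (insert-heavy vs. delete-heavy) is the essence of the benchmarking methodology described above, with the data structure swapped in as a policy.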
Oubei, Hassan M.
2017-06-16
In this Letter, we use laser beam intensity fluctuation measurements to model and describe the statistical properties of weak temperature-induced turbulence in underwater wireless optical communication (UWOC) channels. UWOC channels with temperature gradients are modeled by the generalized gamma distribution (GGD) with an excellent goodness of fit to the measured data under all channel conditions. Meanwhile, thermally uniform channels are perfectly described by the simple gamma distribution which is a special case of GGD. To the best of our knowledge, this is the first model that comprehensively describes both thermally uniform and gradient-based UWOC channels.
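For the thermally uniform case described in this Letter, the simple gamma special case of the GGD can be fitted by moment matching in a few lines; the function below is an illustrative sketch (the Letter itself fits the full GGD to measured intensity data, which requires a numerical maximum-likelihood fit):

```python
import statistics

def gamma_moment_fit(samples):
    """Method-of-moments fit of the simple gamma distribution.

    Returns (shape k, scale theta) such that mean = k*theta and
    variance = k*theta**2; a quick proxy for the thermally uniform
    UWOC channel model (the gradient case needs the full GGD).
    """
    m = statistics.fmean(samples)
    v = statistics.variance(samples)
    return m * m / v, v / m
```

Applied to normalized received-intensity samples, the fitted shape parameter also gives the scintillation index as 1/k.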
Weak coupling polaron and Landau-Zener scenario: Qubits modeling
Jipdi, M. N.; Tchoffo, M.; Fokou, I. F.; Fai, L. C.; Ateuafack, M. E.
2017-06-01
The paper presents a weak coupling polaron in a spherical dot with magnetic impurities and investigates conditions under which the system mimics a qubit. In particular, the work focuses on the Landau-Zener (LZ) scenario undergone by the polaron and derives transition coefficients (transition probabilities) as well as selection rules for the polaron's transitions. It is proven that the magnetic impurities drive the polaron into a two-state superposition, leading to a qubit structure. We also show that the symmetry deficiency induced by the magnetic impurities (strong magnetic field) leads to the vanishing of the transition coefficients between non-stacking states. However, the transition coefficients revive for large confinement frequency (or weak magnetic field), with the orbital quantum numbers governing the transitions. The polaron is then shown to map onto a qubit independently of the number of relevant states, with the transition coefficients identified as LZ probabilities and given as a function of the electron-phonon coupling constant (Fröhlich constant).
Topic Detection Based on Weak Tie Analysis: A Case Study of LIS Research
Directory of Open Access Journals (Sweden)
Ling Wei
2016-11-01
Full Text Available Purpose: Based on the weak tie theory, this paper proposes a series of connection indicators of weak tie subnets and weak tie nodes to detect research topics, recognize their connections, and understand their evolution. Design/methodology/approach: First, keywords are extracted from article titles and preprocessed. Second, high-frequency keywords are selected to generate weak tie co-occurrence networks. By removing the internal lines of clustered sub-topic networks, we focus on the analysis of the weak tie subnets' composition and functions and the weak tie nodes' roles. Findings: The research topics' clusters and themes changed yearly; the subnets clustered around technique-related and methodology-related topics have been the core, important subnets for years; while close subnets are highly independent, research topics are generally concentrated and most topics are application-related; the roles and functions of nodes and weak ties are diversified. Research limitations: The parameter values are somewhat inconsistent; the weak tie subnets and nodes are classified based on empirical observations, and the conclusions are not verified or compared to other methods. Practical implications: The research is valuable for detecting important research topics as well as their roles, interrelations, and evolution trends. Originality/value: To strengthen weak tie theory, the research translates the concepts of weak and strong ties into co-occurrence strength and analyzes the functions of weak ties. Also, the research proposes a quantitative method to classify and measure the topics' clusters and nodes.
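The co-occurrence step described under Design/methodology/approach can be sketched as follows; the threshold-based split into weak and strong ties and all names below are illustrative assumptions (the paper classifies subnets and nodes empirically):

```python
from collections import Counter
from itertools import combinations

def tie_strengths(keyword_sets):
    """Count pairwise keyword co-occurrences across documents.

    Each element of keyword_sets is the set of (preprocessed) keywords
    of one article; the count attached to a pair is its tie strength.
    """
    ties = Counter()
    for kws in keyword_sets:
        for pair in combinations(sorted(set(kws)), 2):
            ties[pair] += 1
    return ties

def split_ties(ties, threshold=2):
    """Split ties into weak (< threshold) and strong (>= threshold)."""
    weak = {p for p, c in ties.items() if c < threshold}
    strong = {p for p, c in ties.items() if c >= threshold}
    return weak, strong
```

The resulting weighted edge list is the co-occurrence network from which sub-topic clusters and weak tie subnets are then derived.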
Impedance-Based High Frequency Resonance Analysis of DFIG System in Weak Grids
DEFF Research Database (Denmark)
Song, Yipeng; Wang, Xiongfei; Blaabjerg, Frede
2017-01-01
The impedance-based model of Doubly Fed Induction Generator (DFIG) systems, including the rotor part (Rotor Side Converter (RSC) and induction machine) and the grid part (Grid Side Converter (GSC) and its output filter), has been developed for analysis and mitigation of the Sub-Synchronous Resonance (SSR). However, the High Frequency Resonance (HFR) of DFIG systems due to the impedance interaction between the DFIG system and a parallel compensated weak network is often overlooked. This paper thus investigates the impedance characteristics of DFIG systems for the analysis of HFR. The influences of the rotor speed variation, the machine mutual inductance and the digital control delay are evaluated. Two resonance phenomena are revealed, i.e., 1) the series HFR between the DFIG system and the weak power grid; 2) the parallel HFR between the rotor part and the grid part of the DFIG system. The impedance...
Weak Memory Models with Matching Axiomatic and Operational Definitions
Zhang, Sizhuo; Vijayaraghavan, Muralidaran; Lustig, Dan; Arvind
2017-01-01
Memory consistency models are notorious for being difficult to define precisely, to reason about, and to verify. More than a decade of effort has gone into nailing down the definitions of the ARM and IBM Power memory models, and yet there still remain aspects of those models which (perhaps surprisingly) remain unresolved to this day. In response to these complexities, there has been somewhat of a recent trend in the (general-purpose) architecture community to limit new memory models to being ...
Noise-induced shifts in the population model with a weak Allee effect
Bashkirtseva, Irina; Ryashko, Lev
2018-02-01
We consider the Truscott-Brindley system of interacting phyto- and zooplankton populations with a weak Allee effect. We add random noise to the prey carrying capacity parameter and study how the noise affects the dynamic behavior of this nonlinear prey-predator model. Phenomena of stochastic excitement and noise-induced shifts in the zones of Andronov-Hopf bifurcation and canard explosion are analyzed on the basis of direct numerical simulation and the stochastic sensitivity functions technique. A relationship of these phenomena with transitions between order and chaos is discussed.
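The way noise enters, via the carrying-capacity parameter, can be illustrated with a toy one-species logistic equation in place of the full Truscott-Brindley system; the per-step parameter perturbation and all values below are assumptions for illustration only:

```python
import random

def noisy_logistic(p0=0.1, r=1.0, k=1.0, sigma=0.1, dt=0.01,
                   steps=1000, seed=1):
    """Forward-Euler integration of dp/dt = r*p*(1 - p/K) with the
    carrying capacity K redrawn with Gaussian noise at every step,
    mirroring how the paper randomizes the prey carrying capacity
    (toy model, not the full Truscott-Brindley system).
    """
    rng = random.Random(seed)
    p = p0
    traj = [p]
    for _ in range(steps):
        k_noisy = k + sigma * rng.gauss(0.0, 1.0)
        p = max(p + r * p * (1.0 - p / k_noisy) * dt, 1e-9)
        traj.append(p)
    return traj
```

For small sigma the trajectory fluctuates around the deterministic equilibrium; in the full two-species model the same kind of parametric noise can push the system across bifurcation thresholds, producing the shifts the paper analyzes.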
Operational Semantics of a Weak Memory Model inspired by Go
Fava, Daniel Schnetzer; Stolz, Volker; Valle, Stian
2017-01-01
A memory model dictates which values may be returned when reading from memory. In a parallel computing setting, the memory model affects how processes communicate through shared memory. The design of a proper memory model is a balancing act. On one hand, memory models must be lax enough to allow common hardware and compiler optimizations. On the other, the more lax the model, the harder it is for developers to reason about their programs. In order to alleviate the burden on programmers, a wea...
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.
Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-03-28
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues: the choice of the sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which only contain the transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is therefore omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract the weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
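Because the dictionary is fixed to a unit matrix, the core l1-penalized problem argmin_x 0.5*||y − x||^2 + lam*||x||_1 has an element-wise closed-form solution: soft thresholding. The sketch below shows that reduction; it is an illustrative re-implementation of the basic BP-denoising step, not the authors' modified MM code (which adds an impulsive feature-preserving factor):

```python
def soft_threshold(y, lam):
    """Closed-form solution of argmin_x 0.5*(y - x)**2 + lam*abs(x)."""
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0

def sparse_enhance(signal, lam):
    """Element-wise sparse coefficients for a unit-matrix dictionary:
    samples below the threshold lam (noise) are zeroed, while the
    impulsive components above it survive, shrunk by lam.
    """
    return [soft_threshold(y, lam) for y in signal]
```

Envelope analysis is then applied to the surviving coefficients to read off the fault characteristic frequency.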
Extraction of Weak Scatterer Features Based on Multipath Exploitation in Radar Imagery
Directory of Open Access Journals (Sweden)
Muhannad Almutiry
2017-01-01
Full Text Available We propose an improved solution to two problems. The first problem is caused by the sidelobe of a dominant scatterer masking a weak scatterer. The proposed solution is to suppress the dominant scatterer by modeling its electromagnetic effects as a secondary source or “extra dependent transmitter” in the measurement domain. The suppression of the dominant scatterer reveals the presence of the weak scatterer through the exploitation of multipath effects. The second problem is linearizing the mathematical forward model in the measurement domain. Improving the quality of the prediction by including multipath scattering effects (neglected under the Born approximation) allows us to solve the inverse problem. The multiple-bounce (multipath) scattering effect is the interaction of more than one target in the scene. Modeling reflections from one target towards another as a transmitting dipole adds the multiple scattering effects to the scattered field and permits us to solve a linear inverse problem without sophisticated solutions of a nonlinear matrix in the forward model. Simulation results are presented to validate the concept.
Singh, Saumya; Parikh, Tapan; Sandhu, Harpreet K; Shah, Navnit H; Malick, A Waseem; Singhal, Dharmendra; Serajuddin, Abu T M
2013-06-01
To present a novel approach for greatly enhancing the aqueous solubility of a model weakly basic drug, haloperidol, by using weak acids that do not form salts with the drug, and to attain a physically stable amorphous form of the drug by drying such aqueous solutions. The aqueous solubility of haloperidol in the presence of increasing concentrations of four different weak organic acids (malic, tartaric, citric, fumaric) was determined. Several concentrated aqueous solutions with differing drug-to-acid molar ratios were dried in a vacuum oven, and the dried materials were characterized by DSC, powder XRD, dissolution testing, and stability study. The acids were selected such that they would not form salts with haloperidol. Haloperidol solubility increased greatly with increasing concentrations of malic, tartaric and citric acids, reaching >300 mg/g of solution. In contrast to the haloperidol HCl aqueous solubility of 4 mg/g, this may be called supersolubilization. Fumaric acid did not cause such solubilization as it has low water solubility. The dried solids formed dispersions of amorphous haloperidol in acids that were either amorphous or partially crystalline. Amorphous haloperidol was physically stable and had a better dissolution rate than the HCl salt. A novel method of drug solubilization in aqueous media by acid-base interaction is presented. Physically stable amorphous systems of drugs may also be prepared by using this organic solvent-free approach.
Constraining the interacting dark energy models from weak gravity conjecture and recent observations
International Nuclear Information System (INIS)
Chen Ximing; Wang Bin; Pan Nana; Gong Yungui
2011-01-01
We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.
International Nuclear Information System (INIS)
Leite Lopes, J.
1976-01-01
A survey of the fundamental ideas on weak currents, such as CVC and PCAC, and a presentation of the Cabibbo current and the neutral weak currents according to the Salam-Weinberg model and the Glashow-Iliopoulos-Maiani model are given [fr
Induction, bounding, weak combinatorial principles, and the homogeneous model theorem
Hirschfeldt, Denis R; Shore, Richard A
2017-01-01
Goncharov and Peretyat'kin independently gave necessary and sufficient conditions for when a set of types of a complete theory T is the type spectrum of some homogeneous model of T. Their result can be stated as a principle of second order arithmetic, which is called the Homogeneous Model Theorem (HMT), and analyzed from the points of view of computability theory and reverse mathematics. Previous computability theoretic results by Lange suggested a close connection between HMT and the Atomic Model Theorem (AMT), which states that every complete atomic theory has an atomic model. The authors show that HMT and AMT are indeed equivalent in the sense of reverse mathematics, as well as in a strong computability theoretic sense and do the same for an analogous result of Peretyat'kin giving necessary and sufficient conditions for when a set of types is the type spectrum of some model.
Modeling of crack propagation in weak snowpack layers using the discrete element method
Gaume, J.; van Herwijnen, A.; Chambon, G.; Birkeland, K. W.; Schweizer, J.
2015-10-01
Dry-snow slab avalanches are generally caused by a sequence of fracture processes including (1) failure initiation in a weak snow layer underlying a cohesive slab, (2) crack propagation within the weak layer and (3) tensile fracture through the slab which leads to its detachment. During the past decades, theoretical and experimental work has gradually led to a better understanding of the fracture process in snow involving the collapse of the structure in the weak layer during fracture. This now allows us to better model failure initiation and the onset of crack propagation, i.e., to estimate the critical length required for crack propagation. On the other hand, our understanding of dynamic crack propagation and fracture arrest propensity is still very limited. To shed more light on this issue, we performed numerical propagation saw test (PST) experiments applying the discrete element (DE) method and compared the numerical results with field measurements based on particle tracking. The goal is to investigate the influence of weak layer failure and the mechanical properties of the slab on crack propagation and fracture arrest propensity. Crack propagation speeds and distances before fracture arrest were derived from the DE simulations for different snowpack configurations and mechanical properties. Then, in order to compare the numerical and experimental results, the slab mechanical properties (Young's modulus and strength) which are not measured in the field were derived from density. The simulations nicely reproduced the process of crack propagation observed in field PSTs. Finally, the mechanical processes at play were analyzed in depth which led to suggestions for minimum column length in field PSTs.
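The density-to-mechanical-properties step mentioned at the end of this abstract can be sketched with a power-law parameterization commonly used in snow mechanics; the prefactor and exponent below follow a standard parameterization and are illustrative assumptions, not necessarily the values used in the paper:

```python
def slab_modulus(density, c=5.07e9, rho_ice=917.0, n=5.13):
    """Estimate the slab Young's modulus (Pa) from density (kg/m^3)
    via E = C * (rho / rho_ice)**n, a power law of the type used to
    feed slab properties into the DE simulations. C and n here are
    taken from a common snow-mechanics parameterization and are
    illustrative.
    """
    return c * (density / rho_ice) ** n
```

For a typical slab density of 250 kg/m^3 this yields a modulus on the order of a few MPa, increasing steeply with density, which is why denser slabs sustain longer crack propagation before fracture arrest.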
Cranking model interpretation of weakly coupled bands in Hg isotopes
International Nuclear Information System (INIS)
Guttormsen, M.; Huebel, H.
1982-01-01
The positive-parity yrast states of the transitional sup(189-198)Hg isotopes are interpreted within the Bengtsson and Frauendorf version of the cranking model. The very sharp backbendings can be explained by small interaction matrix elements between the ground and s-bands. The experimentally observed large aligned angular momenta and the low band-crossing frequencies are well reproduced in the calculations. (orig.)
Numerical modeling of continental lithospheric weak zone over plume
Perepechko, Y. V.; Sorokin, K. E.
2011-12-01
The work is devoted to the development of magmatic systems in the continental lithosphere over diffluent mantle plumes. The areas of tension originating over them are accompanied by the appearance of fault zones and the formation of permeable channels through which magmatic melts are distributed. Numerical simulation of the dynamics of deformation fields in the lithosphere due to convection currents in the upper mantle, and of the formation of weakened zones that extend up to the upper crust and create the necessary conditions for the formation of intermediate magma chambers, has been carried out. A thermodynamically consistent non-isothermal model simulates the heat and mass transfer processes of a wide class of magmatic systems, as well as the process of strain localization in the lithosphere and its influence on the formation of high-permeability zones in the lower crust. The substance of the lithosphere is a rheologically heterophase medium, which is described by two-velocity hydrodynamics. This makes it possible to take into account the process of penetration of the melt from the asthenosphere into the weakened zone. The energy dissipation occurs mainly due to interfacial friction and inelastic relaxation of shear stresses. The results of the calculation reveal a nonlinear process of formation of porous channels and demonstrate the diversity of emerging dissipative structures, which are determined by the properties of both the heterogeneous lithosphere and the overlying crust. The mutual effect of a permeable channel and the corresponding filtration process of the melt on the mantle convection and the dynamics of the asthenosphere has been studied. The formation of dissipative structures in the heterogeneous lithosphere above mantle plumes occurs in accordance with the following scenario: initially, the elastic behavior of the heterophase lithosphere leads to the formation of a narrow, though sufficiently extensive, weakened zone with higher porosity. Further, the increase in the width of
Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness
International Nuclear Information System (INIS)
Dawson, P.; Paton, A.A.; Fleischer, C.C.
1990-01-01
This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)
HTSC-based composites as materials with high magnetic resistance in weak magnetic fields
Balaev, D A; Popkov, S I; Shajkhutdinov, K A; Petrov, M I
2001-01-01
The magnetoresistance of HTSC-based composites with the structures 1-2-3 + dielectric and HTSC + normal metal is studied. These composite materials exhibit a large magnetoresistance effect in weak magnetic fields over a wide temperature range. This behavior is explained on the basis of the irreversibility line in the HTSC, thermal fluctuations, and the network of Josephson-type weak links realized in the HTSC composites. The HTSC-based composites show high sensitivity to weak magnetic fields (up to 300 Oe) at liquid nitrogen temperature.
The Leaky Dielectric Model as a Weak Electrolyte Limit of an Electrodiffusion Model
Mori, Yoichiro; Young, Yuan-Nan
2017-11-01
The Taylor-Melcher (TM) model is the standard model for the electrohydrodynamics of poorly conducting leaky dielectric fluids under an electric field. The TM model treats the fluid as an ohmic conductor, without modeling ion dynamics. On the other hand, electrodiffusion models, which have been successful in describing electrokinetic phenomena, incorporate ionic concentration dynamics. Mathematical reconciliation between electrodiffusion models and the TM model has been a major issue in electrohydrodynamic theory. Here, we derive the TM model from an electrodiffusion model in which we explicitly model the electrochemistry of ion dissociation. We introduce a salt dissociation reaction in the bulk and take the limit of weak salt dissociation (corresponding to poor conductors in the TM model). Assuming a small Debye length, we derive the TM model with or without the surface charge advection term, depending upon the scaling of the relevant dimensionless parameters. Our analysis also gives a description of the ionic concentration distribution within the Debye layer, which hints at possible scenarios for electrohydrodynamic singularity formation. In our analysis we also allow for a jump in voltage across the liquid interface, which causes a drift velocity for a liquid drop under an electric field. YM is partially supported by NSF-DMS-1516978 and NSF-DMS-1620316. YNY is partially supported by NSF-DMS-1412789 and NSF-DMS-1614863.
Miller, Daniel C.; Maricle, Denise E.; Jones, Alicia M.
2016-01-01
Processing Strengths and Weaknesses (PSW) models have been proposed as a method for identifying specific learning disabilities. Three PSW models were examined for their ability to predict expert identified specific learning disabilities cases. The Dual Discrepancy/Consistency Model (DD/C; Flanagan, Ortiz, & Alfonso, 2013) as operationalized by…
3D Modeling of Ultrasonic Wave Interaction with Disbonds and Weak Bonds
Leckey, C.; Hinders, M.
2011-01-01
Ultrasonic techniques, such as the use of guided waves, can be ideal for finding damage in the plate and pipe-like structures used in aerospace applications. However, the interaction of waves with real flaw types and geometries can lead to experimental signals that are difficult to interpret. 3-dimensional (3D) elastic wave simulations can be a powerful tool in understanding the complicated wave scattering involved in flaw detection and for optimizing experimental techniques. We have developed and implemented parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate Lamb wave scattering from realistic flaws. This paper discusses simulation results for an aluminum-aluminum diffusion disbond and an aluminum-epoxy disbond and compares results from the disbond case to the common artificial flaw type of a flat-bottom hole. The paper also discusses the potential for extending the 3D EFIT equations to incorporate physics-based weak bond models for simulating wave scattering from weak adhesive bonds.
The Acid-Base Titration of a Very Weak Acid: Boric Acid
Celeste, M.; Azevedo, C.; Cavaleiro, Ana M. V.
2012-01-01
A laboratory experiment based on the titration of boric acid with strong base in the presence of d-mannitol is described. Boric acid is a very weak acid and direct titration with NaOH is not possible. An auxiliary reagent that contributes to the release of protons in a known stoichiometry facilitates the acid-base titration. Students obtain the…
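The effect of the auxiliary reagent can be illustrated with a short charge-balance calculation. This sketch is not from the article: boric acid is treated as a simple monoprotic acid HA, and the effective Ka of the borate-mannitol complex (`KA_COMPLEX`) is an assumed order-of-magnitude value chosen only to show why mannitol sharpens the endpoint.

```python
import numpy as np
from scipy.optimize import brentq

KW = 1.0e-14  # water autoprotolysis constant at 25 degrees C

def ph_during_titration(va_ml, ca, cb, vb_ml, ka):
    """pH when va_ml of weak acid HA (ca mol/L) has received vb_ml of NaOH (cb mol/L).

    Solves the charge balance [Na+] + [H+] = [OH-] + [A-] for [H+].
    """
    va, vb = va_ml / 1000.0, vb_ml / 1000.0
    c_a = ca * va / (va + vb)    # total acid after dilution
    c_na = cb * vb / (va + vb)   # sodium added with the titrant
    f = lambda h: c_na + h - KW / h - c_a * ka / (ka + h)
    return -np.log10(brentq(f, 1e-14, 1.0))

KA_BORIC = 5.8e-10    # boric acid, pKa ~ 9.24
KA_COMPLEX = 1.0e-5   # assumed effective Ka of the borate-mannitol complex

# pH jump across the equivalence point (25 mL of 0.1 M acid vs 0.1 M NaOH):
for ka, label in [(KA_BORIC, "without mannitol"), (KA_COMPLEX, "with mannitol")]:
    jump = (ph_during_titration(25, 0.1, 0.1, 26, ka)
            - ph_during_titration(25, 0.1, 0.1, 24, ka))
    print(f"{label}: pH jump near equivalence = {jump:.2f}")
```

With the larger effective Ka the jump is several pH units, sharp enough for a visual indicator; with the Ka of boric acid itself it is well under one unit, which is why direct titration with NaOH fails.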
International Nuclear Information System (INIS)
Shen, Colin Y.; Evans, Thomas E.
2004-01-01
A non-hydrostatic density-stratified hydrodynamic model with a free surface has been developed from the vorticity equations rather than the usual momentum equations. This approach has enabled the model to be obtained in two different forms, weakly non-hydrostatic and fully non-hydrostatic, with the computationally efficient weakly non-hydrostatic form applicable to motions having horizontal scales greater than the local water depth. The hydrodynamic model in both its weakly and fully non-hydrostatic forms is validated numerically using exact nonlinear non-hydrostatic solutions given by the Dubreil-Jacotin-Long equation for periodic internal gravity waves, internal solitary waves, and flow over a ridge. The numerical code is developed based on a semi-Lagrangian scheme and higher-order finite-difference spatial differentiation and interpolation. To demonstrate the applicability of the model to coastal ocean situations, the problem of tidal generation of internal solitary waves at a shelf break is considered. Simulations carried out with the model reproduce the evolution of solitary wave generation and propagation consistent with past results. Moreover, the weakly non-hydrostatic simulation is shown to compare favorably with the fully non-hydrostatic simulation. The capability of the present model to simulate relatively large-scale non-hydrostatic motions efficiently suggests that the weakly non-hydrostatic form of the model may be suitable for application in a large-area domain, while the computationally intensive fully non-hydrostatic form may be used in an embedded sub-domain where higher resolution is needed.
International Nuclear Information System (INIS)
Wojcicki, S.
1978-11-01
Lectures are given on weak decays from a phenomenological point of view, emphasizing new results and ideas and the relation of recent results to the new standard theoretical model. The general framework within which the weak decay is viewed and relevant fundamental questions, weak decays of noncharmed hadrons, decays of muons and the tau, and the decays of charmed particles are covered. Limitation is made to the discussion of those topics that either have received recent experimental attention or are relevant to the new physics. (JFP) 178 references
International Nuclear Information System (INIS)
Ogava, S.; Savada, S.; Nakagava, M.
1983-01-01
The use of weak interaction laws to study models of elementary particles is discussed. The most typical examples of weak interactions are the beta-decays of nucleons and muons. The beta-interaction is represented by quark currents in the form of a universal interaction of the V-A type. The universality of weak interactions is well confirmed using the e- and μ-channels of pion decay as examples. The hypothesis of the partially conserved axial current is applicable to the analysis of processes involving pions. In the framework of the four-flavour model, leptonic decays of hadrons are considered. Weak interactions without lepton participation are also considered, and the properties of neutral currents are described briefly.
Self-Similarity Based Corresponding-Point Extraction from Weakly Textured Stereo Pairs
Directory of Open Access Journals (Sweden)
Min Mao
2014-01-01
Full Text Available In low-textured areas of image pairs there are almost no points that can be detected by traditional methods; the information in these areas is not extracted by classical interest-point detectors. In this paper, a novel weakly textured point detection method is presented. Points with weakly textured characteristics are detected through the concept of symmetry. The proposed approach considers the gray-level variability of weakly textured local regions. The detection mechanism consists of three steps: region-similarity computation, candidate-point searching, and refinement of the weakly textured point set. A radius-scale selection mechanism and a texture-strength concept are used in the second and third steps, respectively. A matching algorithm based on sparse representation (SRM) is used for matching the detected points across images. The results obtained on image sets with different objects show high robustness of the method to background and intraclass variations as well as to different photometric and geometric transformations; the points detected by this method also complement those detected by classical detectors from the literature. We further verify the efficacy of SRM by comparing it with classical algorithms under occlusion and corruption when matching the weakly textured points. Experiments demonstrate the effectiveness of the proposed weakly textured point detection algorithm.
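The three-step mechanism is described only qualitatively above. The toy sketch below illustrates the flavour of the first two steps (region similarity, then candidate searching) using a 180-degree-rotation symmetry score; the similarity measure, all thresholds, and the synthetic image are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def weak_texture_candidates(img, r=5, sym_thresh=0.9, var_thresh=25.0):
    """Toy sketch: keep low-variance pixels whose local patch correlates
    strongly with its own 180-degree rotation (a simple symmetry score)."""
    h, w = img.shape
    pts = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            p = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            if p.var() > var_thresh:     # textured enough for classic detectors
                continue
            q = np.rot90(p, 2)           # 180-degree rotated patch
            num = ((p - p.mean()) * (q - q.mean())).sum()
            den = np.sqrt(((p - p.mean()) ** 2).sum() * ((q - q.mean()) ** 2).sum())
            sym = num / den if den > 1e-9 else 1.0  # flat patch: trivially symmetric
            if sym > sym_thresh:
                pts.append((x, y))
    return pts

# Flat synthetic image with a faint symmetric blob: a corner detector finds
# nothing here, but the symmetry score still yields candidate points.
img = np.full((32, 32), 100.0)
yy, xx = np.mgrid[0:32, 0:32]
img += 3.0 * np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 20.0)
print(len(weak_texture_candidates(img)), "candidate points")
```

A real implementation would follow this with the paper's third step, pruning the candidate set by texture strength.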
Osteolytic Breast Cancer Causes Skeletal Muscle Weakness in an Immunocompetent Syngeneic Mouse Model
Directory of Open Access Journals (Sweden)
Jenna N. Regan
2017-12-01
Full Text Available Muscle weakness and cachexia are significant paraneoplastic syndromes of many advanced cancers. Osteolytic bone metastases are common in advanced breast cancer and are a major contributor to decreased survival, performance, and quality of life for patients. Pathologic fracture caused by osteolytic cancer in bone (OCIB) leads to a significant (32%) increased risk of death compared to patients without fracture. Since muscle weakness is linked to risk of falls, which are a major cause of fracture, we have investigated the skeletal muscle response to OCIB. Here, we show that a syngeneic mouse model of OCIB (4T1 mammary tumor cells) leads to cachexia and skeletal muscle weakness associated with oxidation of the ryanodine receptor and calcium (Ca2+) release channel (RyR1). Muscle atrophy follows known pathways via both myostatin signaling and expression of the muscle-specific ubiquitin ligases atrogin-1 and MuRF1. We have identified a mechanism for skeletal muscle weakness due to increased oxidative stress on RyR1 via NADPH oxidases [NADPH oxidase 2 (Nox2) and NADPH oxidase 4 (Nox4)]. In addition, SMAD3 phosphorylation is higher in muscle from tumor-bearing mice, a critical step in the intracellular signaling pathway that transmits TGFβ signaling to the nucleus. This is the first time that skeletal muscle weakness has been described in a syngeneic model of OCIB, and it represents a unique model system in which to study cachexia and changes in skeletal muscle.
Directory of Open Access Journals (Sweden)
Ayman G. Abdel Tawab
2012-06-01
Full Text Available The interest in securing and sustaining the townscape and urban values of the historic environment has escalated in response to the writings of intellectuals such as Kevin Lynch and Gordon Cullen. Such interest is reflected in governments' introduction of statutory tools allowing them to designate urban areas within whose boundaries the historic environment can be given statutory protection. The earliest European attempt to introduce such tools was the Dutch establishment of the model of conservation areas known as “Protected Town and Village Views” in 1961. In 1962, the renowned Malraux Act officially established the similar French model of protected areas known as “Secteurs Sauvegardés”. The introduction of such tools marked the emergence of what was later called area-based conservation. In Egypt, the enactment of Act No. 119 in 2008, and the establishment of the model of protected areas known as “Areas Enjoying a Distinctive Value”, seem to have marked the emergence of the Egyptian official experience of area-based conservation. The main aim of this study was to review the key features of the emerging Egyptian experience of area-based conservation and to unveil its strengths and weaknesses. The study approached the issue by means of a comparative analysis conducted among a group of adopted case studies: the British, Dutch, Egyptian, French, Irish and Maltese experiences of area-based conservation, in addition to the experiences of international institutions. The findings indicated that the centralized approach adopted to designate the Egyptian “Areas Enjoying a Distinctive Value” seems to be the major weakness of the Egyptian experience, and they suggest further strengthening the role of the Egyptian local authorities in the management of such designated areas.
Directory of Open Access Journals (Sweden)
Jun Li
2018-03-01
Full Text Available Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line-array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accomplished by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and a compressed sensing reconstruction algorithm. Analysis and processing of simulation data and marine trial data show that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them, and can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response (MVDR) beamformer, this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection of weak targets.
Rong, Y; Padron, A V; Hagerty, K J; Nelson, N; Chi, S; Keyhani, N O; Katz, J; Datta, S P A; Gomes, C; McLamore, E S
2018-04-09
Impedimetric biosensors for measuring small molecules based on weak/transient interactions between bioreceptors and target analytes are a challenge for detection electronics, particularly in field studies or in the analysis of complex matrices. Protein-ligand binding sensors have enormous potential for biosensing, but achieving accuracy in complex solutions is a major challenge. There is a need for simple post hoc analytical tools that are not computationally expensive, yet provide near real-time feedback on data derived from impedance spectra. Here, we show the use of a simple, open-source support vector machine learning algorithm for analyzing impedimetric data in lieu of equivalent circuit analysis. We demonstrate two different protein-based biosensors to show that the tool can be used for various applications. We conclude with a mobile-phone-based demonstration focused on the measurement of acetone, an important biomarker related to the onset of diabetic ketoacidosis. In all conditions tested, the open-source classifier was capable of performing as well as, or better than, the equivalent circuit analysis for characterizing weak/transient interactions between a model ligand (acetone) and a small chemosensory protein derived from the tsetse fly. In addition, the tool has a low computational requirement, facilitating its use in mobile acquisition systems such as mobile phones. The protocol is deployed through Jupyter notebook, an open-source web-based computing environment available for mobile phone, tablet or computer use; the code was written in Python and is based on scikit-learn, an open-source machine learning library for Python. For each of the applications, we provide step-by-step instructions in English, Spanish, Mandarin and Portuguese to facilitate widespread use. The tool can easily be integrated with the mobile biosensor equipment for rapid
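The abstract names scikit-learn and a support vector machine but gives no code. A minimal sketch of that workflow might look as follows; the synthetic "spectra" (rows of |Z| values) and the class shift mimicking a binding event are stand-in assumptions, not the authors' data or feature set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for impedance spectra: each row is |Z| sampled at 50
# frequencies; class-1 spectra get a small uniform shift mimicking a weak
# ligand-binding event. Real rows would come from the impedance analyzer.
n, n_freq = 200, 50
X = rng.normal(0.0, 1.0, size=(n, n_freq))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.4  # hypothetical binding signature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Standardize, then fit an RBF-kernel SVM: cheap enough for a phone-class CPU,
# and no equivalent-circuit fitting is required.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The same pipeline runs unchanged in a Jupyter notebook, which is how the authors report deploying their tool.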
Weak first-order orientational transition in the Lebwohl-Lasher model for liquid crystals
DEFF Research Database (Denmark)
Zhang, Zhengping; Mouritsen, Ole G.; Zuckermann, Martin J.
1992-01-01
The nature of the orientational phase transition in the three-dimensional Lebwohl-Lasher model of liquid crystals has been studied by computer simulation using reweighting techniques and finite-size scaling analysis. Unambiguous numerical evidence is found in favor of a weak first-order transition...
Susceptibility and Phase Transitions in the Pseudospin-Electron Model at Weak Coupling
International Nuclear Information System (INIS)
Stasyuk, I.V.; Mysakovych, T.S.
2003-01-01
The pseudospin-electron model (PEM) is considered in the case of the weak pseudospin-electron coupling. It is shown that the transition to uniform and chess-board phases occurs when the chemical potential is situated near the electron band edges and near the band centre, respectively. The incommensurate phase is realized at the intermediate values of the chemical potential. (author)
Enhanced LVRT Control Strategy for DFIG-Based WECS in Weak Grid
DEFF Research Database (Denmark)
Abulanwar, Elsayed; Chen, Zhe; Iov, Florin
2013-01-01
An enhanced coordinated low voltage ride-through, LVRT, control strategy for a Doubly-fed Induction generator (DFIG)-based wind energy conversion system, WECS, connected to a weak grid is presented in this paper. The compliance with the grid code commitments is also considered. A proposed decoupled...
Critical currents in ballistic two-dimensional InAs-based superconducting weak links
Heida, J.P.; Wees, B.J. van; Klapwijk, T.M.; Borghs, G.
1999-01-01
The critical supercurrent Ic carried by a short (0.3 to 0.8 µm) ballistic two-dimensional InAs-based electron gas between superconducting niobium electrodes is studied. In relating the maximum value to the resistance of the weak link in the normal state Rn a much lower value is found than
Characteristics of weak base-induced vacuoles formed around individual acidic organelles.
Hiruma, Hiromi; Kawakami, Tadashi
2011-01-01
We have previously found that the weak base 4-aminopyridine induces Brownian motion of acidic organelles, around which vacuoles are formed, causing organelle traffic disorder in neurons. Our present study investigated the characteristics of vacuoles induced by weak bases (NH(4)Cl, aminopyridines, and chloroquine) using mouse cells. Individual vacuoles included acidic organelles identified by fluorescent protein expression. Mitochondria and actin filaments were extruded outside the vacuoles, composing the vacuole rim. Staining with an amine-reactive fluorescent dye showed no protein/amino acid content in the vacuoles. Thus, the serous vacuolar contents are probably partitioned by viscous cytosol, other organelles, and cytoskeletons, but not by a membrane. The weak base (chloroquine) was immunochemically detected in intravacuolar organelles, but not in the vacuoles. Early vacuolization was reversible, but long-term vacuolization caused cell death. Both vacuolization and cell death were blocked by a vacuolar H(+)-ATPase inhibitor and by Cl(-)-free medium. Staining with LysoTracker or LysoSensor indicated that intravacuolar organelles were strongly acidic and the vacuoles slightly acidic. This suggests that vacuolization is caused by accumulation of weak base and H(+) in acidic organelles, driven by vacuolar H(+)-ATPase and accompanied by Cl(-) entry, and probably by subsequent extrusion of H(+) and water from the organelles to the surrounding cytoplasm.
Li, Xu; Wang, Jun; Zhang, Jiao; Han, Yanfeng; Li, Xi
2015-01-01
A Ni-based superalloy CMSX-6 was directionally solidified at various drawing speeds (5–20 μm·s⁻¹) and diameters (4 mm, 12 mm) under a 0.5 T weak transverse magnetic field. The results show that the application of a weak transverse magnetic field significantly modified the solidification microstructure. It was found that if the drawing speed was lower than 10 μm·s⁻¹, the magnetic field caused extensive macro-segregation in the mushy zone and a change in the mushy zone length. The magnetic fie...
A new physics-based method for detecting weak nuclear signals via spectral decomposition
International Nuclear Information System (INIS)
Chan, Kung-Sik; Li, Jinzheng; Eichinger, William; Bai, Erwei
2012-01-01
We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each signature in the library to be a linear combination of subspectra, obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclide. We propose a Poisson regression model for deducing which nuclides are present in the observed spectrum. In recognition that a radiation source generally comprises few nuclear materials, the underlying Poisson model is sparse, i.e. most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations over a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with SNR as low as −15 dB.
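A sparsity-promoting penalized Poisson likelihood of the kind described above can be sketched as follows. This is not the authors' algorithm: the Gaussian subspectra, the L1 penalty weight, and the projected proximal-gradient (ISTA) solver are illustrative choices standing in for the paper's spectral library and iterative estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical library: 8 nuclide "subspectra" over 64 channels, modelled
# here as broad Gaussians (stand-ins for detector-blurred gamma lines).
n_chan, n_nuc = 64, 8
chan = np.arange(n_chan)
centers = np.linspace(6.0, 58.0, n_nuc)
A = np.exp(-0.5 * ((chan[:, None] - centers[None, :]) / 3.0) ** 2)

# Ground truth: only two nuclides present (sparse), plus a flat background.
x_true = np.zeros(n_nuc)
x_true[2], x_true[5] = 40.0, 25.0
background = 1.0
y = rng.poisson(A @ x_true + background).astype(float)

# L1-penalized Poisson negative log-likelihood, minimized by projected
# proximal gradient (ISTA); the nonnegativity projection encodes activity >= 0.
lam, step = 2.0, 1e-3
x = np.zeros(n_nuc)
for _ in range(20000):
    mu = A @ x + background          # expected counts per channel
    grad = A.T @ (1.0 - y / mu)      # gradient of the Poisson NLL
    x = np.maximum(0.0, x - step * (grad + lam))

print("estimated activities:", np.round(x, 1))  # mass concentrates on 2 and 5
```

The L1 term drives the coefficients of absent nuclides to exactly zero, which is the behaviour the abstract describes for its penalized estimator.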
International Nuclear Information System (INIS)
Walecka, J.D.
1983-01-01
Nuclei provide systems where the strong, electromagnetic, and weak interactions are all present. The current picture of the strong interactions is based on quarks and quantum chromodynamics (QCD). The symmetry structure of this theory is SU(3)_C × SU(2)_W × U(1)_W. The electroweak interactions in nuclei can be used to probe this structure. Semileptonic weak interactions are considered. The processes under consideration include beta decay, neutrino scattering and weak neutral-current interactions. The starting point of the analysis is the effective Lagrangian of the Standard Model.
Assessment of two-temperature kinetic model for dissociating and weakly-ionizing nitrogen
Park, C.
1986-01-01
The validity of the author's recently improved two-temperature chemical-kinetic model is assessed by comparing calculated results with existing experimental data for nitrogen in the dissociating and weakly ionizing regime produced behind a normal shock wave. The Shock Tube Radiation Program (STRAP), based on the two-temperature model, is used to calculate the flow properties behind the shock wave, and the Nonequilibrium Air Radiation (NEQAIR) program to determine the radiative characteristics of the flow. Both programs were developed earlier. Comparison is made between the calculations and the existing shock-tube data on (1) spectra in the equilibrium region, (2) rotational temperature of the N2(+) B state, (3) vibrational temperature of the N2(+) B state, (4) electronic excitation temperature of the N2 B state, (5) the shape of the time variation of radiation intensities, (6) the times to reach the peak in radiation intensity and equilibrium, and (7) the ratio of nonequilibrium to equilibrium radiative heat fluxes. Good agreement is seen between the experimental data and the present calculation except for the vibrational temperature. A possible reason for the discrepancy is given.
Unitary standard model from spontaneous dimensional reduction and weak boson scattering at the LHC
He, Hong-Jian; Xianyu, Zhong-Zhi
2013-04-01
Spontaneous dimensional reduction (SDR) is a striking phenomenon predicted by a number of quantum gravity approaches which all indicate that the spacetime dimensions get reduced at high energies. In this work, we formulate an effective theory of electroweak interactions based upon the standard model, incorporating the spontaneous reduction of space-dimensions at TeV scale. The electroweak gauge symmetry is nonlinearly realized with or without a Higgs boson. We demonstrate that the SDR ensures good high-energy behavior and predicts unitary weak boson scattering. For a light Higgs boson of mass 125 GeV, the TeV scale SDR gives a natural solution to the hierarchy problem. Such a light Higgs boson can have induced anomalous gauge couplings from the TeV scale SDR. We find that the corresponding WW scattering cross sections become unitary at TeV scale, but exhibit different behaviors from that of the 4d standard model. These can be discriminated by the WW scattering experiments at the LHC.
Retrieval-based Face Annotation by Weak Label Regularized Local Coordinate Coding.
Wang, Dayong; Hoi, Steven C H; He, Ying; Zhu, Jianke; Mei, Tao; Luo, Jiebo
2013-08-02
Retrieval-based face annotation is a promising paradigm of mining massive web facial images for automated face annotation. This paper addresses a critical problem of such paradigm, i.e., how to effectively perform annotation by exploiting the similar facial images and their weak labels which are often noisy and incomplete. In particular, we propose an effective Weak Label Regularized Local Coordinate Coding (WLRLCC) technique, which exploits the principle of local coordinate coding in learning sparse features, and employs the idea of graph-based weak label regularization to enhance the weak labels of the similar facial images. We present an efficient optimization algorithm to solve the WLRLCC task. We conduct extensive empirical studies on two large-scale web facial image databases: (i) a Western celebrity database with a total of 6,025 persons and 714,454 web facial images, and (ii) an Asian celebrity database with 1,200 persons and 126,070 web facial images. The encouraging results validate the efficacy of the proposed WLRLCC algorithm. To further improve the efficiency and scalability, we also propose a PCA-based approximation scheme and an offline approximation scheme (AWLRLCC), which generally maintains comparable results but significantly saves much time cost. Finally, we show that WLRLCC can also tackle two existing face annotation tasks with promising performance.
Weak ωNN coupling in the non-linear chiral model
International Nuclear Information System (INIS)
Shmatikov, M.
1988-01-01
In the non-linear chiral model with the soliton solution stabilized by the ω-meson field, the weak ωNN coupling constants are calculated. Applying the vector dominance model for the isoscalar current, the constant of the isoscalar P-odd ωNN interaction is found to be h_ω^(0) = 0, while the constant of the isovector P-odd ωNN interaction proves to be h_ω^(1) ≅ 1.0×10^(-7).
Ion exchange behaviour of citrate and EDTA anions on strong and weak base organic ion exchangers
International Nuclear Information System (INIS)
Askarieh, M.M.; White, D.A.
1988-01-01
The exchange of citrate and EDTA ions with two strong base and two weak base exchangers is considered. Citrate and EDTA analysis for this work was performed using a colorimetric method developed here. The ions most selectively exchanged on the resins are H₂cit⁻ and H₂EDTA²⁻, though EDTA is generally less strongly sorbed on strong base resins. In contact with weak base resins, deprotonation of the resin occurs during ion exchange, with a noticeable drop in solution pH. Although EDTA sorption can be reversed by nitric acid, citrate ions are significantly held on the resin at low pH. The exchange of citrate can be made reversible if bicarbonate is added to the initial solutions. Alkaline regeneration of exchangers loaded with EDTA proved to be very effective. (author)
A weak instrument F-test in linear IV models with multiple endogenous variables.
Sanderson, Eleanor; Windmeijer, Frank
2016-02-01
We consider testing for weak instruments in a model with multiple endogenous variables. Unlike Stock and Yogo (2005), who considered a weak instruments problem where the rank of the matrix of reduced-form parameters is near zero, here we consider a weak instruments problem of a near rank reduction of one in the matrix of reduced-form parameters. For example, in a two-variable model, we consider weak instrument asymptotics of the form [Formula: see text] where [Formula: see text] and [Formula: see text] are the parameters in the two reduced-form equations, [Formula: see text] is a vector of constants and [Formula: see text] is the sample size. We investigate the use of a conditional first-stage F-statistic along the lines of the proposal by Angrist and Pischke (2009) and show that, unless [Formula: see text], the variance in the denominator of their F-statistic needs to be adjusted in order to obtain a correct asymptotic distribution when testing the hypothesis [Formula: see text]. We show that a corrected conditional F-statistic is equivalent to the Cragg and Donald (1993) minimum eigenvalue rank test statistic, and is informative about the maximum total relative bias of the 2SLS estimator and the size distortions of Wald tests. When [Formula: see text] in the two-variable model, or when there are more than two endogenous variables, further information over and above the Cragg-Donald statistic can be obtained by computing the conditional first-stage F-statistics.
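The Cragg-Donald minimum-eigenvalue statistic mentioned above is straightforward to compute. The numpy sketch below uses simulated data with a near rank-one first stage (the weak-instrument setting of the paper); the design, parameter values, and the degrees-of-freedom convention for the error covariance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 1000, 4, 2   # observations, instruments, endogenous regressors

# Near rank-one first stage: both endogenous variables load on almost the
# same instrument combination (a near rank reduction of one).
Z = rng.normal(size=(n, m))
pi1 = np.array([1.0, 0.5, 0.0, 0.0])
pi2 = 0.9 * pi1 + np.array([0.0, 0.0, 0.05, 0.0])   # nearly collinear with pi1
X = np.column_stack([Z @ pi1, Z @ pi2]) + rng.normal(size=(n, k))

# Cragg-Donald minimum-eigenvalue statistic:
#   g_min = mineig( Sigma^{-1/2} X'P_Z X Sigma^{-1/2} ) / m,
# with P_Z the projection onto Z and Sigma = X'M_Z X / (n - m).
Pz = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
XPX = X.T @ Pz @ X
Sigma = X.T @ (X - Pz @ X) / (n - m)
w, U = np.linalg.eigh(Sigma)
S_inv_half = U @ np.diag(w ** -0.5) @ U.T   # symmetric inverse square root
g_min = np.linalg.eigvalsh(S_inv_half @ XPX @ S_inv_half).min() / m
print("Cragg-Donald minimum eigenvalue statistic:", round(g_min, 2))
```

Because the second reduced-form direction is nearly redundant, g_min comes out small, well below conventional strong-instrument thresholds, even though each first-stage regression taken on its own looks strong.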
Fashion Evaluation Method for Clothing Recommendation Based on Weak Appearance Feature
Directory of Open Access Journals (Sweden)
Yan Zhang
2017-01-01
Full Text Available With the rapid rise in living standards, people have gradually developed greater shopping enthusiasm and an increasing demand for garments. Nowadays, an increasing number of people pursue fashion. However, facing too many types of garments, consumers need to try them on repeatedly, which is somewhat time- and energy-consuming. Besides, it is difficult for merchants to grasp the real-time demand of consumers, so there is not enough cohesion between consumer information and merchants. Thus, a novel fashion evaluation method based on weak appearance features is proposed in this paper. First of all, an image database is established and three aspects of weak appearance features are put forward to characterize the fashion level. Furthermore, the weak appearance features are extracted according to a facial feature localization method. Last but not least, consumers' fashion level is classified with a support vector machine, and the classification is verified with the analytic hierarchy process. The experimental results show that consumers' fashion level can be accurately described based on the weak appearance feature indexes and that the approach has high application value for clothing recommendation systems.
Standard Model Higgs boson searches in the weak boson decay channels with the ATLAS detector
Carrillo-Montoya, Germán; Wu, Sau Lan
The search for the Standard Model Higgs boson decaying into a pair of weak bosons with the subsequent leptonic decay of the $W$ or $Z$ bosons is presented. The contributions achieved by this work range from the reevaluation of Higgs boson normalisation cross-sections, to the development of the analysis strategies using detailed Monte Carlo simulations, and the search results for the $H \to ZZ \to l^{+}l^{-}$ ...
On weak solutions to a diffuse interface model of a binary mixture of compressible fluids
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard
2016-01-01
Roč. 9, č. 1 (2016), s. 173-183 ISSN 1937-1632 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords : Euler-Cahn-Hilliard system * weak solution * diffuse interface model Subject RIV: BA - General Mathematics Impact factor: 0.781, year: 2016 http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=12093
A Quantum Proxy Weak Blind Signature Scheme Based on Controlled Quantum Teleportation
Cao, Hai-Jing; Yu, Yao-Feng; Song, Qin; Gao, Lan-Xiang
2015-04-01
Proxy blind signature is applied to electronic payment systems, electronic voting systems, mobile agent systems, internet security, etc. A quantum proxy weak blind signature scheme based on controlled quantum teleportation is proposed in this paper. A five-qubit entangled state functions as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement message blinding, so it can guarantee not only the unconditional security of the scheme but also the anonymity of the message's owner.
Standard systems for measurement of pK values and ionic mobilities 2. Univalent weak bases
Czech Academy of Sciences Publication Activity Database
Šlampová, Andrea; Křivánková, Ludmila; Gebauer, Petr; Boček, Petr
2009-01-01
Roč. 1216, č. 17 (2009), s. 3637-3641 ISSN 0021-9673 R&D Projects: GA AV ČR IAA400310609; GA AV ČR IAA400310703; GA ČR GA203/08/1536 Institutional research plan: CEZ:AV0Z40310501 Keywords : CZE * dissociation constant * ionic mobility * univalent weak bases Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 4.101, year: 2009
Evaluating of arsenic(V) removal from water by weak-base anion exchange adsorbents.
Awual, M Rabiul; Hossain, M Amran; Shenashen, M A; Yaita, Tsuyoshi; Suzuki, Shinichi; Jyo, Akinori
2013-01-01
Arsenic contamination of groundwater has been called the largest mass poisoning calamity in human history and creates severe health problems. Effective adsorbents are imperative for the widespread removal of toxic arsenic from drinking water. The removal of arsenic(V) from water by weak-base anion exchange adsorbents was evaluated in this paper, aiming to determine the effects of pH, competing anions, and feed flow rate on remediation. Two types of weak-base adsorbents were used to evaluate arsenic(V) removal efficiency in both batch and column approaches. Anion selectivity was determined for both adsorbents in the batch method as equilibrium As(V) adsorption capacities. Column studies were performed in fixed-bed experiments using columns packed with each adsorbent, and kinetic performance depended on the feed flow rate and competing anions. The weak-base adsorbents proved selective for arsenic(V) over competing chloride, nitrate, and sulfate anions. The solution pH played an important role in arsenic(V) removal, and a higher pH can cause lower adsorption capacities. Low concentration levels of arsenic(V) were also removed by these adsorbents even at high flow rates of 250-350 h⁻¹. Adsorbed arsenic(V) was quantitatively eluted with 1 M HCl acid, which simultaneously regenerated the adsorbents into hydrochloride form for the next adsorption operation after rinsing with water. Weak-base anion exchange adsorbents are thus an effective means of removing arsenic(V) from drinking water. The fast adsorption rate and excellent adsorption capacity in the neutral pH range make this removal technique attractive for practical use in the chemical industry.
Survey of Quantification and Distance Functions Used for Internet-based Weak-link Sociological Phenomena (Final Report)
2016-03-01
... have recently become emplaced in and accessible through the Internet. Worldwide, internet usage is increasing at an astounding rate ...
International Nuclear Information System (INIS)
Moral, A. del; Azanza, María J.
2015-01-01
A biomagnetic-electrical model is presented that explains rather well the experimentally observed synchronization of the bioelectric potential firing rate ("frequency"), f, of single unit neurons of the Helix aspersa mollusc under the application of extremely low frequency (ELF) weak alternating (AC) magnetic fields (MF). The proposed model incorporates into our widely tested model of superdiamagnetism (SD) and Ca²⁺ Coulomb explosion (CE) from the lipid (LP) bilayer membrane (SD-CE model) the electrical quadrupolar long-range interaction between the bilayer LP membranes of synchronized neuron pairs, not considered before. The quadrupolar interaction is capable of explaining the observed synchronization well. The extension of our SD-CE model shows that the field (B) dependence of the neuron firing frequency is not modified, but the bioelectric frequency is decreased and its spontaneous temperature (T) dependence is modified. A comparison of the model with synchronization experiments on pairs of neurons under weak (B₀ ≅ 0.2–15 mT) AC-MF of frequency f_M = 50 Hz is reported. From the deduced size of the synchronized LP clusters under B, the formation of small neuron networks via membrane lipid correlation is suggested. - Highlights: • Neuron pair synchronization under low frequency alternating (AC) magnetic field (MF). • Superdiamagnetism and Ca²⁺ Coulomb explosion for AC MF effect in synchronized frequency. • Membrane lipid electrical quadrupolar pair interaction as synchronization mechanism. • Good agreement of model with electrophysiological experiments on mollusc Helix neurons
The Weak Charge of the Proton. A Search For Physics Beyond the Standard Model
Energy Technology Data Exchange (ETDEWEB)
MacEwan, Scott J. [Univ. of Manitoba, Winnipeg, MB (Canada)
2015-05-01
The Q_{weak} experiment, which completed running in May of 2012 at Jefferson Laboratory, has measured the parity-violating asymmetry in elastic electron-proton scattering at four-momentum transfer Q^{2} = 0.025 (GeV/c)^{2} in order to provide the first direct measurement of the proton's weak charge, Q_{W}^{p}. The Standard Model makes firm predictions for the weak charge; deviations from the predicted value would provide strong evidence of new physics beyond the Standard Model. With an 89% polarized electron beam at 145 μA scattering from a 34.4 cm long liquid hydrogen target, scattered electrons were detected using an array of eight fused-silica detectors placed symmetrically about the beam axis. The parity-violating asymmetry was then measured by reversing the helicity of the incoming electrons and measuring the normalized difference in rate seen in the detectors. The low Q^{2} enables a theoretically clean measurement; the higher-order hadronic corrections are constrained using previous parity-violating electron scattering world data. The experimental method will be discussed, along with recent results constituting 4% of the total data and projections of the proposed uncertainties on the full data set.
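The helicity-reversed asymmetry described here is simply a normalized rate difference. A tiny sketch with made-up rates (the real Q_{weak} asymmetry is at the parts-per-billion scale, far smaller than these illustrative numbers):

```python
def pv_asymmetry(rate_plus, rate_minus):
    """Parity-violating asymmetry from helicity-reversed detector rates:
    A = (R+ - R-) / (R+ + R-)."""
    return (rate_plus - rate_minus) / (rate_plus + rate_minus)

# Made-up rates, chosen only to show the arithmetic of the normalized difference.
A_pv = pv_asymmetry(1.0000002, 0.9999998)   # about 2e-7
```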
Landsgesell, Jonas; Holm, Christian; Smiatek, Jens
2017-02-14
We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
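As a rough illustration of the Wang-Landau half of the combined scheme, the sketch below estimates the density of states over the number of protonated sites of an ideal (non-interacting) chain, for which the exact answer is binomial. The reaction-ensemble chemistry and all interactions are omitted; only the flat-histogram machinery is shown, with made-up parameters:

```python
import math, random

def wang_landau_binomial(n_sites=4, f_final=1e-6, flat=0.9, seed=1):
    """Flat-histogram Wang-Landau estimate of ln g(n), the log density of
    states over the number n of protonated sites of an ideal n_sites-site
    chain; the exact answer is ln C(n_sites, n)."""
    rng = random.Random(seed)
    state = [0] * n_sites                  # microstate: protonation pattern
    n = 0                                  # macrostate: number of protonated sites
    ln_g = [0.0] * (n_sites + 1)
    hist = [0] * (n_sites + 1)
    f = 1.0                                # modification factor
    while f > f_final:
        site = rng.randrange(n_sites)      # propose flipping one site
        n_new = n + (1 if state[site] == 0 else -1)
        # accept with min(1, g(n)/g(n_new)) so that visits flatten in n
        if rng.random() < math.exp(min(0.0, ln_g[n] - ln_g[n_new])):
            state[site] ^= 1
            n = n_new
        ln_g[n] += f
        hist[n] += 1
        if min(hist) > flat * sum(hist) / len(hist):   # histogram flat enough?
            f /= 2.0                                   # refine the estimate
            hist = [0] * (n_sites + 1)
    base = ln_g[0]                         # normalize: g(0) = 1 (one empty state)
    return [x - base for x in ln_g]

ln_g = wang_landau_binomial()              # should approach ln [1, 4, 6, 4, 1]
```

Once ln g(n) is known, partition sums and observables such as titration curves follow by reweighting, which is the advantage over plain reaction-ensemble sampling that the abstract describes.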
High Frequency Resonance Damping of DFIG based Wind Power System under Weak Network
DEFF Research Database (Denmark)
Song, Yipeng; Wang, Xiongfei; Blaabjerg, Frede
2017-01-01
When operating in a micro or weak grid, which has a relatively large network impedance, the Doubly Fed Induction Generator (DFIG) based wind power generation system is prone to suffer high frequency resonance due to the impedance interaction between the DFIG system and the parallel compensated network (series RL + shunt C). In order to improve the performance of the DFIG system as well as of other units and loads connected to the weak grid, the high frequency resonance needs to be effectively damped. In this paper, the proposed active damping control strategy is able to implement effective damping either in the Rotor Side Converter (RSC) or in the Grid Side Converter (GSC), through the introduction of a virtual positive capacitor or a virtual negative inductor to reshape the DFIG system impedance and mitigate the high frequency resonance. A detailed theoretical explanation of the virtual positive capacitor ...
The strong-weak coupling symmetry in 2D Φ4 field models
Directory of Open Access Journals (Sweden)
B.N.Shalaev
2005-01-01
Full Text Available It is found that the exact beta-function β(g) of the continuous 2D gΦ4 model possesses two types of dual symmetries, these being the Kramers-Wannier (KW) duality symmetry and the strong-weak (SW) coupling symmetry f(g), or S-duality. All these transformations are explicitly constructed. The S-duality transformation f(g) is shown to connect domains of weak and strong couplings, i.e. above and below g*. Basically it means that there is a tempting possibility to compute multiloop Feynman diagrams for the β-function using high-temperature lattice expansions. The regular scheme developed is found to be strongly unstable. Approximate values of the renormalized coupling constant g* found from the duality symmetry equations are in agreement with available numerical results.
Electromagnetic and weak observables in the context of the shell model
International Nuclear Information System (INIS)
Wildenthal, B.H.
1984-01-01
Wave functions for A = 17-39 nuclei have been obtained from diagonalizations of a single Hamiltonian formulation in the complete sd-shell configuration space for each NTJ system. These wave functions are used to generate the one-body density matrices corresponding to weak and electromagnetic transitions and moments. These densities are combined with different assumptions for the single-particle matrix elements of the weak and electromagnetic operators to produce theoretical matrix elements. The predictions are compared with experiment to determine, in some ''linearly dependent'' fashion, the correctness of the wave functions themselves, the optimum values of the single-particle matrix elements, and the viability of the overall shell-model formulation. (author)
Global weak solutions for a gas liquid model with external forces and general pressure law
Evje, Steinar; Friis, Helmer André
2011-01-01
This is a copy of an article previously published in SIAM Journal on Applied Mathematics, which has been made available here with permission. Original article: http://dx.doi.org/10.1137/100813336. In this work we show existence of global weak solutions for a two-phase gas-liquid model where the gas phase is represented by a general isothermal pressure law, whereas the liquid is assumed to be incompressible. To make the model relevant for pipe and well-flow applications we have included ex...
Testing the Standard Model by precision measurement of the weak charges of quarks
Energy Technology Data Exchange (ETDEWEB)
Ross Young; Roger Carlini; Anthony Thomas; Julie Roche
2007-05-01
In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low-energy. The precision of this new result, combined with earlier atomic parity-violation measurements, limits the magnitude of possible contributions from physics beyond the Standard Model - setting a model-independent, lower-bound on the scale of new physics at ~1 TeV.
Weak Interaction Models with New Quarks and Right-handed Currents
Wilczek, F. A.; Zee, A.; Kingsley, R. L.; Treiman, S. B.
1975-06-01
We discuss various weak interaction issues for a general class of models within the SU(2) x U(1) gauge theory framework, with special emphasis on the effects of right-handed, charged currents and of quarks bearing new quantum numbers. In particular we consider the restrictions on model building which are imposed by the small K_L - K_S mass difference and by the ΔI = 1/2 rule; and we classify various possibilities for neutral current interactions and, in the case of heavy mesons with new quantum numbers, various possibilities for mixing effects analogous to K_L - K_S mixing.
García-Morales, Vladimir; Manzanares, José A.; Mafe, Salvador
2017-04-01
We present a weakly coupled map lattice model for patterning that explores the effects exerted by weakening the local dynamic rules on model biological and artificial networks composed of two-state building blocks (cells). To this end, we use two cellular automata models based on (i) a smooth majority rule (model I) and (ii) a set of rules similar to those of Conway's Game of Life (model II). The normal and abnormal cell states evolve according to local rules that are modulated by a parameter κ . This parameter quantifies the effective weakening of the prescribed rules due to the limited coupling of each cell to its neighborhood and can be experimentally controlled by appropriate external agents. The emergent spatiotemporal maps of single-cell states should be of significance for positional information processes as well as for intercellular communication in tumorigenesis, where the collective normalization of abnormal single-cell states by a predominantly normal neighborhood may be crucial.
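A minimal sketch of the weakened-rule idea behind model I follows. The exact rules in the paper differ; here, as an assumed stand-in, each two-state cell on a 1D ring feels its neighbors with weight 1 − κ and its own state with weight κ, so a large κ lets an abnormal cell resist normalization by a predominantly normal neighborhood:

```python
def step(cells, kappa):
    """One synchronous update of a 1D ring of two-state cells (0 = normal,
    1 = abnormal) under a weakened majority rule: each cell feels the mean of
    its two neighbors with weight (1 - kappa) and its own state with weight
    kappa, then thresholds the resulting field at 1/2 (ties keep the state)."""
    n = len(cells)
    out = []
    for i, s in enumerate(cells):
        neighbor_mean = (cells[(i - 1) % n] + cells[(i + 1) % n]) / 2.0
        field = (1.0 - kappa) * neighbor_mean + kappa * s
        out.append(1 if field > 0.5 else 0 if field < 0.5 else s)
    return out

ring = [0] * 11
ring[5] = 1                      # one abnormal cell in a normal neighborhood
strong = step(ring, kappa=0.2)   # strong coupling: the neighborhood normalizes it
weak = step(ring, kappa=0.8)     # weak coupling: the abnormal state persists
```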
Directory of Open Access Journals (Sweden)
Xu Li
2015-06-01
Full Text Available A Ni-based superalloy CMSX-6 was directionally solidified at various drawing speeds (5–20 μm·s⁻¹) and diameters (4 mm, 12 mm) under a 0.5 T weak transverse magnetic field. The results show that the application of a weak transverse magnetic field significantly modified the solidification microstructure. It was found that if the drawing speed was lower than 10 μm·s⁻¹, the magnetic field caused extensive macro-segregation in the mushy zone, and a change in the mushy zone length. The magnetic field significantly decreases the size of γ’ and the content of γ-γ’ eutectic. The formation of macro-segregation under a weak magnetic field was attributed to the interdendritic solute transport driven by the thermoelectric magnetic convection (TEMC). The γ’ phase refinement could be attributed to a decrease in nucleation activation energy owing to the magnetic field during solid phase transformation. The change of element segregation is responsible for the content decrease of γ-γ’ eutectic.
Higgs production via weak boson fusion in the standard model and the MSSM
International Nuclear Information System (INIS)
Figy, Terrance; Palmer, Sophy
2010-12-01
Weak boson fusion is expected to be an important Higgs production channel at the LHC. Complete one-loop results for weak boson fusion in the Standard Model have been obtained by calculating the full virtual electroweak corrections and photon radiation and implementing these results into the public Monte Carlo program VBFNLO (which includes the NLO QCD corrections). Furthermore the dominant supersymmetric one-loop corrections to neutral Higgs production, in the general case where the MSSM includes complex phases, have been calculated. These results have been combined with all one-loop corrections of Standard Model type and with the propagator-type corrections from the Higgs sector of the MSSM up to the two-loop level. Within the Standard Model the electroweak corrections are found to be as important as the QCD corrections after the application of appropriate cuts. The corrections yield a shift in the cross section of order 5% for a Higgs of mass 100-200 GeV, confirming the result obtained previously in the literature. For the production of a light Higgs boson in the MSSM the Standard Model result is recovered in the decoupling limit, while the loop contributions from superpartners to the production of neutral MSSM Higgs bosons can give rise to corrections in excess of 10% away from the decoupling region. (orig.)
Angular Structure of Jet Quenching Within a Hybrid Strong/Weak Coupling Model
Casalderrey-Solana, Jorge; Milhano, Guilherme; Pablos, Daniel; Rajagopal, Krishna
2017-01-01
Within the context of a hybrid strong/weak coupling model of jet quenching, we study the modification of the angular distribution of the energy within jets in heavy ion collisions, as partons within jet showers lose energy and get kicked as they traverse the strongly coupled plasma produced in the collision. To describe the dynamics transverse to the jet axis, we add the effects of transverse momentum broadening into our hybrid construction, introducing a parameter $K \equiv \hat q/T^3$ that governs its magnitude. We show that, because of the quenching of the energy of partons within a jet, even when $K$ ...
Enhanced LVRT Control Strategy for DFIG-Based WECS in Weak Grid
DEFF Research Database (Denmark)
Abulanwar, Elsayed; Chen, Zhe; Iov, Florin
2013-01-01
An enhanced coordinated low voltage ride-through (LVRT) control strategy for a Doubly-fed Induction Generator (DFIG)-based wind energy conversion system (WECS) connected to a weak grid is presented in this paper. Compliance with the grid code commitments is also considered. A proposed decoupled ... protection scheme. Furthermore, additional compensation terms are incorporated into the traditional GSC and rotor side converter (RSC) controllers to effectively suppress rotor as well as stator currents and meanwhile regulate the rotor speed. A diverse set of voltage excursions is conducted to evaluate ...
General analytical procedure for determination of acidity parameters of weak acids and bases.
Pilarski, Bogusław; Kaliszan, Roman; Wyrzykowski, Dariusz; Młodzianowski, Janusz; Balińska, Agata
2015-01-01
The paper presents a new convenient, inexpensive, and reagent-saving general methodology for the determination of pKa values for the components of a mixture of weak organic acids and bases of diverse chemical classes in water solution, without the need to separate the individual analytes. The data obtained from simple pH-metric microtitrations are numerically processed into reliable pKa values for each component of the mixture. Excellent agreement has been obtained between the determined pKa values and the reference literature data for the compounds studied.
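The pH-metric approach rests on the charge balance of the titrated mixture. A small sketch (a hypothetical single-acid example, not the authors' numerical procedure) solves that balance for pH by bisection; at half-neutralization the pH lands near the pKa, as the Henderson-Hasselbalch relation predicts:

```python
def mixture_ph(acids, na_conc, kw=1e-14):
    """pH of a mixture of monoprotic weak acids titrated with strong base,
    from the charge balance [Na+] + [H+] = [OH-] + sum_i [A_i^-], with
    [A_i^-] = C_i * Ka_i / (Ka_i + [H+]); acids is a list of (C_i, pKa_i)."""
    def imbalance(ph):
        h = 10.0 ** (-ph)
        anions = sum(c * 10.0 ** (-pka) / (10.0 ** (-pka) + h)
                     for c, pka in acids)
        return na_conc + h - kw / h - anions
    lo, hi = 0.0, 14.0        # imbalance is positive at pH 0, negative at pH 14
    for _ in range(60):       # bisection on pH
        mid = (lo + hi) / 2.0
        if imbalance(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical titration point: 0.1 M acid with pKa 4.76, half-neutralized (0.05 M Na+).
ph = mixture_ph([(0.1, 4.76)], na_conc=0.05)   # lands close to the pKa
```

Fitting the pKa values of several components at once amounts to adjusting the (C_i, pKa_i) list until curves like this one reproduce the measured microtitration data.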
Higgs Production via Weak Boson Fusion in the Standard Model and the MSSM
Figy, Terrance; Weiglein, Georg
2012-01-01
Weak boson fusion is expected to be an important Higgs production channel at the LHC. Complete one-loop results for weak boson fusion in the Standard Model have been obtained by calculating the full virtual electroweak corrections and photon radiation and implementing these results into the public Monte Carlo program VBFNLO which includes the NLO QCD corrections. Furthermore the dominant supersymmetric one-loop corrections to neutral Higgs production, in the general case where the MSSM includes complex phases, have been calculated. These results have been combined with all one-loop corrections of Standard Model type and with the propagator-type corrections from the Higgs sector of the MSSM up to the two-loop level. Within the Standard Model the electroweak corrections are found to be as important as the QCD corrections after the application of appropriate cuts. The corrections yield a shift in the cross section of order 5% for a Higgs of mass 100-200 GeV, confirming the result obtained previously in the liter...
Axtell, Jonathan C; Kirlikovali, Kent O; Djurovich, Peter I; Jung, Dahee; Nguyen, Vinh T; Munekiyo, Brian; Royappa, A Timothy; Rheingold, Arnold L; Spokoyny, Alexander M
2016-12-07
We report the development of a new class of phosphorescent zwitterionic bis(heteroleptic) Ir(III) compounds containing pyridyl ligands with weakly coordinating nido-carboranyl substituents. Treatment of phenylpyridine-based Ir(III) precursors with C-substituted ortho-carboranylpyridines in 2-ethoxyethanol results in a facile carborane deboronation and the formation of robust and highly luminescent metal complexes. The resulting nido-carboranyl fragments associate with the cationic Ir(III) center through primarily electrostatic interactions. These compounds phosphoresce at blue wavelengths (450-470 nm) both in a poly(methyl methacrylate) (PMMA) matrix and in solution at 77 K. These complexes display structural stability at temperatures beyond 300 °C and quantum yields greater than 40%. Importantly, the observed quantum yields correspond to a dramatic 10-fold enhancement over the previously reported Ir(III) congeners featuring carboranyl-containing ligands in which the boron cluster is covalently attached to the metal. Ultimately, this work suggests that the use of a ligand framework containing a weakly coordinating anionic component can provide a new avenue for designing efficient Ir(III)-based phosphorescent emitters.
Modified Standard Penetration Test–based Drilled Shaft Design Method for Weak Rocks (Phase 2 Study)
2017-12-15
In this project, Illinois-specific design procedures were developed for drilled shafts founded in weak shale or rock. In particular, a modified standard penetration test was developed and verified to characterize the in situ condition of weak shales ...
A collisional-radiative model for low-pressure weakly magnetized Ar plasmas
Zhu, Xi-Ming; Tsankov, Tsanko; Czarnetzki, Uwe; Marchuk, Oleksandr
2016-09-01
Collisional-radiative (CR) models are widely investigated in plasma physics for describing the kinetics of reactive species and for optical emission spectroscopy. This work reports a new Ar CR model used in low-pressure (0.01-10 Pa) weakly magnetized (Tesla) plasmas, including ECR, helicon, and NLD discharges. In this model 108 realistic levels are individually studied, i.e. the 51 lowest levels of the Ar atom and the 57 lowest levels of the Ar ion. We abandon the concept of an "effective level" usually adopted in previous models for glow discharges. Only in this way can the model correctly predict the non-equilibrium population distribution of close energy levels. In addition to studying atomic metastable and radiative levels, this model describes the kinetic processes of ionic metastable and radiative levels in detail for the first time. This is important for the investigation of plasma-surface interaction and for optical diagnostics using atomic and ionic line ratios. This model could also be used for studying Ar impurities in tokamaks and astrophysical plasmas.
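The balance underlying any CR model can be illustrated with a two-level caricature (the paper's model tracks 108 levels): electron-impact excitation feeds an excited level that empties by radiative decay plus collisional de-excitation. All numbers below are illustrative, not Ar rate data:

```python
def excited_fraction(ne, k_exc, k_deexc, a21):
    """Steady-state population ratio n2/n1 of a two-level atom, where
    electron-impact excitation (ne * k_exc) is balanced by radiative decay
    (a21) plus collisional de-excitation (ne * k_deexc)."""
    return ne * k_exc / (a21 + ne * k_deexc)

# Illustrative numbers only: rate coefficients in cm^3/s, A-coefficient in 1/s.
ratio_low_ne = excited_fraction(ne=1e10, k_exc=1e-9, k_deexc=1e-8, a21=1e8)
ratio_high_ne = excited_fraction(ne=1e14, k_exc=1e-9, k_deexc=1e-8, a21=1e8)
# At low ne the ratio grows linearly with ne (corona regime); at high ne the
# collisional term competes with a21 and the growth becomes sub-linear.
```

Line-ratio diagnostics exploit exactly this density dependence: ratios of lines from levels with different a21 and rate coefficients respond differently to ne and Te.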
Directory of Open Access Journals (Sweden)
Francesco Tornabene
2017-07-01
Full Text Available The authors are presenting a novel formulation based on the Differential Quadrature (DQ) method, which is used to approximate derivatives and integrals. The resulting scheme has been termed strong and weak form finite elements (SFEM or WFEM), according to the numerical scheme employed in the computation. Such numerical methods are applied to solve some structural problems related to the mechanical behavior of plates and shells, made of isotropic or composite materials. The main differences between these two approaches rely on the initial formulation – which is strong or weak (variational) – and the implementation of the boundary conditions, which for the former include the continuity of stresses and displacements, whereas the latter can consider the continuity of the displacements or both. The two methodologies also consider a mapping technique to transform an element of general shape described in Cartesian coordinates into the same element in the computational space. Such a technique can be implemented by employing the classic Lagrangian-shaped elements with a fixed number of nodes along the element edges or blending functions which allow an “exact mapping” of the element. In particular, the authors are employing NURBS (Non-Uniform Rational B-Splines) for such nonlinear mapping in order to use the “exact” shape of CAD designs.
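The DQ approximation referred to above replaces a derivative at each grid point by a weighted sum of nodal function values; the first-derivative weights follow from differentiating the Lagrange interpolants (Shu's explicit formula). A short sketch on an arbitrary grid, exact for polynomials up to degree n − 1:

```python
def dq_weights(x):
    """First-order differential quadrature weight matrix a[i][j] = L_j'(x_i),
    from Shu's explicit Lagrange-interpolation formula: the derivative at
    node i is approximated by sum_j a[i][j] * f(x_j)."""
    n = len(x)
    p = [1.0] * n                        # p[i] = prod_{k != i} (x_i - x_k)
    for i in range(n):
        for k in range(n):
            if k != i:
                p[i] *= x[i] - x[k]
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i][j] = p[i] / ((x[i] - x[j]) * p[j])
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)  # row sums vanish
    return a

nodes = [0.0, 0.3, 0.7, 1.0]
A = dq_weights(nodes)
f = [xi ** 2 for xi in nodes]
df = [sum(A[i][j] * f[j] for j in range(4)) for i in range(4)]  # exact for x^2: 2x
```

In the SFEM/WFEM setting these weight matrices discretize the governing equations (strong form) or the variational integrals (weak form) element by element.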
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency are used to derive a MLE discriminator function. The optimal value of the cost function is searched by an efficient Levenberg-Marquardt (LM) method iteratively. Its performance including Cramér-Rao bound (CRB), dynamic characteristics and computation burden are analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations both in pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and conventional method is designed to achieve the optimal performance both in weak and strong signal circumstances.
Rodionova, D. O.; Voronyuk, I. V.; Eliseeva, T. V.
2016-07-01
Features of the sorption of substituted aromatic aldehydes by a weak-base anion exchanger under equilibrium conditions are investigated using vanillin and ethylvanillin as examples. Analysis of the sorption isotherms of carbonyl compounds at different temperatures allows us to calculate the equilibrium characteristics of their sorption and assess the entropy and enthalpy contributions to the energy of the process. Hydration characteristics of the macroporous weak-base anion exchanger before and after the sorption of aromatic aldehydes are compared.
Beyond the Standard Model: The Weak Scale, Neutrino Mass, and the Dark Sector
International Nuclear Information System (INIS)
Weiner, Neal
2010-01-01
The goal of this proposal was to advance theoretical studies into questions of collider physics at the weak scale, models and signals of dark matter, and connections between neutrino mass and dark energy. The project was a significant success, with a number of developments well beyond what could have been anticipated at the outset. A total of 35 published papers and preprints were produced, with new ideas and signals for LHC physics and dark matter experiments, in particular. A number of new ideas have been found on the possible indirect signals of models of dark matter which relate to the INTEGRAL signal of astrophysical positron production, high energy positrons seen at PAMELA and Fermi, studies into anomalous gamma rays at Fermi, collider signatures of sneutrino dark matter, scenarios of Higgs physics arising in SUSY models, the implications of galaxy cluster surveys for photon-axion conversion models, previously unconsidered collider phenomenology in the form of 'lepton jets' and a very significant result for flavor physics in supersymmetric theories. Progress continues on all fronts, including development of models with dramatic implications for direct dark matter searches, dynamics of dark matter with various excited states, flavor physics, and consequences of modified missing energy signals for collider searches at the LHC.
Impedance-Matched, Double-Zero Optical Metamaterials Based on Weakly Resonant Metal Oxide Nanowires
Directory of Open Access Journals (Sweden)
Diego R. Abujetas
2018-03-01
Full Text Available Artificial optical metamaterials with a zero index of refraction hold promise for many diverse phenomena and applications, and can be achieved with vacuum (or related) surface impedance and materials in the optical domain. Here, we propose simple metal-oxide nanorods as meta-atoms on the basis of an effective medium approach, exploiting their weakly overlapping (electric/magnetic) resonances. We thus studied the optical properties of TiO2 nanowire arrays with a high filling fraction through their photonic band structure, which exhibits a double-degeneracy point without a band gap at the center of the Brillouin zone. Various configurations are considered that reveal their performance over a reasonable range of incident wave vectors as impedance-matched, double-zero, bulk (low-loss) metamaterials.
Negative differential mobility of weakly driven particles in models of glass formers
Energy Technology Data Exchange (ETDEWEB)
Jack, Robert L.; Kelsey, David; Garrahan, Juan P.; Chandler, David
2008-04-01
We study the response of probe particles to weak constant driving in kinetically constrained models of glassy systems, and show that the probe's response can be non-monotonic and give rise to negative differential mobility: increasing the applied force can reduce the probe's drift velocity in the force direction. Other significant non-linear effects are also demonstrated, such as the enhancement with increasing force of the probe's fluctuations away from the average path, a phenomenon known in other contexts as giant diffusivity. We show that these results can be explained analytically by a continuous-time random walk approximation where there is decoupling between persistence and exchange times for local displacements of the probe. This decoupling is due to dynamic heterogeneity in the glassy system, which also leads to bimodal distributions of probe particle displacements. We discuss the relevance of our results to experiments.
Escalante, George
2017-05-01
Weak Value Measurements (WVMs) with pre- and post-selected quantum mechanical ensembles were proposed by Aharonov, Albert, and Vaidman in 1988 and have found numerous applications in both theoretical and applied physics. In the field of precision metrology, WVM techniques have been demonstrated and proven valuable as a means to shift, amplify, and detect signals and to make precise measurements of small effects in both quantum and classical systems, including particle spin, the Spin-Hall effect of light, optical beam deflections, frequency shifts, field gradients, and many others. In principle, WVM amplification techniques are also possible in radar and could be a valuable tool for precision measurements. However, relatively limited research has been done in this area. This article presents a quantum-inspired model of radar range and range-rate measurements of arbitrary strength, including standard and pre- and post-selected measurements. The model is used to extend WVM amplification theory to radar, with the receive filter performing the post-selection role. It is shown that the description of range and range-rate measurements based on the quantum-mechanical measurement model and formalism produces the same results as the conventional approach used in radar based on signal processing and filtering of the reflected signal at the radar receiver. Numerical simulation results using simple point-scatterer configurations are presented, applying the quantum-inspired model of radar range and range-rate measurements in the weak measurement regime. Potential applications and benefits of the quantum-inspired approach to radar measurements are presented, including improved range and Doppler measurement resolution.
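The amplification idea behind WVMs can be illustrated with the textbook weak value A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩, which can far exceed the eigenvalue range when the pre- and post-selected states are nearly orthogonal. A minimal sketch for σ_z in a two-level system (angles are hypothetical, chosen only to show the effect):

```python
import math

def weak_value(pre, post, A):
    """Weak value A_w = <post|A|pre> / <post|pre> for a two-level system.
    States are 2-component vectors (real here for simplicity); A is 2x2."""
    Apre = [A[0][0] * pre[0] + A[0][1] * pre[1],
            A[1][0] * pre[0] + A[1][1] * pre[1]]
    num = post[0] * Apre[0] + post[1] * Apre[1]
    den = post[0] * pre[0] + post[1] * pre[1]
    return num / den

sigma_z = [[1.0, 0.0], [0.0, -1.0]]
eps = 0.05                                   # near-orthogonal post-selection
pre = [math.cos(math.pi / 4), math.sin(math.pi / 4)]
post = [math.cos(3 * math.pi / 4 - eps), math.sin(3 * math.pi / 4 - eps)]
Aw = weak_value(pre, post, sigma_z)
# |Aw| = cot(eps) ~ 20, far outside sigma_z's eigenvalue range [-1, 1]
```

As eps shrinks, the weak value grows without bound, which is the amplification exploited in precision metrology.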
Gaume, Johan; van Herwijnen, Alec; Chambon, Guillaume; Schweizer, Jürg
2015-04-01
Dry-snow slab avalanches are generally caused by a sequence of fracture processes including (1) failure initiation in a weak snow layer underlying a cohesive slab, (2) crack propagation within the weak layer and (3) slab tensile failure leading to its detachment. During the past decades, theoretical and experimental work has gradually led to a better understanding of the fracture process in snow involving the collapse of the structure in the weak layer during fracture. This now allows us to better model failure initiation and the onset of crack propagation, i.e. to estimate the critical length required for crack propagation. However, the most complete model to date, namely the anticrack model, is based on fracture mechanics and is therefore not applicable to avalanche forecasting procedures which assess snowpack stability in terms of stresses and strength. Furthermore, the anticrack model requires knowledge of the specific fracture energy of the weak layer, which is very difficult to evaluate in practice and very sensitive to the experimental method used. To overcome this limitation, a new and simple analytical model was developed to evaluate the critical length as a function of the mechanical properties of the slab, the strength of the weak layer, and the collapse height. This model was inferred from discrete element simulations of the propagation saw test (PST), which reproduce the high porosity, and thus the collapse, of weak snow layers. The analytical model showed very good agreement with PST field data and could thus be used in practice to refine stability indices.
Wei, Bing; Li, Linqian; Yang, Qian; Ge, Debiao
Based on high-order hierarchical basis functions and the idea of the shift operator, a shift-operator discontinuous Galerkin time-domain (SO-DGTD) technique for weakly ionized dusty-plasma electromagnetic problems is proposed. Lagrange interpolation is used to transform the metallic blunt-cone aircraft with a weakly ionized dusty-plasma sheath from the geometric model built in COMSOL to an electromagnetic computational model. For the two-dimensional transverse magnetic (TM) case, electromagnetic wave propagation in the weakly ionized dusty-plasma sheath is calculated with the SO-DGTD technique. The influence of dust particle concentration and dust radius on radio-wave transmission is then analyzed, and the propagation characteristics of radio waves passing through the sheath are compared as flight speed and altitude change.
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks and have pushed forward the success of dictionary learning therein. Recent studies on dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share many visual similarities) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models which can effectively characterize these subtle differences. However, labeled data objects are often difficult to obtain, making it difficult to learn a monolithic dictionary that is discriminative enough. To address the above limitations, in this paper we propose a weakly-supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries is jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network
Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien
2017-03-01
With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices' non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators and does not require fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated with nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
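The synchronization that weakly coupled oscillator networks exploit can be sketched with the classic Kuramoto mean-field model (a generic simplification; the paper's specific nano-oscillator dynamics are not reproduced here). The order parameter r in [0, 1] measures phase coherence across the network:

```python
import math, cmath, random

def simulate(n=20, K=0.5, steps=4000, dt=0.01, seed=1):
    """Euler integration of a globally coupled Kuramoto network of n
    oscillators with coupling K; returns the final order parameter r."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [1.0 + 0.01 * rng.uniform(-1, 1) for _ in range(n)]  # near-identical
    for _ in range(steps):
        mean = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(mean), cmath.phase(mean)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

r_coupled = simulate(K=0.5)     # weak coupling: phases lock, r approaches 1
r_uncoupled = simulate(K=0.0)   # no coupling: phases stay spread out
```

Even a modest coupling K locks the phases, which is the collective state such architectures read out for pattern recognition.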
Modeling the adsorption of weak organic acids on goethite : the ligand and charge distribution model
Filius, J.D.
2001-01-01
A detailed study is presented in which the CD-MUSIC approach is extended into a new modeling framework that can describe the binding of large organic molecules by metal (hydr)oxides, taking the full speciation of the adsorbed molecule into account. Batch equilibration experiments were
Energy Technology Data Exchange (ETDEWEB)
Newman, J.; Reed, L.W.
1980-01-01
Guidelines for selecting weak-base versus strong-base anion-exchange resins for the recovery of chromate from cooling tower blowdown are given, together with actual operating data on large-scale industrial systems based on strong-base anion-exchange resins, data from a similar pilot system based on a weak-base anion-exchange resin, and the chemical costs for operating both systems for a cooling tower blowdown containing 2500 ppm total dissolved solids and 20 ppm chromate.
Testing a phenomenologically extended DGP model with upcoming weak lensing surveys
Energy Technology Data Exchange (ETDEWEB)
Camera, Stefano; Diaferio, Antonaldo [Dipartimento di Fisica Generale 'A. Avogadro', Università di Torino, via P. Giuria 1, 10125 Torino (Italy)]; Cardone, Vincenzo F., E-mail: camera@ph.unito.it, E-mail: diaferio@ph.unito.it, E-mail: winnyenodrac@gmail.com [Dipartimento di Scienze e Tecnologie per l'Ambiente e il Territorio, Università degli Studi del Molise, Contrada Fonte Lappone, 86090 Pesche (Italy)]
2011-01-01
A phenomenological extension of the well-known brane-world cosmology of Dvali, Gabadadze and Porrati (eDGP) has recently been proposed. In this model, a cosmological-constant-like term is explicitly present as a non-vanishing tension σ on the brane, and an extra parameter α tunes the cross-over scale r_c, the scale at which higher-dimensional gravity effects become non-negligible. Since the Hubble parameter in this cosmology reproduces the same ΛCDM expansion history, we study how upcoming weak lensing surveys, such as Euclid and DES (Dark Energy Survey), can confirm or rule out this class of models. We perform Monte Carlo Markov Chain simulations to determine the parameters of the model, using Type Ia Supernovæ, H(z) data, Gamma Ray Bursts and Baryon Acoustic Oscillations. We also fit the power spectrum of the temperature anisotropies of the Cosmic Microwave Background to obtain the correct normalisation for the density perturbation power spectrum. Then, we compute the matter and the cosmic shear power spectra, both in the linear and non-linear régimes. The latter is calculated with the two different approaches of Hu and Sawicki (2007) (HS) and Khoury and Wyman (2009) (KW). With the eDGP parameters coming from the Markov Chains, KW reproduces the ΛCDM matter power spectrum at both linear and non-linear scales and the ΛCDM and eDGP shear signals are degenerate. This result does not hold with the HS prescription. Indeed, Euclid can distinguish the eDGP model from ΛCDM because their expected power spectra roughly differ by the 3σ uncertainty in the angular scale range 700 ≲ ℓ ≲ 3000; on the contrary, the two models differ at most by the 1σ uncertainty over the range 500 ≲ ℓ ≲ 3000 in the DES experiment and they are virtually indistinguishable.
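The parameter estimation described above relies on Monte Carlo Markov Chain sampling. A generic random-walk Metropolis sketch, with a toy one-dimensional Gaussian posterior standing in for the eDGP parameter α (this is not the paper's actual likelihood, which combines supernovae, H(z), GRB and BAO data):

```python
import math, random

def metropolis(log_post, x0, step, n, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-density log_post."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):  # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior for a parameter like alpha: Gaussian, mean 0.3, sigma 0.05
log_post = lambda a: -0.5 * ((a - 0.3) / 0.05) ** 2
chain = metropolis(log_post, x0=0.0, step=0.05, n=20000)
mean_a = sum(chain[5000:]) / len(chain[5000:])   # posterior mean after burn-in
```

Discarding the first quarter of the chain as burn-in recovers the posterior mean; real cosmological fits use multiple chains and convergence diagnostics on top of this basic scheme.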
JESS at thirty: Strengths, weaknesses and future needs in the modelling of chemical speciation
International Nuclear Information System (INIS)
May, Peter M.
2015-01-01
Highlights: • Powerful chemical speciation code and database facility described. • Thermodynamic data harmonisation and automatic consistency-checking implemented. • Metal–ligand concentrations and solubilities in seawater and biofluids calculated. • Metastable equilibria included in aquatic chemistry modelling. • Limitations of ion association frameworks caused by specific-ion interactions. - Abstract: The current status of the software package JESS (Joint Expert Speciation System), which has been developed over the last 30 years, is described. Chemical speciation models of seawater and of metal-ion complexation in human blood plasma are used as large equilibrium systems to explore the present capabilities of the code and database. Strengths of JESS are considered to be (a) the power and flexibility of its command-driven programs, (b) the size, generality and openness of its reaction database, (c) its automatic facility to achieve thermodynamic consistency, and (d) its ability to partition chemical reactions to model kinetic constraints. A special feature of JESS is its ability automatically to generate a complete, stand-alone FORTRAN program for any particular chemical equilibrium model that has been developed. Weaknesses of JESS include the lack of a graphical user interface, the resulting effort required in familiarisation, and certain limitations regarding pressure and temperature corrections in concentrated solutions. However, the most troublesome issues – common to all ‘ion-association’ frameworks – are due to inadequacies in the thermodynamic data available from the literature and to persistent deficiencies in the fundamental theory of concentrated electrolyte solutions. Accordingly, work on JESS has commenced to construct a global parameterisation facility using both reaction data and the physicochemical properties of strong electrolytes in aqueous solution (including solubilities) intended to improve model function testing and to
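At its core, any speciation code of this kind solves coupled mass-action and balance equations. A minimal, self-contained illustration (not JESS itself, which handles thousands of reactions and activity corrections) solves the charge balance of a weak monoprotic acid by log-space bisection:

```python
import math

def h_plus(Ka, C, Kw=1e-14):
    """[H+] of a weak monoprotic acid HA (total concentration C mol/L,
    acidity constant Ka) by bisection on the charge balance H = OH + A-."""
    def residual(h):
        oh = Kw / h
        a = Ka * C / (Ka + h)     # [A-] from mass balance and Ka
        return h - oh - a
    lo, hi = 1e-14, 1.0
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # bisect in log space
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)

pH = -math.log10(h_plus(Ka=1.8e-5, C=0.1))   # acetic-acid-like, 0.1 M: ~2.88
```

The residual is monotonic in [H+], so bisection always converges; production codes generalise this to Newton iterations on many simultaneous balances.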
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
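The statistical-fluctuation step can be illustrated with the standard Hoeffding bound for Bernoulli (binomial) sampling; the paper's finite-key analysis is more refined, so the numbers below are only indicative:

```python
import math

def hoeffding_bound(n, eps):
    """P(|p_hat - p| >= eps) <= 2 exp(-2 n eps^2) for n Bernoulli samples."""
    return 2.0 * math.exp(-2.0 * n * eps * eps)

def required_deviation(n, delta):
    """Smallest deviation eps guaranteed except with probability delta."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Finite-key flavour: bounding the error-rate estimate from n sampled rounds
n = 10**6
eps = required_deviation(n, delta=1e-10)   # ~3.4e-3 fluctuation allowance
```

The allowance eps shrinks as 1/sqrt(n), which is why finite data sizes reduce the final key rate and why tighter tail bounds translate directly into higher rates.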
Slade, Gordon; Tomberg, Alexandre
2016-03-01
We extend and apply a rigorous renormalisation group method to study critical correlation functions, on the 4-dimensional lattice Z^4, for the weakly coupled n-component |φ|^4 spin model for all n ≥ 1, and for the continuous-time weakly self-avoiding walk. For the |φ|^4 model, we prove that the critical two-point function has |x|^{-2} (Gaussian) decay asymptotically, for n ≥ 1. We also determine the asymptotic decay of the critical correlations of the squares of components of φ, including the logarithmic corrections to Gaussian scaling, for n ≥ 1. The above extends previously known results for n = 1 to all n ≥ 1, and also observes new phenomena for n > 1, all with a new method of proof. For the continuous-time weakly self-avoiding walk, we determine the decay of the critical generating function for the "watermelon" network consisting of p weakly mutually- and self-avoiding walks, for all p ≥ 1, including the logarithmic corrections. This extends a previously known result for p = 1, for which there is no logarithmic correction, to a much more general setting. In addition, for both models, we study the approach to the critical point and prove the existence of logarithmic corrections to scaling for certain correlation functions. Our method gives a rigorous analysis of the weakly self-avoiding walk as the n = 0 case of the |φ|^4 model, and provides a unified treatment of both models, and of all the above results.
Theory and method for weak signal detection in engineering practice based on stochastic resonance
Zhao, Wenli; Wang, Linze; Fan, Jian
2017-11-01
In this paper, the Kramers rate is derived using the Fokker-Planck (FP) equation under the adiabatic approximation (the amplitude and frequency of the signal to be detected are small, ≪ 1), and the signal-to-noise ratio (SNR) of the bistable system is obtained by means of the Fourier transform and the power spectrum. This derivation of the Kramers rate and the SNR is more concise than previous methods and easier to follow. The SNR of the bistable system shows that stochastic resonance (SR) can be used to realize energy transfer from noise to a periodic signal under the adiabatic approximation condition; SR can therefore enhance the SNR of the output signal. A signal modulation technique is employed to transform large-frequency components into a small-parameter signal that meets the adiabatic approximation requirement, and we have designed the corresponding modulator. Simulation results show that the modulation method can generate SR in a bistable system and detect weak signals with large parameters against a strong noise background.
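The bistable system underlying this analysis is commonly written as dx/dt = x − x³ + A·sin(ωt) + √(2D)·ξ(t). A minimal Euler-Maruyama simulation (all parameter values hypothetical) shows noise-induced inter-well hopping that correlates with the weak periodic drive:

```python
import math, random

def bistable_sr(A=0.3, omega=0.1, D=0.2, dt=0.01, steps=200_000, seed=3):
    """Euler-Maruyama integration of dx = (x - x^3 + A sin(omega t)) dt
    + sqrt(2 D) dW. Returns (corr, hops): the time-averaged correlation
    of x(t) with the drive, and the number of inter-well transitions."""
    rng = random.Random(seed)
    sig = math.sqrt(2.0 * D * dt)
    x, acc, hops = -1.0, 0.0, 0
    for k in range(steps):
        s = math.sin(omega * k * dt)
        x_new = x + (x - x ** 3 + A * s) * dt + sig * rng.gauss(0.0, 1.0)
        if x_new * x < 0:          # sign change: hop over the barrier
            hops += 1
        acc += x_new * s
        x = x_new
    return acc / steps, hops

corr, hops = bistable_sr()   # hopping partially synchronised with the drive
```

A positive correlation with a subthreshold drive (A below the static switching threshold of about 0.38) is the signature of stochastic resonance: the noise, rather than the signal alone, carries the system over the barrier in step with the forcing.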
Directory of Open Access Journals (Sweden)
2011-06-01
Full Text Available This study aims to evidence the formation of stable polyelectrolyte complex particles as colloidal dispersions using some weak polyelectrolytes: chitosan and poly(allylamine hydrochloride) as polycations, and poly(acrylic acid) (PAA) and poly(2-acrylamido-2-methylpropanesulfonic acid-co-acrylic acid) (PAMPSAA) as polyanions. Polyelectrolyte complex particles as colloidal dispersions were prepared by controlled mixing of the oppositely charged polymers at a constant addition rate. The influences of the polyelectrolyte structure and the molar ratio between ionic charges on the morphology, size, and colloidal stability of the complex particles were investigated in detail by turbidimetry, dynamic light scattering and atomic force microscopy. A strong influence of the polyanion structure on the molar ratio n−/n+ at which neutral complex particles are obtained has been noticed: it shifts from the theoretical value of 1.0, observed when PAA was used, to 0.7 for PAMPSAA-based complexes. The polyion chain characteristics influenced the size and shape of the complexes, larger particles being obtained with chitosan, for the same polyanion, and with PAMPSAA, for the same polycation.
A New Method for Weak Fault Feature Extraction Based on Improved MED
Directory of Open Access Journals (Sweden)
Junlin Li
2018-01-01
Full Text Available Because of the weak signal and strong noise involved, fault feature extraction from low-speed vibration signals has been a difficult and actively studied problem in the field of equipment fault diagnosis. The traditional minimum entropy deconvolution (MED) method has been shown to detect such fault signals. MED designs the filter coefficients through an objective-function method, and an appropriate threshold value must be set during the calculation to achieve the optimal iteration effect. It should be pointed out that an improper threshold setting forces the objective function to be recalculated, and the resulting error ultimately distorts the objective function against a strong noise background. This paper presents an improved MED-based method for extracting fault features from rolling bearing vibration signals that originate in high-noise environments. The method uses the shuffled frog leaping algorithm (SFLA) to find the set of optimal filter coefficients, thereby avoiding the error introduced by manually selecting the threshold parameter. Faulty bearings at two rotating speeds, 60 rpm and 70 rpm, were selected for verification as typical low-speed fault cases; the results show that SFLA-MED extracts more distinct bearing fault features and achieves a higher signal-to-noise ratio than the prior MED method.
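MED-style deconvolution sharpens impulsive fault signatures by choosing FIR filter taps that maximise the output kurtosis. The sketch below substitutes a simple hill-climbing random search for the SFLA optimiser (a stand-in, not the paper's algorithm) on a synthetic impulse train:

```python
import random

def kurtosis(x):
    """Normalised fourth moment; impulsive (fault-like) signals score high."""
    n = len(x)
    m = sum(x) / n
    v = sum((xi - m) ** 2 for xi in x) / n
    return sum((xi - m) ** 4 for xi in x) / n / (v * v)

def fir(h, x):
    """Convolve signal x with filter taps h (causal FIR)."""
    return [sum(h[j] * x[i - j] for j in range(len(h)) if i - j >= 0)
            for i in range(len(x))]

rng = random.Random(7)
# Synthetic signal: sparse impulses smeared by a transmission path, plus noise
impulses = [1.0 if i % 50 == 0 else 0.0 for i in range(1000)]
noisy = [s + rng.gauss(0.0, 0.2) for s in fir([0.4, 0.8, 0.6, 0.3], impulses)]

# Hill-climbing stand-in for SFLA: perturb the taps, keep improvements
h = [1.0] + [0.0] * 7                 # start from the identity filter
best = kurtosis(fir(h, noisy))
for _ in range(300):
    cand = [c + rng.gauss(0.0, 0.1) for c in h]
    k = kurtosis(fir(cand, noisy))
    if k > best:
        h, best = cand, k
```

Any population-based optimiser (SFLA included) plays the same role here: searching the tap space for the filter whose output is most impulsive, without a hand-tuned threshold.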
Limitations of Hall MHD as a model for turbulence in weakly collisional plasmas
Directory of Open Access Journals (Sweden)
G. G. Howes
2009-03-01
Full Text Available The limitations of Hall MHD as a model for turbulence in weakly collisional plasmas are explored using quantitative comparisons to Vlasov-Maxwell kinetic theory over a wide range of parameter space. The validity of Hall MHD in the cold ion limit is shown, but spurious undamped wave modes exist in Hall MHD when the ion temperature is finite. It is argued that turbulence in the dissipation range of the solar wind must be one, or a mixture, of three electromagnetic wave modes: the parallel whistler, oblique whistler, or kinetic Alfvén waves. These modes are generally well described by Hall MHD. Determining the applicability of linear kinetic damping rates in turbulent plasmas requires a suite of fluid and kinetic nonlinear numerical simulations. Contrasting fluid and kinetic simulations will also shed light on whether the presence of spurious wave modes alters the nonlinear couplings inherent in turbulence and will illuminate the turbulent dynamics and energy transfer in the regime of the characteristic ion kinetic scales.
Modeling and notation of DEA with strong and weak disposable outputs.
Kuntz, Ludwig; Sülz, Sandra
2011-12-01
Recent articles published in Health Care Management Science have described DEA applications under the assumption of strong and weak disposable outputs. Since we believe these papers include some methodological deficiencies, we aim to illustrate a revised approach.
Ottenheijm, Coen A C; Buck, Danielle; de Winter, Josine M; Ferrara, Claudia; Piroddi, Nicoletta; Tesi, Chiara; Jasper, Jeffrey R; Malik, Fady I; Meng, Hui; Stienen, Ger J M; Beggs, Alan H; Labeit, Siegfried; Poggesi, Corrado; Lawlor, Michael W; Granzier, Henk
2013-06-01
Nebulin--a giant sarcomeric protein--plays a pivotal role in skeletal muscle contractility by specifying thin filament length and function. Although mutations in the gene encoding nebulin (NEB) are a frequent cause of nemaline myopathy, the most common non-dystrophic congenital myopathy, the mechanisms by which mutations in NEB cause muscle weakness remain largely unknown. To better understand these mechanisms, we have generated a mouse model in which Neb exon 55 is deleted (Neb(ΔExon55)) to replicate a founder mutation seen frequently in patients with nemaline myopathy with Ashkenazi Jewish heritage. Neb(ΔExon55) mice are born close to Mendelian ratios, but show growth retardation after birth. Electron microscopy studies show nemaline bodies--a hallmark feature of nemaline myopathy--in muscle fibres from Neb(ΔExon55) mice. Western blotting studies with nebulin-specific antibodies reveal reduced nebulin levels in muscle from Neb(ΔExon55) mice, and immunofluorescence confocal microscopy studies with tropomodulin antibodies and phalloidin reveal that thin filament length is significantly reduced. In line with reduced thin filament length, the maximal force generating capacity of permeabilized muscle fibres and single myofibrils is reduced in Neb(ΔExon55) mice with a more pronounced reduction at longer sarcomere lengths. Finally, in Neb(ΔExon55) mice the regulation of contraction is impaired, as evidenced by marked changes in crossbridge cycling kinetics and by a reduction of the calcium sensitivity of force generation. A novel drug that facilitates calcium binding to the thin filament significantly augmented the calcium sensitivity of submaximal force to levels that exceed those observed in untreated control muscle. In conclusion, we have characterized the first nebulin-based nemaline myopathy model, which recapitulates important features of the phenotype observed in patients harbouring this particular mutation, and which has severe muscle weakness caused by
Kuo, Dave T F; Di Toro, Dominic M
2013-08-01
A model for whole-body in vivo biotransformation of neutral and weakly polar organic chemicals in fish is presented. It considers internal chemical partitioning and uses Abraham solvation parameters as reactivity descriptors. It assumes that only chemicals freely dissolved in the body fluid may bind with enzymes and subsequently undergo biotransformation reactions. Consequently, the whole-body biotransformation rate of a chemical is retarded by the extent of its distribution in different biological compartments. Using a randomly generated training set (n = 64), the biotransformation model is found to be: log(HL·φ_fish) = 2.2(±0.3)B − 2.1(±0.2)V − 0.6(±0.3) (root mean square error of prediction [RMSE] = 0.71), where HL is the whole-body biotransformation half-life in days, φ_fish is the freely dissolved fraction in body fluid, and B and V are the chemical's H-bond acceptance capacity and molecular volume. Abraham-type linear free energy equations were also developed for lipid-water (K_lipidw) and protein-water (K_protw) partition coefficients needed for the computation of φ_fish from independent determinations. These were found to be 1) log K_lipidw = 0.77E − 1.10S − 0.47A − 3.52B + 3.37V + 0.84 (in L_wat/kg_lipid; n = 248, RMSE = 0.57) and 2) log K_protw = 0.74E − 0.37S − 0.13A − 1.37B + 1.06V − 0.88 (in L_wat/kg_prot; n = 69, RMSE = 0.38), where E, S, and A quantify dispersive/polarization, dipolar, and H-bond-donating interactions, respectively. The biotransformation model performs well in the validation of HL (n = 424, RMSE = 0.71). The predicted rate constants do not exceed the transport limit due to circulatory flow. Furthermore, the model adequately captures variation in biotransformation rate between chemicals with varying log octanol-water partition coefficient, B, and V, and exhibits a high degree of independence from the choice of training chemicals. The
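The reported equations can be applied directly. In the sketch below, the freely dissolved fraction φ_fish is approximated with a simple one-compartment partitioning assumption (hypothetical lipid and protein contents, not the paper's exact scheme), and the Abraham descriptor values are likewise hypothetical:

```python
def log_klipidw(E, S, A, B, V):
    """log lipid-water partition coefficient (paper's LFER, L_wat/kg_lipid)."""
    return 0.77 * E - 1.10 * S - 0.47 * A - 3.52 * B + 3.37 * V + 0.84

def log_kprotw(E, S, A, B, V):
    """log protein-water partition coefficient (paper's LFER, L_wat/kg_prot)."""
    return 0.74 * E - 0.37 * S - 0.13 * A - 1.37 * B + 1.06 * V - 0.88

def half_life_days(E, S, A, B, V, f_lipid=0.05, f_protein=0.15):
    """Whole-body biotransformation half-life in days. phi_fish below is a
    simple one-compartment assumption with hypothetical lipid/protein
    contents; the paper computes it from independent determinations."""
    phi = 1.0 / (1.0 + f_lipid * 10 ** log_klipidw(E, S, A, B, V)
                     + f_protein * 10 ** log_kprotw(E, S, A, B, V))
    log_hl_phi = 2.2 * B - 2.1 * V - 0.6   # paper's model: log(HL * phi_fish)
    return 10 ** log_hl_phi / phi

# Hypothetical Abraham descriptors for a small neutral aromatic compound
hl = half_life_days(E=0.8, S=0.5, A=0.0, B=0.2, V=0.9)
```

Note how the retardation works: a strongly lipid-partitioning chemical has a small φ_fish, which lengthens the predicted half-life for the same B and V.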
Pavani, Sri-Kaushik; Delgado Gomez, David; Frangi, Alejandro F.
This paper proposes Gaussian weak classifiers (GWCs) for use in real-time face detection systems. GWCs are based on Haar-like features (HFs) with four rectangles (HF4s), which constitute the majority of the HFs used to train a face detector. To label an image as face or clutter (non-face), a GWC uses the responses of the two two-rectangle features (HF2s) in an HF4 to compute a Mahalanobis distance, which is then compared to a threshold to make decisions. For a fixed accuracy on the face class, GWCs can classify clutter images with more accuracy than the existing weak classifier types. Our experiments compare the accuracy and speed of face detectors built with four different weak classifier types: GWCs, Viola & Jones's, Rasolzadeh et al.'s and Mita et al.'s. On the standard MIT+CMU image database, the GWC-based face detector produced 40% fewer false positives and required 32% less time for the scanning process when compared to the detector that used Viola & Jones's weak classifiers. When compared to detectors that used Rasolzadeh et al.'s and Mita et al.'s weak classifiers, the GWC-based detector produced 11% and 9% fewer false positives. Simultaneously, it required 37% and 42% less time for the scanning process.
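The decision rule just described amounts to an elliptical threshold in the two-dimensional space of HF2 responses. A minimal sketch (all class statistics hypothetical):

```python
class GaussianWeakClassifier:
    """Mahalanobis-distance test on the responses (r1, r2) of the two
    two-rectangle features inside an HF4; the face class is modelled as
    a 2-D Gaussian with mean mu and covariance cov."""
    def __init__(self, mu, cov, threshold):
        (a, b), (c, d) = cov
        det = a * d - b * c
        self.inv = [[d / det, -b / det], [-c / det, a / det]]
        self.mu, self.threshold = mu, threshold

    def is_face(self, r1, r2):
        dx, dy = r1 - self.mu[0], r2 - self.mu[1]
        m2 = (dx * (self.inv[0][0] * dx + self.inv[0][1] * dy)
              + dy * (self.inv[1][0] * dx + self.inv[1][1] * dy))
        return m2 <= self.threshold ** 2   # inside the ellipse -> face

# Hypothetical face-class statistics for one HF4
gwc = GaussianWeakClassifier(mu=(0.2, -0.1),
                             cov=[[0.04, 0.01], [0.01, 0.09]],
                             threshold=2.0)
```

Because the ellipse adapts to the joint distribution of the two responses, a single GWC can reject clutter that a one-dimensional threshold on the combined HF4 response would accept.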
Directory of Open Access Journals (Sweden)
Jun He
2012-03-01
Full Text Available By means of nonequilibrium Green's functions and density functional theory, we have investigated the electronic transport properties of a C60-based electronic device with different intermolecular interactions. It is found that the electronic transport properties vary with the type of interaction between the two C60 molecules. A fast electrical switching behavior based on negative differential resistance is found when the two molecules are coupled by a weak π–π interaction. Compared with solid bonding, the weak interaction is found to induce resonant tunneling, which is responsible for the fast response to the applied electric field and hence the switching speed.
Wang, X. Y.; Dou, J. M.; Shen, H.; Li, J.; Yang, G. S.; Fan, R. Q.; Shen, Q.
2018-03-01
With the continuous strengthening of power grids, the network structure is becoming more and more complicated. Open, regional data modeling is used to complete the calculation of protection settings based on the local region. At the same time, a high-precision, quasi-real-time boundary fusion technique is needed to seamlessly integrate the various regions so as to constitute an integrated fault computing platform that can conduct high-accuracy, multi-mode transient stability analysis covering the whole network, handle the impact of non-single and cascading faults, and build "the first line of defense" of the power grid. The boundary fusion algorithm in this paper is an automatic algorithm based on accurate boundary coupling of the interconnected grid partitions. It takes the actual operation mode as its qualification and completes the boundary coupling of the various weakly coupled partitions in open-loop mode, improving fusion efficiency, truly reflecting the transient stability level, and effectively solving the problems of excessive data volume, difficult partition fusion, and failed fusion due to mutually exclusive conditions. This paper first introduces the basic principle of the fusion process, then presents the boundary fusion customization method through a scene description. Finally, an example is given to illustrate how the algorithm effectively implements boundary fusion after grid partitioning, and to verify its accuracy and efficiency.
Feng, Gang; Evangelisti, Luca; Caminati, Walther; Cacelli, Ivo; Carbonaro, Laura; Prampolini, Giacomo
2013-06-01
Following the investigation of the rotational spectra of three conformers (the so-called "book", "prism" and "cage") of the water hexamer, and of some other water oligomers, we report here the rotational spectrum of the tetramer of a freon molecule. The pulsed-jet Fourier transform microwave (pj-FTMW) spectrum of an isomer of the difluoromethane tetramer has been assigned. This molecular system is made of units of a relatively heavy asymmetric rotor, held together by a network of weak hydrogen bonds. The search for the rotational spectrum was guided by a high-level reference method, the CCSD(T)/CBS protocol. It is interesting to note that the rotational spectrum of the water tetramer was not observed, probably because the minimum-energy structure of this oligomer is effectively nonpolar in its ground state, or because of large tunnelling splittings. The rotational spectra of the monomer, dimer, trimer and tetramer of difluoromethane were assigned in 1952, 1999, 2007, and 2013 (present work), respectively, with a decreasing time spacing between the successive steps, which looks promising for a continuous and rapid extension of the size limits of molecular systems accessible to MW spectroscopy. C. Pérez, M. T. Muckle, D. P. Zaleski, N. A. Seifert, B. Temelso, G. C. Shields, Z. Kisiel, B. H. Pate, Science 336 (2012) 897. D. R. Lide, Jr., J. Am. Chem. Soc. 74 (1952) 3548. W. Caminati, S. Melandri, P. Moreschini, P. G. Favero, Angew. Chem. Int. Ed. 38 (1999) 2924. S. Blanco, S. Melandri, P. Ottaviani, W. Caminati, J. Am. Chem. Soc. 129 (2007) 2700.
A weak-base fibrous anion exchanger effective for rapid phosphate removal from water.
Awual, Md Rabiul; Jyo, Akinori; El-Safty, Sherif A; Tamada, Masao; Seko, Noriaki
2011-04-15
This work shows that the weak-base anion exchange fibers FVA-c and FVA-f selectively and rapidly take up phosphate from water. The chemical structure of FVA-c and FVA-f is the same, i.e., poly(vinylamine) chains grafted onto polyethylene-coated polypropylene fibers. A batch study using FVA-c clarified that the fiber prefers phosphate over chloride, nitrate and sulfate in the neutral pH region, with an equilibrium phosphate capacity of 2.45 to 6.87 mmol/g. A column study using FVA-f made it clear that its breakthrough capacities were not strongly affected by flow rates from 150 to 2000 h(-1) or by phosphate feed concentrations from 0.072 to 1.6 mM. Under these conditions, breakthrough capacities ranged from 0.84 to 1.43 mmol/g, indicating high kinetic performance. Trace concentrations of phosphate were also removed from feeds containing 0.021 and 0.035 mM phosphate at a high feed flow rate of 2500 h(-1), with breakthrough capacities of 0.676 and 0.741 mmol/g, respectively. The column study also clarified that chloride and sulfate did not strongly interfere with phosphate uptake, even at equimolar and fivefold molar levels. Phosphate adsorbed on FVA-f was quantitatively eluted with 1 M HCl, which simultaneously regenerates the fiber into its hydrochloride form for the next phosphate adsorption operation. Therefore, FVA-f can be used for a long time, even under the rigorous chemical treatment of multiple regeneration/reuse cycles, without any noticeable deterioration.
Energy Technology Data Exchange (ETDEWEB)
JAMES N. BRUNE AND ABDOLRASOOL ANOOSHEHPOOR
1998-02-23
We report results of foam-rubber modeling of the effect of a shallow weak layer on ground motion from strike-slip ruptures. Computer modeling of strong ground motion from strike-slip earthquakes has involved somewhat arbitrary assumptions about the nature of slip along the shallow part of the fault (e.g., fixing the slip to be zero along the upper 2 kilometers of the fault plane) in order to match certain strong-motion accelerograms. Most modeling studies of earthquake strong ground motion have used what is termed kinematic dislocation modeling. In kinematic modeling the time function for slip on the fault is prescribed, and the response of the layered medium is calculated. Unfortunately, there is no guarantee that the model and the prescribed slip are physically reasonable unless the true nature of the medium and its motions are known ahead of time. There is good reason to believe that in many cases faults are weak along the upper few kilometers of the fault zone and may not be able to maintain the high levels of shear strain required for high dynamic energy release during earthquakes. Physical models of faulting, as distinct from numerical or mathematical models, are guaranteed to obey static and dynamic mechanical laws. Foam-rubber modeling studies have been reported in a number of publications. The object of this paper is to present results of physical modeling using a shallow weak layer, in order to verify the physical basis for assuming a long rise time and a reduced high-frequency pulse for the slip on the shallow part of faults. It appears that a 2-kilometer-deep weak zone along strike-slip faults could indeed reduce the high-frequency energy radiated from shallow slip, and that this effect can best be represented by superimposing a small-amplitude, short-rise-time pulse at the onset of a much longer rise-time slip. A weak zone was modeled by inserting weak plastic layers a few inches in thickness into the foam-rubber model. For the 15 cm weak zone the average
High Weak Order Methods for Stochastic Differential Equations Based on Modified Equations
Abdulle, Assyr
2012-01-01
© 2012 Society for Industrial and Applied Mathematics. Inspired by recent advances in the theory of modified differential equations, we propose a new methodology for constructing numerical integrators with high weak order for the time integration of stochastic differential equations. The approach is illustrated with the construction of new methods of weak order two, in particular, semi-implicit integrators well suited for stiff (mean-square stable) stochastic problems, and implicit integrators that exactly conserve all quadratic first integrals of a stochastic dynamical system. Numerical examples confirm the theoretical results and show the versatility of our methodology.
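The weak order of a scheme refers to the convergence rate of errors in expectations E[φ(X_T)], rather than of individual sample paths. As a minimal, hypothetical sketch of how weak order is measured in practice — plain Euler-Maruyama on an Ornstein-Uhlenbeck test equation with a known mean, not the modified-equation integrators of this paper — one can halve the step size and fit the observed rate:

```python
import numpy as np

def weak_error(n_steps, n_paths=200_000, T=1.0, lam=1.0, sigma=0.5, seed=0):
    """Weak error |E[X_T] - exact| of Euler-Maruyama for dX = -lam*X dt + sigma dW, X_0 = 1."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = np.ones(n_paths)
    for _ in range(n_steps):
        x = x - lam * x * h + sigma * rng.normal(0.0, np.sqrt(h), n_paths)
    return abs(x.mean() - np.exp(-lam * T))  # E[X_T] = exp(-lam*T) exactly

# Halve the step size and estimate the observed weak order
e1, e2 = weak_error(8), weak_error(16)
order = np.log2(e1 / e2)  # close to 1 for Euler-Maruyama
```

Halving the step size roughly halves the weak error, consistent with weak order one; a weak order-two method of the kind constructed in the abstract would instead reduce the error by roughly a factor of four.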
International Nuclear Information System (INIS)
Morris, D.A.
1988-01-01
We examine contributions to the anomalous magnetic moment of the muon from weak-isosinglet squarks found in E_6 superstring models. We find that such contributions are up to 2 orders of magnitude larger than those previously calculated and correspondingly require smaller Yukawa couplings in order to maintain agreement with the measured muon anomalous magnetic moment.
Model-Independent Analysis of $B \\to \\pi K$ Decays and Bounds on the Weak Phase $\\gamma$
Neubert, M
1999-01-01
A general parametrization of the amplitudes for the rare two-body decays B -> pi K is introduced, which makes maximal use of theoretical constraints arising from flavour symmetries of the strong interactions and the structure of the low-energy effective weak Hamiltonian. With the help of this parametrization, a model-independent analysis of the branching ratios and direct CP asymmetries in the various B -> pi K decay modes is performed, and the impact of hadronic uncertainties on bounds on the weak phase gamma = arg(Vub*) is investigated.
He, M.; Sun, M.; Wijk, E. van; Wietmarschen, H. van; Wijk, R. van; Wang, Z.; Wang, M.; Hankemeier, T.; Greef, J. van der
2016-01-01
To present the possibilities of linking ultra-weak photon emission (UPE) with Chinese medicine-based diagnostic principles, we conducted a review of the Chinese literature regarding UPE with respect to a systems view of diagnostics. Data were summarized from human clinical studies and animal
Nehaniv, Chrystopher L; Rhodes, John; Egri-Nagy, Attila; Dini, Paolo; Morris, Eric Rothstein; Horváth, Gábor; Karimi, Fariba; Schreckling, Daniel; Schilstra, Maria J
2015-07-28
Interaction computing is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are to (i) identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this and (ii) use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in systems biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle and p53-mdm2 genetic regulation constructed from systems biology models have canonically associated algebraic structures (their transformation semigroups). These contain permutation groups (local substructures exhibiting symmetry) that correspond to 'pools of reversibility'. These natural subsystems are related to one another in a hierarchical manner by the notion of 'weak control'. We present natural subsystems arising from several biological examples and their weak control hierarchies in detail. Finite simple non-Abelian groups are found in biological examples and can be harnessed to realize finitary universal computation. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, interaction machines that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.
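The algebraic objects mentioned above — transformation semigroups with embedded permutation groups — can be computed by brute force for small automata. The sketch below is illustrative only (the toy generators are our own, not derived from the cited biological models): it closes two state maps on three states under composition and extracts the invertible elements, i.e. the permutation group that the abstract calls a "pool of reversibility".

```python
def compose(f, g):
    """(f ∘ g)(x) = f(g(x)); transformations of {0..n-1} stored as tuples."""
    return tuple(f[x] for x in g)

def transformation_semigroup(generators):
    """Close a generating set of transformations under composition (brute force)."""
    seen = set(generators)
    frontier = list(seen)
    while frontier:
        nxt = []
        for f in frontier:
            for g in list(seen):
                for h in (compose(f, g), compose(g, f)):
                    if h not in seen:
                        seen.add(h)
                        nxt.append(h)
        frontier = nxt
    return seen

cycle = (1, 2, 0)      # a reversible 3-cycle on states {0, 1, 2}
collapse = (0, 0, 2)   # an irreversible, information-losing map
S = transformation_semigroup([cycle, collapse])
perms = {f for f in S if len(set(f)) == 3}  # invertible elements: the 'pool of reversibility'
```

Here the permutations form the cyclic group C3 generated by the 3-cycle; every product involving the collapsing map has rank at most two and is irreversible, so the reversible "pool" sits inside a larger irreversible semigroup, mirroring the hierarchy described in the abstract.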
Mechanistic modelling of weak interlayers in flexible and semi-flexible road pavements: Part 2
CSIR Research Space (South Africa)
De Beer, Morris
2012-04-01
Full Text Available and investigate the existence of these weak layers in cemented pavement layers. In Part 2, several cases of the above conditions for different road pavement types are discussed, with field examples. Mechanistic analyses were done on a typical hot mix asphalt (HMA...
International Nuclear Information System (INIS)
Chanda, R.
1981-01-01
The theoretical and experimental evidence forming a basis for a Lagrangian quantum field theory of weak interactions is discussed. In this context, gauge-invariance aspects of such interactions are shown.
Rubbens, Jari; Brouwers, Joachim; Tack, Jan; Augustijns, Patrick
2016-12-01
This study investigated the impact of relevant gastrointestinal conditions on the intraluminal dissolution, supersaturation and precipitation behavior of the weakly basic drug indinavir. The influence of (i) concomitant PPI intake and (ii) the nutritional state on the gastrointestinal behavior of indinavir was assessed in order to identify the underlying mechanisms responsible for previously reported interactions. Five healthy volunteers were recruited into a crossover study containing the following arms: fasted state, fed state, and fasted state with concomitant proton pump inhibitor (PPI) use. In each condition, one Crixivan® capsule (400 mg indinavir) was orally administered with 240 mL of water. Gastric and duodenal fluids, aspirated as a function of time, were monitored for total and dissolved indinavir concentrations on a UPLC-MS/MS system. Indinavir's thermodynamic solubility was determined in individual aspirates to evaluate supersaturation. The bioaccessible fraction of indinavir in aspirated duodenal fluids was determined in an ex vivo permeation experiment through an artificial membrane. Nearly complete dissolution of indinavir in the fasted stomach was observed (90±3%). Regardless of dosing conditions, less indinavir was in solution in the duodenum than in the stomach. Duodenal supersaturation was observed in all three testing conditions. The highest degrees of duodenal supersaturation (6.5±5.9) were observed in the fasted state. Concomitant PPI use resulted in an increased gastric pH and a smaller fraction of indinavir being dissolved (58±24%), eventually resulting in lower intestinal concentrations. In fed-state conditions, drug release from the capsule was delayed and more gradual, although a fraction of the intragastric indinavir similar to that in the fasted state dissolved (83±12%). Indinavir was still present in the lumen of the duodenum three hours after oral administration, although it already reached 70% (on average) of the fasted
Directory of Open Access Journals (Sweden)
SEDIGHEH MOKHTARPOUR
2016-10-01
Full Text Available Introduction: Responsive medicine is an appropriate training method that trains graduates who can act effectively on primary and secondary aspects of health issues in society. Methods: This was a cross-sectional descriptive-analytic study using a quantitative method. The target population was all the students of the Nutrition and Health School of Shiraz University of Medical Sciences. The sample was randomly selected: 75 students were chosen based on the methodologist's comments, similar studies, and a random-number table from a list obtained from the school's department of education. The researcher-made questionnaire consisted of 23 questions in 2 sections, with 21 closed-ended questions and 2 open-ended questions; 70 questionnaires were completed correctly. The closed-ended questions covered 4 aspects and were answered on a 5-point Likert-type scale (completely agree to completely disagree). Face validity was confirmed by 4 faculty members. The construct validity of the questionnaire was analyzed by a factor analysis test, and its reliability was assessed in a pilot on 20 students, with a Cronbach's alpha of 0.85. The data were analyzed using descriptive statistical tests (mean, standard deviation, etc.) and the Pearson coefficient (p<0.001). Results: The results of this study showed that the maximum mean score, 3.58±0.65, was related to the context of these courses, and the minimum mean, 2.66±1.14, was related to the logbook implementation. The 2 open-ended questions indicated that the most important strengths were the use of logbooks as a guide and the definition of minimum training requirements; among the weaknesses was the mismatch between theoretical education and practical activities. Also, developing the minimum training that an expert should know and using common topics related to theoretical education were the most important points mentioned by the respondents
Angular structure of jet quenching within a hybrid strong/weak coupling model
Energy Technology Data Exchange (ETDEWEB)
Casalderrey-Solana, Jorge [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Departament de Física Quàntica i Astrofísica & Institut de Ciències del Cosmos (ICC),Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Gulhan, Doga Can [CERN, EP Department,CH-1211 Geneva 23 (Switzerland); Milhano, José Guilherme [CENTRA, Instituto Superior Técnico, Universidade de Lisboa,Av. Rovisco Pais, P-1049-001 Lisboa (Portugal); Laboratório de Instrumentação e Física Experimental de Partículas (LIP),Av. Elias Garcia 14-1, P-1000-149 Lisboa (Portugal); Theoretical Physics Department, CERN,Geneva (Switzerland); Pablos, Daniel [Departament de Física Quàntica i Astrofísica & Institut de Ciències del Cosmos (ICC),Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Rajagopal, Krishna [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States)
2017-03-27
Within the context of a hybrid strong/weak coupling model of jet quenching, we study the modification of the angular distribution of the energy within jets in heavy ion collisions, as partons within jet showers lose energy and get kicked as they traverse the strongly coupled plasma produced in the collision. To describe the dynamics transverse to the jet axis, we add the effects of transverse momentum broadening into our hybrid construction, introducing a parameter K ≡ q̂/T^3 that governs its magnitude. We show that, because of the quenching of the energy of partons within a jet, even when K ≠ 0 the jets that survive with some specified energy in the final state are narrower than jets with that energy in proton-proton collisions. For this reason, many standard observables are rather insensitive to K. We propose a new differential jet shape ratio observable in which the effects of transverse momentum broadening are apparent. We also analyze the response of the medium to the passage of the jet through it, noting that the momentum lost by the jet appears as the momentum of a wake in the medium. After freezeout this wake becomes soft particles with a broad angular distribution but with net momentum in the jet direction, meaning that the wake contributes to what is reconstructed as a jet. This effect must therefore be included in any description of the angular structure of the soft component of a jet. We show that the particles coming from the response of the medium to the momentum and energy deposited in it lead to a correlation between the momentum of soft particles well separated from the jet in angle and the direction of the jet momentum, and find qualitative but not quantitative agreement with experimental data on observables designed to extract such a correlation. More generally, by confronting the results that we obtain upon introducing transverse momentum broadening and the response of the medium to the jet with available jet data, we highlight the
Beamstop-based low-background ptychography to image weakly scattering objects
DEFF Research Database (Denmark)
Reinhardt, Juliane; Hoppe, Robert; Hofmann, Georg
2017-01-01
In recent years, X-ray ptychography has been established as a valuable tool for high-resolution imaging. Nevertheless, the spatial resolution and sensitivity in coherent diffraction imaging are limited by the signal that is detected over noise and over background scattering. Especially, coherent imaging of weakly scattering specimens suffers from incoherent background that is generated by the interaction of the central beam with matter along its propagation path, in particular close to and inside of the detector. Common countermeasures entail evacuated flight tubes or detector-side beamstops. Here, we combine two ptychographic scans, with and without a beamstop, and reconstruct them simultaneously, taking advantage of the complementary information contained in the two scans. We experimentally demonstrate the potential of this scheme for hard X-ray ptychography by imaging a weakly scattering object composed of catalytic nanoparticles and provide the analysis of the signal-to-background ratio in the diffraction patterns.
An Autonomous Divisive Algorithm for Community Detection Based on Weak Link and Link-Break Strategy
Directory of Open Access Journals (Sweden)
Xiaoyu Ding
2018-01-01
Full Text Available Divisive algorithms are widely used for community detection. A common strategy of divisive algorithms is to remove the external links which connect different communities so that the communities become disconnected from each other. Divisive algorithms have been investigated for several decades, but some challenges remain unsolved: (1) how to efficiently identify external links, (2) how to efficiently remove external links, and (3) how to end a divisive algorithm without the help of predefined parameters or community definitions. To overcome these challenges, we introduce the concepts of the weak link and autonomous division. The implementation of the proposed divisive algorithm adopts a new link-break strategy similar to a tug-of-war contest, where communities act as contestants and weak links act as breakable ropes. Empirical evaluations on artificial and real-world networks show that the proposed algorithm achieves a better accuracy-efficiency trade-off than some of the latest divisive algorithms.
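A generic divisive split in this spirit can be sketched in a few lines. This is an illustration of the general divisive strategy, not the authors' tug-of-war link-break algorithm: score each link by how weakly it is embedded (here, simply by the number of common neighbors of its endpoints) and remove the weakest link until the network first falls apart.

```python
from collections import defaultdict

def components(adj):
    """Connected components of an undirected graph given as {node: set(neighbors)}."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def weakest_edge(adj):
    """Edge whose endpoints share the fewest common neighbors: a simple 'weak link' proxy."""
    best, best_score = None, None
    for u in adj:
        for v in adj[u]:
            if u < v:
                score = len(adj[u] & adj[v])
                if best is None or score < best_score:
                    best, best_score = (u, v), score
    return best

def split_once(adj):
    """Remove weak links until the graph first breaks into separate communities."""
    adj = {u: set(vs) for u, vs in adj.items()}
    while len(components(adj)) == 1:
        u, v = weakest_edge(adj)
        adj[u].discard(v)
        adj[v].discard(u)
    return components(adj)

# Two triangles joined by a single bridge (2-3): the bridge is the weak link
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
S = split_once(dict(adj))
```

On this toy network the bridge edge has zero common neighbors, so it is removed first and the two triangles emerge as communities after a single deletion.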
Beamstop-based low-background ptychography to image weakly scattering objects
Energy Technology Data Exchange (ETDEWEB)
Reinhardt, Juliane, E-mail: juliane.reinhardt@desy.de [Deutsches Elektronen-Synchrotron DESY, D-22607 Hamburg (Germany); Hoppe, Robert [Institute of Structural Physics, Technische Universität Dresden, D-01062 Dresden (Germany); Hofmann, Georg [Institute for Chemical Technology and Polymer Chemistry, Karlsruhe Institute of Technology, D-76131 Karlsruhe (Germany); Damsgaard, Christian D. [Center for Electron Nanoscopy and Department of Physics, Technical University of Denmark, DK-2800 Lyngby (Denmark); Patommel, Jens; Baumbach, Christoph [Institute of Structural Physics, Technische Universität Dresden, D-01062 Dresden (Germany); Baier, Sina; Rochet, Amélie; Grunwaldt, Jan-Dierk [Institute for Chemical Technology and Polymer Chemistry, Karlsruhe Institute of Technology, D-76131 Karlsruhe (Germany); Falkenberg, Gerald [Deutsches Elektronen-Synchrotron DESY, D-22607 Hamburg (Germany); Schroer, Christian G. [Deutsches Elektronen-Synchrotron DESY, D-22607 Hamburg (Germany); Department Physik, Universität Hamburg, Luruper Chaussee 149, D-22761 Hamburg (Germany)
2017-02-15
In recent years, X-ray ptychography has been established as a valuable tool for high-resolution imaging. Nevertheless, the spatial resolution and sensitivity in coherent diffraction imaging are limited by the signal that is detected over noise and over background scattering. Especially, coherent imaging of weakly scattering specimens suffers from incoherent background that is generated by the interaction of the central beam with matter along its propagation path in particular close to and inside of the detector. Common countermeasures entail evacuated flight tubes or detector-side beamstops, which improve the experimental setup in terms of background reduction or better coverage of high dynamic range in the diffraction patterns. Here, we discuss an alternative approach: we combine two ptychographic scans with and without beamstop and reconstruct them simultaneously taking advantage of the complementary information contained in the two scans. We experimentally demonstrate the potential of this scheme for hard X-ray ptychography by imaging a weakly scattering object composed of catalytic nanoparticles and provide the analysis of the signal-to-background ratio in the diffraction patterns. - Highlights: • An opaque beamstop far-upstream of the detector reduces background scattering. • Increased signal-to-background ratio in the diffraction patterns. • Simultaneous ptychographic reconstruction of two data sets with and without beamstop. • Result shows high spatial resolution of 13 nm of a weakly scattering catalyst sample. • High sensitivity to less than 10^5 atoms.
Tugendhat, Tim M.; Schäfer, Björn Malte
2018-02-01
We investigate a physical, composite alignment model for both spiral and elliptical galaxies and its impact on cosmological parameter estimation from weak lensing for a tomographic survey. Ellipticity correlation functions and angular ellipticity spectra for spiral and elliptical galaxies are derived on the basis of tidal interactions with the cosmic large-scale structure and compared to the tomographic weak lensing signal. We find that elliptical galaxies cause a contribution to the weak-lensing dominated ellipticity correlation on intermediate angular scales between ℓ ≃ 40 and ℓ ≃ 400 before that of spiral galaxies dominates on higher multipoles. The predominant term on intermediate scales is the negative cross-correlation between intrinsic alignments and weak gravitational lensing (GI-alignment). We simulate parameter inference from weak gravitational lensing with intrinsic alignments unaccounted; the bias induced by ignoring intrinsic alignments in a survey like Euclid is shown to be several times larger than the statistical error and can lead to faulty conclusions when comparing to other observations. The biases generally point into different directions in parameter space, such that in some cases one can observe a partial cancellation effect. Furthermore, it is shown that the biases increase with the number of tomographic bins used for the parameter estimation process. We quantify this parameter estimation bias in units of the statistical error and compute the loss of Bayesian evidence for a model due to the presence of systematic errors as well as the Kullback-Leibler divergence to quantify the distance between the true model and the wrongly inferred one.
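Under a Gaussian approximation to both posteriors, the Kullback-Leibler divergence used above to quantify the distance between the true model and the wrongly inferred one has a closed form. The following is a minimal sketch with invented toy numbers, not the actual Euclid forecast:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) for multivariate Gaussians:
    0.5 * [tr(S1^-1 S0) + (m1-m0)^T S1^-1 (m1-m0) - k + ln(det S1 / det S0)]."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    k = mu0.size
    cov1_inv = np.linalg.inv(cov1)
    d = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0) + d @ cov1_inv @ d - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Toy 2-parameter posterior: identical covariances, mean biased by 2 sigma in one parameter
cov = np.diag([0.01**2, 0.02**2])
kl = gaussian_kl([0.30, 0.80], cov, [0.32, 0.80], cov)
```

A pure mean shift of two statistical errors in a single parameter gives KL = 2 nats, illustrating how a parameter-estimation bias of a few sigma, of the kind induced by unaccounted intrinsic alignments, dominates the information loss.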
Brilli, Lorenzo; Bechini, Luca; Bindi, Marco; Carozzi, Marco; Cavalli, Daniele; Conant, Richard; Dorich, Cristopher D; Doro, Luca; Ehrhardt, Fiona; Farina, Roberta; Ferrise, Roberto; Fitton, Nuala; Francaviglia, Rosa; Grace, Peter; Iocola, Ileana; Klumpp, Katja; Léonard, Joël; Martin, Raphaël; Massad, Raia Silvia; Recous, Sylvie; Seddaiu, Giovanna; Sharp, Joanna; Smith, Pete; Smith, Ward N; Soussana, Jean-Francois; Bellocchi, Gianni
2017-11-15
Biogeochemical simulation models are important tools for describing and quantifying the contribution of agricultural systems to C sequestration and GHG source/sink status. The abundance of simulation tools developed over recent decades, however, creates a difficulty because predictions from different models show large variability. Discrepancies between the conclusions of different modelling studies are often ascribed to differences in the physical and biogeochemical processes incorporated in equations of C and N cycles and their interactions. Here we review the literature to determine the state-of-the-art in modelling agricultural (crop and grassland) systems. In order to carry out this study, we selected the range of biogeochemical models used by the CN-MIP consortium of FACCE-JPI (http://www.faccejpi.com): APSIM, CERES-EGC, DayCent, DNDC, DSSAT, EPIC, PaSim, RothC and STICS. In our analysis, these models were assessed for the quality and comprehensiveness of underlying processes related to pedo-climatic conditions and management practices, but also with respect to time and space of application, and for their accuracy in multiple contexts. Overall, it emerged that there is a possible impact of ill-defined pedo-climatic conditions in the unsatisfactory performance of the models (46.2%), followed by limitations in the algorithms simulating the effects of management practices (33.1%). The multiplicity of scales in both time and space is a fundamental feature, which explains the remaining weaknesses (i.e. 20.7%). Innovative aspects have been identified for future development of C and N models. They include the explicit representation of soil microbial biomass to drive soil organic matter turnover, the effect of N shortage on SOM decomposition, the improvements related to the production and consumption of gases and an adequate simulations of gas transport in soil. On these bases, the assessment of trends and gaps in the modelling approaches currently employed to
An, Rui; Feng, Chang; Wang, Bin
2018-02-01
We constrain interacting dark matter and dark energy (IDMDE) models using 450 square degrees of cosmic shear data from the Kilo Degree Survey (KiDS) and the angular power spectra from Planck's latest cosmic microwave background measurements. We revisit the discordance problem between weak lensing and Planck datasets in the standard Lambda cold dark matter (ΛCDM) model and extend the discussion by introducing interacting dark sectors. The IDMDE models are found to be able to alleviate the discordance between KiDS and Planck previously inferred from the ΛCDM model, and are moderately favored by a combination of the two datasets.
Global existence of a weak solution for a model in radiation magnetohydrodynamics
Czech Academy of Sciences Publication Activity Database
Ducomet, B.; Kobera, M.; Nečasová, Šárka
2017-01-01
Roč. 150, č. 1 (2017), s. 43-65 ISSN 0167-8019 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords: radiation magnetohydrodynamics * Navier-Stokes-Fourier system * weak solution Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.702, year: 2016 https://link.springer.com/article/10.1007%2Fs10440-016-0093-y
Anderer, Carolin; Delwa de Alarcón, Natalie; Näther, Christian; Bensch, Wolfgang
2014-12-15
By following a new synthetic approach, which is based on the in situ formation of a basic medium through the reaction between the strong base [Sb(V)S4]3- and the weak acid H2O, it was possible to prepare three layered thioantimonate(III) compounds of composition [TM(2,2'-bipyridine)3][Sb6S10] (TM=Ni, Fe) and [Ni(4,4'-dimethyl-2,2'-bipyridine)3][Sb6S10] under hydrothermal conditions, featuring two different thioantimonate(III) network topologies. The antimony source, Na3SbS4·9H2O, undergoes several decomposition reactions and produces the Sb(III)S3 species, which condenses to generate the layered anion. The application of transition-metal complexes avoids crystallization of dense phases. The reactions are very fast compared to conventional hydrothermal/solvothermal syntheses and are much less sensitive to changes of the reaction parameters.
Sawada, A.; Koga, T.
2017-02-01
We have developed a method to calculate the weak localization and antilocalization corrections based on the real-space simulation, where we provide 147 885 predetermined return orbitals of quasi-two-dimensional electrons with up to 5000 scattering events that are repeatedly used. Our model subsumes that of Golub [L. E. Golub, Phys. Rev. B 71, 235310 (2005), 10.1103/PhysRevB.71.235310] when the Rashba spin-orbit interaction (SOI) is assumed. Our computation is very simple, fast, and versatile, where the numerical results, obtained all at once, cover wide ranges of the magnetic field under various one-electron interactions H' exactly. Thus, it has straightforward extensibility to incorporate interactions other than the Rashba SOI, such as the linear and cubic Dresselhaus SOIs, Zeeman effect, and even interactions relevant to the valley and pseudo spin degrees of freedom, which should provide a unique tool to study new classes of materials like emerging 2D materials. Using our computation, we also demonstrate the robustness of a persistent spin helix state against the cubic Dresselhaus SOI.
International Nuclear Information System (INIS)
Bjorken, J.D.
1978-01-01
Weak interactions are studied from a phenomenological point of view, using a minimal number of theoretical hypotheses. Charged-current phenomenology, and then neutral-current phenomenology, are discussed. All of this is described in terms of a global SU(2) symmetry plus an electromagnetic correction. The intermediate-boson hypothesis is introduced and lower bounds on the range of the weak force are inferred. This phenomenology does not yet reconstruct all the predictions of the conventional SU(2)xU(1) gauge theory. To do that requires an additional assumption: the restoration of SU(2) symmetry at asymptotic energies.
Xiu, Xiao-Ming; Cui, Cen; Lin, Yan-Fang; Dong, Li; Dong, Hai-Kuan; Gao, Ya-Jun
2018-01-01
With the assistance of weak cross-Kerr nonlinear interactions between photons and coherent states via Kerr media, we propose a scheme to split and acquire quantum information with three-photon perfect W states. By means of a fault-tolerant circuit, the perfect W states are distributed to the participants without being affected by collective noise. On this basis, we present a scheme for splitting and acquiring a single-photon state with the shared perfect W states. Together with the mature techniques of classical feed-forward, simple and available linear optical elements are applied in the procedure, enhancing the feasibility of the theoretical scheme proposed here.
DEFF Research Database (Denmark)
Chen, Shuheng; Hu, Weihao; Chen, Zhe
2014-01-01
Based on a generalized chain-table storage structure (GCTSS), a novel power flow method is proposed, which can be used to solve the power flow of weakly meshed distribution networks with multiple distributed generators (DGs). GCTSS is designed based on chain-table technology and its target is to describe the topology of radial distribution networks with a clear logic and a small memory size. The strategies of compensating the equivalent currents of break-point branches and the reactive power outputs of PV-type DGs are presented on the basis of the superposition theorem. Their formulations … done on the modified version of the IEEE 69-bus distribution system. The results verify that the proposed method can keep a good efficiency level. Hence, it is promising to calculate the power flow of weakly meshed distribution networks with multiple DGs.
Ottenheijm, Coen A. C.; Buck, Danielle; de Winter, Josine M.; Ferrara, Claudia; Piroddi, Nicoletta; Tesi, Chiara; Jasper, Jeffrey R.; Malik, Fady I.; Meng, Hui; Stienen, Ger J. M.; Beggs, Alan H.; Labeit, Siegfried; Poggesi, Corrado; Lawlor, Michael W.; Granzier, Henk
2013-01-01
Nebulin—a giant sarcomeric protein—plays a pivotal role in skeletal muscle contractility by specifying thin filament length and function. Although mutations in the gene encoding nebulin (NEB) are a frequent cause of nemaline myopathy, the most common non-dystrophic congenital myopathy, the mechanisms by which mutations in NEB cause muscle weakness remain largely unknown. To better understand these mechanisms, we have generated a mouse model in which Neb exon 55 is deleted (NebΔExon55) to repl...
Distributed Weak Fiber Bragg Grating Vibration Sensing System Based on 3 × 3 Fiber Coupler
Li, Wei; Zhang, Jian
2018-03-01
A novel distributed weak fiber Bragg grating (FBG) vibration sensing system has been designed to overcome the disadvantages of the conventional methods for optical fiber sensing networks, which are: low signal intensity in the usually adopted time-division multiplexing (TDM) technology, an insufficient quantity of multiplexed FBGs in the wavelength-division multiplexing (WDM) technology, and the fact that the mixed WDM/TDM technology measures only the physical parameters at the FBG locations and cannot perform distributed measurement over the whole optical fiber. The novel system determines vibration events along the optical fiber line from the intensity variation of the interference signals between reflections from adjacent weak FBGs, and locates the vibration points accurately using the TDM technology. Tests have proven that this system performs vibration signal detection and demodulation more conveniently than the conventional methods for optical fiber sensing systems. It also measures over the whole optical fiber, thus fulfilling distributed measurement, with a locating accuracy of up to 20 m, and it can detect signals whose drive voltage is as low as 0.2 V over a frequency range of 3 Hz to 1000 Hz. The system has great practical significance and application value for perimeter surveillance systems.
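As a back-of-the-envelope illustration of the TDM locating principle described above (not code from the paper; the delay value and group index below are assumed for illustration), the position of a reflecting weak FBG follows directly from the round-trip time of its reflection:

```python
# Locating a reflection by time-division multiplexing: the round-trip
# delay of light reflected from a weak FBG at distance z along the
# fibre is t = 2*n*z/c, so z = c*t / (2*n). Values are illustrative.
C = 299_792_458.0   # speed of light in vacuum, m/s
N_G = 1.468         # assumed group index of standard single-mode fibre

def grating_distance(round_trip_s):
    """Distance along the fibre implied by a round-trip delay (s)."""
    return C * round_trip_s / (2.0 * N_G)

# a reflection arriving ~97.9 microseconds after launch sits near 10 km
z = grating_distance(97.9e-6)
print(round(z))  # → 9996
```

At this group index, the quoted 20 m locating accuracy corresponds to resolving round-trip delays of roughly 0.2 µs.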
Landstreet, J. D.; Bagnulo, S.; Valyavin, G.; Valeev, A. F.
2017-11-01
Magnetic fields are detected in a few percent of white dwarfs. The number of such magnetic white dwarfs known is now some hundreds. Fields range in strength from a few kG to several hundred MG. Almost all the known magnetic white dwarfs have a mean field modulus ≥1 MG. We are trying to fill a major gap in observational knowledge at the low field limit (≤200 kG) using circular spectro-polarimetry. In this paper we report the discovery and monitoring of strong, periodic magnetic variability in two previously discovered "super-weak field" magnetic white dwarfs, WD 2047+372 and WD 2359-434. WD 2047+372 has a mean longitudinal field that reverses between about -12 and + 15 kG, with a period of 0.243 d, while its mean field modulus appears nearly constant at 60 kG. The observations can be interpreted in terms of a dipolar field tilted with respect to the stellar rotation axis. WD 2359-434 always shows a weak positive longitudinal field with values between about 0 and + 12 kG, varying only weakly with stellar rotation, while the mean field modulus varies between about 50 and 100 kG. The rotation period is found to be 0.112 d using the variable shape of the Hα line core, consistent with available photometry. The field of this star appears to be much more complex than a dipole, and is probably not axisymmetric. Available photometry shows that WD 2359-434 is a light variable with an amplitude of only 0.005 mag; our own photometry shows that if WD 2047+372 is photometrically variable, the amplitude is below about 0.01 mag. These are the first models for magnetic white dwarfs with fields below about 100 kG based on magnetic measurements through the full stellar rotation. They reveal two very different magnetic surface configurations, and that, contrary to simple ohmic decay theory, WD 2359-434 has a much more complex surface field than the much younger WD 2047+372. Based, in part, on observations collected at the European Organisation for Astronomical Research in the
DEFF Research Database (Denmark)
Piligkos, S.; Slep, L.D.; Weyhermuller, T.
2009-01-01
A detailed study of the magnetic circular dichroism (MCD) spectra of weakly exchange-coupled transition metal heterodimers is reported. The systems consist of three isostructural complexes of the type [LM(III)(PyA)3M(II)](ClO4)2, where L represents 1,4,7-trimethyl-1,4,7-triazacyclononane and PyA- is the monoanion of pyridine-2-aldoxime. The trivalent metal ion M(III) is either diamagnetic Ga(III) or paramagnetic Cr(III) (S_Cr = 3/2). The divalent metal ion M(II) is either diamagnetic Zn(II) or paramagnetic Ni(II) (S_Ni = 1). The three systems 1 (CrZn), 2 (GaNi) and 3 (CrNi) have been structurally … bands of the minority-spin Ni(II) ligand field were observed to change sign relative to the parent complex 2. This behavior has been analyzed. The present work hence provides a benchmark study for the application of MCD spectroscopy to weakly interacting transition metal dimers. (C) 2008 Elsevier…
A systems perspective of waste and energy - Strengths and weaknesses of the ORWARE model
Energy Technology Data Exchange (ETDEWEB)
Eriksson, Ola
2000-11-01
Waste management of today in Sweden is a complex phenomenon that demands a scientific and systematic approach. The complexity is a result of a wide variety of actors, technologies, and impacts on the environment, health, and the economy. Waste management also has high relevance with respect to energy. There are direct connections, e.g. energy recovery from waste, but also indirect ones through the system's complexity and its environmental and economic impacts. Helpful tools in the planning of waste management are different types of models, of which ORWARE is one. Based on principles from Life Cycle Analysis (LCA) and complemented with a simple Cost Benefit Analysis (CBA), ORWARE can provide some help in finding environmentally sound solutions for waste management systems. The model does not answer all questions raised by practitioners but can still be used for advisory purposes. The model does not include sociological or political aspects, but it covers the area of physical flows with impacts on environment, society and economy. Other impacts have to be considered with other methods. The experiences from using ORWARE in Swedish municipalities during more than half a decade clearly show the advantages and disadvantages of the tool. The model is very flexible when it comes to the possibility of site-specific adjustments of input data and process functions. With the help of the model, the complexity of the studied system can be illustrated by e.g. a map of the number of connections between different types of information. In this way ORWARE supports dialogue between different stakeholders and collects knowledge in a unique way. On the other hand, modelling such an extensive and complex system often leads to errors that take time to find and correct. The model cannot be considered user friendly and does not cover all aspects wanted by society. There are also educational problems with different time frames and space boundaries in the analysis that make the results hard
Savchenko, M. L.; Kozlov, D. A.; Kvon, Z. D.; Mikhailov, N. N.; Dvoretsky, S. A.
2016-09-01
The anomalous magnetoresistance (AMR) caused by weak antilocalization effects in a three-dimensional topological insulator based on a strained mercury telluride film is experimentally studied. It is demonstrated that the obtained results are in good agreement with the universal theory of Zduniak, Dyakonov, and Knap. It is found that the AMR in the bulk band gap is far below that expected for a system of Dirac fermions. Such a discrepancy can presumably be related to a nonzero effective mass of the Dirac fermions. The filling of energy bands in the bulk is accompanied by a pronounced increase in the AMR. This is a signature of weak coupling between the surface and bulk charge carriers.
Buividovich, P. V.; Davody, A.
2017-12-01
We develop numerical tools for diagrammatic Monte Carlo simulations of non-Abelian lattice field theories in the 't Hooft large-N limit based on the weak-coupling expansion. First, we note that the path integral measure of such theories contributes a bare mass term in the effective action which is proportional to the bare coupling constant. This mass term renders the perturbative expansion infrared-finite and allows us to study it directly in the large-N and infinite-volume limits using the diagrammatic Monte Carlo approach. On the exactly solvable example of a large-N O(N) sigma model in D = 2 dimensions we show that this infrared-finite weak-coupling expansion contains, in addition to powers of bare coupling, also powers of its logarithm, reminiscent of resummed perturbation theory in thermal field theory and resurgent trans-series without exponential terms. We numerically demonstrate the convergence of these double series to the manifestly nonperturbative dynamical mass gap. We then develop a diagrammatic Monte Carlo algorithm for sampling planar diagrams in the large-N matrix field theory, and apply it to study this infrared-finite weak-coupling expansion for the large-N U(N)×U(N) nonlinear sigma model (principal chiral model) in D = 2. We sample up to 12 leading orders of the weak-coupling expansion, which is the practical limit set by the increasingly strong sign problem at high orders. Comparing diagrammatic Monte Carlo with conventional Monte Carlo simulations extrapolated to infinite N, we find a good agreement for the energy density as well as for the critical temperature of the "deconfinement" transition. Finally, we comment on the applicability of our approach to planar QCD at zero and finite density.
Zaika, Yury V.; Kostikova, Ekaterina K.
2017-11-01
One of the technological challenges for hydrogen materials science (including the ITER project) is the currently active search for structural materials with various potential applications that will have predetermined limits of hydrogen permeability. One of the experimental methods is thermal desorption spectrometry (TDS). A hydrogen-saturated sample is degassed under vacuum with monotonic heating. The desorption flux is measured by a mass spectrometer to determine the character of the interactions of hydrogen isotopes with the solid. We are interested in such transfer parameters as the coefficients of diffusion, dissolution, and desorption. The paper presents thermal desorption functional differential equations of neutral type with an integrable weak singularity and a numerical method for TDS spectrum simulation, in which only the integration of a nonlinear system of low-order ordinary differential equations (ODEs) is required. This work is supported by the Russian Foundation for Basic Research (project 15-01-00744).
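For orientation only, a TDS peak of the kind simulated in such work can be sketched with the simplest first-order desorption kinetics under linear heating; this toy model (with an assumed prefactor and barrier) is far cruder than the neutral-type functional differential equations the authors actually solve:

```python
import math

# Minimal first-order thermal-desorption sketch (Polanyi-Wigner
# kinetics under linear heating). NOT the paper's model -- just the
# simplest ODE whose solution shows the characteristic TDS peak.
nu, E, R = 1e13, 120e3, 8.314    # prefactor (1/s), barrier (J/mol) -- assumed
T0, beta, dt = 300.0, 1.0, 0.01  # start temp (K), heating rate (K/s), step (s)

theta, t = 1.0, 0.0              # initial coverage, time
spectrum = []                    # (temperature, desorption flux)
while theta > 1e-6:
    T = T0 + beta * t
    flux = nu * theta * math.exp(-E / (R * T))
    spectrum.append((T, flux))
    theta -= flux * dt           # explicit Euler step for d(theta)/dt = -flux
    t += dt

T_peak = max(spectrum, key=lambda p: p[1])[0]
print(round(T_peak))  # temperature of the simulated desorption peak, in K
```

The flux first rises with temperature and then collapses as the coverage is exhausted, producing the familiar single TDS peak; shifting the heating rate or barrier shifts the peak temperature.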
Reznikov, Roman; Diwan, Mustansir; Nobrega, José N; Hamani, Clement
2015-02-01
Most of the available preclinical models of PTSD have focused on isolated behavioural aspects and have not considered individual variations in response to stress. We employed behavioural criteria to identify and characterize a subpopulation of rats that present several features analogous to PTSD-like states after exposure to classical fear conditioning. Outbred Sprague-Dawley rats were segregated into weak- and strong-extinction groups on the basis of behavioural scores during extinction of conditioned fear responses. Animals were subsequently tested for anxiety-like behaviour in the open-field test (OFT), novelty suppressed feeding (NSF) and elevated plus maze (EPM). Baseline plasma corticosterone was measured prior to any behavioural manipulation. In a second experiment, rats underwent OFT, NSF and EPM prior to being subjected to fear conditioning to ascertain whether or not pre-stress levels of anxiety-like behaviours could predict extinction scores. We found that 25% of rats exhibit low extinction rates of conditioned fear, a feature that was associated with increased anxiety-like behaviour across multiple tests in comparison to rats showing strong extinction. In addition, weak-extinction animals showed low levels of corticosterone prior to fear conditioning, a variable that seemed to predict extinction recall scores. In a separate experiment, anxiety measures taken prior to fear conditioning were not predictive of a weak-extinction phenotype, suggesting that weak-extinction animals do not show detectable traits of anxiety in the absence of a stressful experience. These findings suggest that extinction impairment may be used to identify stress-vulnerable rats, thus providing a useful model for elucidating mechanisms and investigating potential treatments for PTSD. Copyright © 2014 Elsevier Ltd. All rights reserved.
Weak neutral-current interactions
International Nuclear Information System (INIS)
Barnett, R.M.
1978-08-01
The roles of each type of experiment in establishing uniquely the values of the neutral-current couplings of u and d quarks are analyzed, together with their implications for gauge models of the weak and electromagnetic interactions. An analysis of the neutral-current couplings of electrons, and of the data based on the assumption that only one Z0 boson exists, is given. Also a model-independent analysis of parity violation experiments is discussed. 85 references
Directory of Open Access Journals (Sweden)
O. G. Isaeva
2009-01-01
Full Text Available We formulate a dynamical model for the anti-tumour immune response based on intercellular cytokine-mediated interactions, with interleukin-2 (IL-2) taken into account. The analysis shows that the expression level of tumour antigens on antigen-presenting cells has a distinct influence on the tumour dynamics. At low antigen presentation, progressive tumour growth takes place up to the highest possible value. At high antigen presentation, there is a decrease in tumour size to some value at which a dynamical equilibrium between the tumour and the immune system is reached. In the case of medium antigen presentation, both of these regimes can be realized depending on the initial tumour size and the condition of the immune system. A pronounced immunomodulating effect (the suppression of tumour growth and the normalization of IL-2 concentration) is established by considering the influence of low-intensity electromagnetic microwaves as a parametric perturbation of the dynamical system. This finding is in qualitative agreement with recent experimental results on the immunocorrective effects of centimetre electromagnetic waves in tumour-bearing mice.
Weakly coupled heat bath models for Gibbs-like invariant states in nonlinear wave equations
Bajars, J.; Frank, J. E.; Leimkuhler, B. J.
2013-07-01
Thermal bath coupling mechanisms as utilized in molecular dynamics are applied to partial differential equation models. Working from a semi-discrete (Fourier mode) formulation for the Burgers-Hopf or Korteweg-de Vries equation, we introduce auxiliary variables and stochastic perturbations in order to drive the system to sample a target ensemble which may be a Gibbs state or, more generally, any smooth distribution defined on a constraint manifold. We examine the ergodicity of approaches based on coupling of the heat bath to the high wave numbers, with the goal of controlling the ensemble through the fast modes. We also examine different thermostat methods in the extent to which dynamical properties are corrupted in order to accurately compute the average of a desired observable with respect to the invariant distribution. The principal observation of this paper is that convergence to the invariant distribution can be achieved by thermostatting just the highest wave number, while the evolution of the slowest modes is little affected by such a thermostat.
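The basic ingredient of such heat bath couplings can be sketched as an Ornstein-Uhlenbeck (Langevin) thermostat acting on the velocity of a single mode; in the setting described above, an update of this kind is attached only to the highest wave numbers. All parameters below are illustrative:

```python
import math, random

random.seed(42)

# Exact Ornstein-Uhlenbeck (Langevin) thermostat step applied to one
# mode's velocity. The target ensemble has <v^2> = kT in reduced
# units; the damping/noise pair below satisfies the fluctuation-
# dissipation balance exactly for any step size.
kT, gamma, dt = 1.0, 2.0, 0.05
c = math.exp(-gamma * dt)          # damping factor over one step
s = math.sqrt(kT * (1.0 - c * c))  # matching noise amplitude

v, acc, n = 0.0, 0.0, 200_000
for _ in range(n):
    v = c * v + s * random.gauss(0.0, 1.0)
    acc += v * v

print(round(acc / n, 2))  # sampled <v^2>; should be close to kT = 1.0
```

The sampled second moment converges to the Gibbs value regardless of the initial condition, which is the elementary version of the ergodicity property the paper examines when only a few fast modes are thermostatted.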
Energy Technology Data Exchange (ETDEWEB)
Arroyo-Urena, M.A.; Tavares-Velasco, G. [Benemerita Universidad Autonoma de Puebla, Facultad de Ciencias Fisico-Matematicas, Puebla, PUE (Mexico); Hernandez-Tome, G. [Benemerita Universidad Autonoma de Puebla, Facultad de Ciencias Fisico-Matematicas, Puebla, PUE (Mexico); Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional, Departamento de Fisica, Mexico City (Mexico)
2017-04-15
We obtain analytical expressions, both in terms of parametric integrals and Passarino-Veltman scalar functions, for the one-loop contributions to the anomalous weak magnetic dipole moment (AWMDM) of a charged lepton in the framework of the simplest little Higgs model (SLHM). Our results are general and can be useful to compute the weak properties of a charged lepton in other extensions of the standard model (SM). As a by-product we obtain generic contributions to the anomalous magnetic dipole moment (AMDM), which agree with previous results. We then study numerically the potential contributions from this model to the τ lepton AMDM and AWMDM for values of the parameter space consistent with current experimental data. It is found that they depend mainly on the energy scale f at which the global symmetry is broken and the t_β parameter, whereas there is little sensitivity to a mild change in the values of other parameters of the model. While the τ AMDM is of the order of 10^-9, the real (imaginary) part of its AWMDM is of the order of 10^-9 (10^-10). These values seem to be out of the reach of the expected experimental sensitivity of future experiments. (orig.)
National Research Council Canada - National Science Library
Basu, Bamandas
2008-01-01
Linear dispersion relations for electrostatic waves in spatially inhomogeneous, current-carrying anisotropic plasma, where the equilibrium particle velocity distributions are modeled by various Lorentzian (kappa...
WEAK SOLVABILITY FOR A CLASS OF CONTACT PROBLEMS
Directory of Open Access Journals (Sweden)
Andaluzia Matei
2010-07-01
Full Text Available A unilateral frictionless contact model, under the small deformations hypothesis, for static processes is considered. We model the behavior of the material by a constitutive law stated in a subdifferential form. The contact is described with Signorini's condition. Our study focuses on the weak solvability of the model, based on a weak formulation with dual Lagrange multipliers
Zhang, Shangbin; He, Qingbo; Ouyang, Kesai; Xiong, Wei
2018-02-01
The wayside Acoustic Defective Bearing Detector (ADBD) system plays an important role in ensuring the safety of railway transportation. However, Doppler distortion and multi-bearing source aliasing in the acquired acoustic bearing signals significantly decrease the accuracy of bearing diagnosis. Traditional multisource separation schemes using time-frequency filters constructed from a single microphone signal always show poor performance on weak signal separation. Based on the assumption that different sources occupy different spatial locations, this paper proposes a novel time-varying spatial filtering rearrangement (TSFR) scheme based on a microphone array to overcome these difficulties. In the scheme, a zero-angle spatial filter and peak searching are proposed to obtain the time-centers of the corresponding sources. Based on these time-centers, several time-varying spatial filters are designed to extract the different source signals. Then interpolation and rearrangement are used to correct the Doppler distortion and reconstruct the corresponding separated signals. Finally, train bearing fault diagnosis is implemented by analyzing the envelope spectrum of the corrected signals. Because the time-varying spatial filter construction depends only on the source location and has little relationship with the signal energy, the proposed TSFR scheme has significant advantages in weak signal separation and diagnosis in comparison with traditional ones. As verified by both simulation and experimental cases, the proposed array-based TSFR scheme shows good performance on multiple fault source separation and is expected to be used in the ADBD system.
On the weaknesses of the valence shell electron pair repulsion (VSEPR) model
Røeggen, Inge
1986-07-01
The validity of the valence shell electron pair repulsion (VSEPR) model is discussed within the framework of an antisymmetric product of strongly orthogonal geminals (APSG). It is shown that when a molecule is partitioned into fragments consisting of a central fragment, lone pairs, bond pairs, and ligands, the total APSG energy, including the nuclear repulsion terms, can be written as a sum of intra- and interfragment energies. The VSEPR terms can be identified as three out of 13 different energy components. The analysis is applied to the water molecule. Six of the energy components neglected in the VSEPR model vary more strongly with the bond angle than the terms which are included in the model. According to this analysis it is difficult to consider the VSEPR model a valid framework for discussing molecular equilibrium geometries. It is suggested that energy fragment analysis might represent an alternative model.
Mendonça, J. R. G.
2018-04-01
We propose and investigate a one-parameter probabilistic mixture of one-dimensional elementary cellular automata under the guise of a model for the dynamics of a single-species unstructured population with nonoverlapping generations in which individuals have smaller probability of reproducing and surviving in a crowded neighbourhood but also suffer from isolation and dispersal. Remarkably, the first-order mean field approximation to the dynamics of the model yields a cubic map containing terms representing both logistic and weak Allee effects. The model has a single absorbing state devoid of individuals, but depending on the reproduction and survival probabilities can achieve a stable population. We determine the critical probability separating these two phases and find that the phase transition between them is in the directed percolation universality class of critical behaviour.
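A minimal sketch of such a cubic mean-field map, with generic coefficients chosen for illustration (not the ones derived from the cellular automaton), shows both the absorbing empty state and a stable surviving population:

```python
def step(x, r=0.1, a=0.2):
    """One generation of a cubic mean-field map with logistic and
    weak Allee terms (illustrative coefficients, not the paper's).

    Per-capita growth r*(x + a)*(1 - x) is positive but reduced at
    low density (weak Allee effect) and turns negative above the
    carrying capacity x = 1 (logistic crowding).
    """
    return x + r * x * (x + a) * (1.0 - x)

# the empty state is absorbing: an extinct population stays extinct
assert step(0.0) == 0.0

# a small initial density survives and settles at the capacity
x = 0.01
for _ in range(2000):
    x = step(x)
print(round(x, 6))  # → 1.0
```

With these coefficients the per-capita growth rate at zero density is small but positive, so there is no extinction threshold; shrinking `a` below zero would instead produce a strong Allee effect with a threshold separating the two phases.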
Unified SU(2)xU(1)xU'(1)-models of weak and electromagnetic interactions of leptons
International Nuclear Information System (INIS)
Guliev, N.A.; Dzhafarov, I.G.; Mekhtiev, B.I.; Yakh'yaev, R.Sh.
1981-01-01
The possibility of constructing unified models of weak and electromagnetic lepton interactions in the framework of broken gauge SU(2)xU(1)xU'(1) symmetry is considered. In the first part, the breaking of the SU(2)xU(1)xU'(1) symmetry is analyzed for different variants of combining two types of Goldstone-Higgs fields: the isodoublet phi and the isosinglet PHI; anti-xi and PHI; phi and the complex isovector anti-xi. In the second part, possible lepton models are constructed in which the masses of the introduced leptons can be generated in the cases of symmetry breaking considered in the first part. The third part concerns the acceptability of the constructed models in light of existing experimental data on the cross sections of the scattering of muon neutrinos and antineutrinos on electrons
International Nuclear Information System (INIS)
Mudry, Christopher; Wen Xiaogang
1999-01-01
Effective theories for random critical points are usually non-unitary, and thus may contain relevant operators with negative scaling dimensions. To study the consequences of the existence of negative-dimensional operators, we consider the random-bond XY model. It has been argued that the XY model on a square lattice, when weakly perturbed by random phases, has a quasi-long-range ordered phase (the random spin wave phase) at sufficiently low temperatures. We show that infinitely many relevant perturbations to the proposed critical action for the random spin wave phase were omitted in all previous treatments. The physical origin of these perturbations is intimately related to the existence of broadly distributed correlation functions. We find that those relevant perturbations do enter the Renormalization Group equations, and affect critical behavior. This raises the possibility that the random XY model has no quasi-long-range ordered phase and no Kosterlitz-Thouless (KT) phase transition.
Casalderrey-Solana, Jorge; Milhano, Jose Guilherme; Pablos, Daniel; Rajagopal, Krishna
2016-06-11
We confront a hybrid strong/weak coupling model for jet quenching with data from LHC heavy ion collisions. The model combines the perturbative QCD physics at high momentum transfer and the strongly coupled dynamics of non-Abelian gauge theory plasmas in a phenomenological way. By performing a full Monte Carlo simulation, and after fitting one single parameter, we successfully describe several jet observables at the LHC, including dijet and photon-jet measurements. Within current theoretical and experimental uncertainties, we find that such observables show little sensitivity to the specifics of the microscopic energy loss mechanism. We also present a new observable, the ratio of the fragmentation function of inclusive jets to that of the associated jets in dijet pairs, which can discriminate among different medium models. Finally, we discuss the importance of the plasma response to jet passage in jet shapes.
Directory of Open Access Journals (Sweden)
Natarajan Raghunand
2001-01-01
Full Text Available Uptake of weak acid and weak base chemotherapeutic drugs by tumors is greatly influenced by the tumor extracellular/interstitial pH (pHe), the intracellular pH (pHi) maintained by the tumor cells, and the ionization properties of the drug itself. The acid-outside plasmalemmal pH gradient in tumors acts to exclude weak base drugs like the anthracyclines, anthraquinones, and vinca alkaloids from the cells, leading to a substantial degree of “physiological drug resistance” in tumors. We have induced acute metabolic alkalosis in C3H tumor-bearing C3H/hen mice, by gavage and by intraperitoneal (i.p.) administration of NaHCO3. 31P magnetic resonance spectroscopic measurements of 3-aminopropylphosphonate show increases of up to 0.6 pH units in tumor pHe, and 0.2 to 0.3 pH units in hind leg tissue pHe, within 2 hours of i.p. administration of NaHCO3. Theoretical calculations of mitoxantrone uptake into tumor and normal (hind leg) tissue at the measured pHe and pHi values indicate that a gain in therapeutic index of up to 3.3-fold is possible with NaHCO3 pretreatment. Treatment of C3H tumor-bearing mice with 12 mg/kg mitoxantrone resulted in a tumor growth delay (TGD) of 9 days, whereas combined NaHCO3/mitoxantrone therapy resulted in an enhancement of the TGD to 16 days.
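The pH-dependent exclusion described above can be illustrated with the classical pH-partition (Henderson-Hasselbalch) model, assuming only the neutral species crosses the membrane; the pKa below is an assumed illustrative value, not a figure from the study:

```python
def weak_base_ratio(pKa, pH_in, pH_out):
    """Equilibrium ratio of total intracellular to extracellular drug
    for a monoprotic weak base, assuming only the neutral species
    permeates the membrane (textbook pH-partition model, not the
    paper's full calculation).
    """
    return (1 + 10 ** (pKa - pH_in)) / (1 + 10 ** (pKa - pH_out))

pKa = 8.3  # assumed pKa for a weak-base chemotherapeutic

acidic = weak_base_ratio(pKa, pH_in=7.2, pH_out=6.8)    # acid-outside tumour
alkaline = weak_base_ratio(pKa, pH_in=7.2, pH_out=7.4)  # after alkalinization

# raising extracellular pH relieves the acid-outside exclusion
assert alkaline > acidic
print(round(alkaline / acidic, 2))  # fold-gain in predicted cellular uptake
```

With these illustrative numbers the predicted uptake gain is a few-fold, the same order as the therapeutic-index gain quoted in the abstract.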
Global weak solutions for a compressible gas-liquid model with well-formation interaction
Evje, Steinar
The objective of this work is to explore a compressible gas-liquid model designed for modeling of well flow processes. We build into the model well-reservoir interaction by allowing flow of gas between well and formation (surrounding reservoir). Inflow of gas and subsequent expansion of gas as it ascends towards the top of the well (a so-called gas kick) represents a major concern for various well operations in the context of petroleum engineering. We obtain a global existence result under suitable assumptions on the regularity of initial data and the rate function that controls the flow of gas between well and formation. Uniqueness is also obtained by imposing more regularity on the initial data. The key estimates are to obtain appropriate lower and upper bounds on the gas and liquid masses. For that purpose we introduce a transformed version of the original model that is highly convenient for analysis of the original model. In particular, in the analysis of the transformed model additional terms, representing well-formation interaction, can be treated by natural extensions of arguments that previously have been employed for the single-phase Navier-Stokes model. The analysis ensures that transition to single-phase regions do not appear when the initial state is a true gas-liquid mixture.
Stabilizing the weak scale with conformal dynamics: A survey of model building approaches
Galloway, Jamison Robert
The Standard Model of particle physics stands as the most accurate description we have of our observed phenomena. It accommodates the experimental data collected to date, and provides an economical and predictive framework for understanding nature on small scales. The model can in fact be consistently extrapolated to the smallest length scales we can imagine, where the concept of spacetime itself is believed to require modification. As such, the Standard Model stands as a truly monumental achievement of scientific pursuits. The successes of the model, however, present some equally profound questions. The model, for instance, can in fact be extrapolated to very high scales, but only at the cost of introducing a highly unstable hierarchy. This thesis addresses the possibilities afforded by conformal field theories in addressing this problem. There are three classes of models discussed: four-dimensional composite models, five-dimensional composite models, and unparticle models. The foundations of each scenario are reviewed, and new approaches to solving some of their problems are presented. In each case, conformality plays a central role. Generically this is due to the fact that nontrivial scale dependence is at the heart of conformal field theories: we will see the common occurrence of large anomalous dimensions in strongly-coupled conformal field theories, which allow a softening of the dependence of relatively low-energy physics on unknown physics at higher energies. Finding a mechanism to achieve a stable separation of low-scale physics presents many challenges, but typically gives concrete predictions for new phenomena to be observed at the Large Hadron Collider, which recently became the world's highest energy particle accelerator. What will be revealed there in coming years is still very uncertain, so knowing in advance how each theoretical construction will be manifested physically is the immediate concern of particle physicists. The goal of this work is to
Zou, Yonghong; Zheng, Wei
2013-05-21
Field application of livestock manure introduces animal hormones and veterinary antibiotics into the environment. Colloids present in manure may potentially intensify the environmental risk of groundwater pollution by colloid-facilitated contaminant transport. The transport behavior of the veterinary antibiotic florfenicol in saturated homogeneously packed soil columns has been investigated in both the presence and absence of manure colloids. Results show that facilitated transport of florfenicol is significant in the presence of manure colloids. Multiple chemical and physical processes caused by the presence of manure colloids were considered to contribute to facilitated transport. Florfenicol breakthrough curves (BTCs) were fit well by two models. The two-site nonequilibrium adsorption contaminant transport model suggested the mechanisms for facilitated florfenicol transport are as follows: manure colloids decrease the sorption capacity of florfenicol to soil, enhance the instantaneous equilibrium adsorption, and suppress the time-dependent kinetic adsorption processes. The colloid-facilitated model further evaluated the partition coefficient of florfenicol to colloids and indicated that cotransport has little contribution. A stepwise inverse model fitting approach resulted in robust parameter estimation. The adoption of the nonlinear Freundlich adsorption equation in the two-site nonequilibrium model significantly increased the fit of the model to the breakthrough curves.
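The nonlinear Freundlich adsorption equation adopted in the two-site model above, S = K_f·C^n, can be fit to batch sorption data by nonlinear least squares. A minimal sketch, assuming hypothetical concentration data (the values and parameter names below are illustrative, not from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):
    """Freundlich isotherm: sorbed amount S = Kf * C**n."""
    return kf * c**n

# Hypothetical aqueous concentrations (mg/L) and sorbed amounts (mg/kg)
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
s = np.array([1.1, 1.9, 3.2, 6.0, 9.5])

(kf, n), _ = curve_fit(freundlich, c, s, p0=(1.0, 1.0))
print(f"Kf = {kf:.3f}, n = {n:.3f}")  # n < 1 indicates concave (nonlinear) sorption
```

An exponent n well below 1 is what makes the Freundlich form improve the breakthrough-curve fit over a linear (n = 1) isotherm.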
International Nuclear Information System (INIS)
Deshpande, N.G.
1980-01-01
By electro-weak theory is meant the unified field theory that describes both weak and electro-magnetic interactions. The development of a unified electro-weak theory is certainly the most dramatic achievement in theoretical physics to occur in the second half of this century. It puts weak interactions on the same sound theoretical footing as quantum electrodynamics. Many theorists have contributed to this development, which culminated in the works of Glashow, Weinberg and Salam, who were jointly awarded the 1979 Nobel Prize in physics. Some of the important ideas that contributed to this development are the theory of beta decay formulated by Fermi, and parity violation, suggested by Lee and Yang and incorporated into the immensely successful V-A theory of weak interactions by Sudarshan and Marshak. At the same time, ideas of gauge invariance were applied to weak interactions by Schwinger, Bludman and Glashow. Weinberg and Salam then went one step further and wrote a theory that is renormalizable, i.e., all higher-order corrections are finite, no mean feat for a quantum field theory. The theory had to await the development of the quark model of hadrons for its completion. A description of the electro-weak theory is given.
Tsume, Yasuhiro; Takeuchi, Susumu; Matsui, Kazuki; Amidon, Gregory E; Amidon, Gordon L
2015-08-30
dissolution of BCS class IIb drugs, with dasatinib as a model drug, including different gastric conditions. The maximum dissolution of dasatinib with USP dissolution apparatus II was less than 1% in pH 6.5 SIF, while that with mGIS (pH 1.2 SGF/pH 6.5 SIF) reached almost 100%. The supersaturation and precipitation of dasatinib were observed in the in vitro dissolution studies with mGIS but not with USP apparatus II. Additionally, dasatinib dissolution with mGIS was reduced to less than 10% when the gastric pH was elevated, suggesting that co-administration of acid-reducing agents will decrease the oral bioavailability of dasatinib. Accurate prediction of in vivo drug dissolution would be beneficial for assuring product safety and efficacy for patients. To this end, we have created a new in vitro dissolution system, mGIS, to predict the in vivo dissolution phenomena of a weak base drug, dasatinib. The experimental results, when combined with in silico simulation, suggest that mGIS predicted the in vivo dissolution well, including the effect of elevated gastric pH. Thus, mGIS might be suitable for predicting the in vivo dissolution of weakly basic drugs. This mGIS methodology is expected to significantly advance the prediction of in vivo drug dissolution, and to assist in optimizing product development and drug formulation design in support of Quality by Design (QbD) initiatives. Copyright © 2015 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
O’Carroll, Michael
2012-01-01
We consider the interaction of particles in weakly correlated lattice quantum field theories. In the imaginary-time functional integral formulation of these theories there is a relative-coordinate lattice Schroedinger operator H which approximately describes the interaction of these particles. Scalar and vector spin, QCD and Gross-Neveu models are included in these theories. In the weakly correlated regime H = H₀ + W, where H₀ = −γΔ_l, 0 < γ ≪ 1, and Δ_l is the d-dimensional lattice Laplacian; γ = β, the inverse temperature, for spin systems, and γ = κ³, where κ is the hopping parameter, for QCD. W is a self-adjoint potential operator which may have non-local contributions but obeys the bound ‖W(x, y)‖ ≤ c exp(−a(‖x‖ + ‖y‖)), with a large: exp(−a) = (β/β₀)^(1/2) for spin models and κ/κ₀ for QCD. H₀, W, and H act in l²(Zᵈ), d ≥ 1. The spectrum of H below zero is known to be discrete and we obtain bounds on the number of states below zero. This number depends on the short-range properties of W, i.e., the long-range tail does not increase the number of states.
Lin, Chen; Reppert, Mike; Feng, Ximao; Jankowiak, Ryszard
2014-07-01
This work describes simple analytical formulas to describe the fluorescence line-narrowed (FLN) spectra of weakly coupled chromophores in the presence of excitation energy transfer (EET). Modeling studies for dimer systems (assuming low fluence and weak coupling) show that the FLN spectra (including absorption and emission spectra) calculated for various dimers using our model are in good agreement with spectra calculated by: (i) the simple convolution method and (ii) the more rigorous treatment using the Redfield approach [T. Renger and R. A. Marcus, J. Chem. Phys. 116, 9997 (2002)]. The calculated FLN spectra in the presence of EET of all three approaches are very similar. We argue that our approach provides a simplified and computationally more efficient description of FLN spectra in the presence of EET. This method also has been applied to FLN spectra obtained for the CP47 antenna complex of Photosystem II reported by Neupane et al. [J. Am. Chem. Soc. 132, 4214 (2010)], which indicated the presence of uncorrelated EET between pigments contributing to the two lowest energy (overlapping) exciton states, each mostly localized on a single chromophore. Calculated and experimental FLN spectra for CP47 complex show very good qualitative agreement.
Influence of Weak Base Addition to Hole-Collecting Buffer Layers in Polymer:Fullerene Solar Cells
Directory of Open Access Journals (Sweden)
Jooyeok Seo
2017-02-01
Full Text Available We report the effect of weak base addition to acidic polymer hole-collecting layers in normal-type polymer:fullerene solar cells. Varying amounts of the weak base aniline (AN) were added to solutions of poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The acidity of the aniline-added PEDOT:PSS solutions gradually decreased from pH = 1.74 (AN = 0 mol%) to pH = 4.24 (AN = 1.8 mol%). The electrical conductivity of the PEDOT:PSS-AN films did not change much with the pH value, while the ratio of conductivity between the out-of-plane and in-plane directions was dependent on the pH of the solutions. The highest power conversion efficiency (PCE) was obtained at pH = 2.52, even though all devices with the PEDOT:PSS-AN layers exhibited better PCE than those with the pristine PEDOT:PSS layers. Atomic force microscopy investigation revealed that the size of the PEDOT:PSS domains became smaller as the pH increased. The stability test for 100 h of illumination under one-sun conditions disclosed that the PCE decay was relatively slower for the devices with the PEDOT:PSS-AN layers than for those with pristine PEDOT:PSS layers.
Strengths and Weaknesses in a Human Rights-Based Approach to International Development
DEFF Research Database (Denmark)
Broberg, Morten; Sano, Hans-Otto
2017-01-01
The human rights based approach to development cooperation has found recent support from both development cooperation actors and NGOs active in developing countries. We set out to define this approach, how it is applied, and to identify its central agents and principal components. Through examples...
Church, Lewis
2010-01-01
This dissertation answers three research questions: (1) What are the characteristics of a combinatoric measure, based on the Average Search Length (ASL), that performs the same as a probabilistic version of the ASL?; (2) Does the combinatoric ASL measure produce the same performance result as the one that is obtained by ranking a collection of…
Fuguet, Elisabet; Ràfols, Clara; Bosch, Elisabeth; Rosés, Martí
2009-04-24
A new and fast method to determine acidity constants of monoprotic weak acids and bases by capillary zone electrophoresis, based on the use of an internal standard (a compound of similar nature and acidity constant to the analyte), has been developed. This method requires only two electrophoretic runs for the determination of an acidity constant: a first one at a pH where both analyte and internal standard are totally ionized, and a second one at another pH where both are partially ionized. Furthermore, the method is not pH dependent, so an accurate measure of the pH of the buffer solutions is not needed. The acidity constants of several phenols and amines have been measured using internal standards of known pKa, obtaining a mean deviation of 0.05 pH units from the literature values.
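The pH-independence of the internal-standard method can be seen from the Henderson–Hasselbalch relation for effective mobility: at the second pH, the unknown buffer pH cancels between analyte and standard. A hedged sketch for a monoprotic weak acid (the mobility values are illustrative, not from the paper):

```python
import math

def pka_from_internal_standard(pka_is, mu_a_full, mu_a_part, mu_is_full, mu_is_part):
    """pKa of a monoprotic weak acid from two CZE runs.

    Run 1 (high pH): analyte and internal standard fully ionized -> limiting
    mobilities mu_*_full. Run 2 (intermediate pH): both partially ionized
    -> mu_*_part. Both species see the same (unknown) pH, so the pH term
    cancels between the two Henderson-Hasselbalch expressions.
    """
    term_a = math.log10(mu_a_full / mu_a_part - 1.0)
    term_is = math.log10(mu_is_full / mu_is_part - 1.0)
    return pka_is + term_a - term_is

# Hypothetical mobilities (arbitrary units); internal standard of known pKa 9.95
print(pka_from_internal_standard(9.95, 3.0, 1.5, 3.2, 1.6))  # → 9.95
```

With equal degrees of ionization in run 2 (both mobility ratios equal 2 here), the analyte's pKa coincides with the standard's, as expected for a well-matched internal standard.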
Chasing equilibrium: measuring the intrinsic solubility of weak acids and bases.
Stuart, Martin; Box, Karl
2005-02-15
A novel procedure is described for rapid (20-80 min) measurement of intrinsic solubility values of organic acids, bases, and ampholytes. In this procedure, a quantity of substance was first dissolved at a pH where it exists predominantly in its ionized form, and then a precipitate of the neutral (un-ionized) species was formed by changing the pH. Subsequently, the rate of change of pH due to precipitation or dissolution was monitored and strong acid and base titrant were added to adjust the pH to discover its equilibrium conditions, and the intrinsic solubility of the neutral form of the compound could then be determined. The procedure was applied to a variety of monoprotic and diprotic pharmaceutical compounds. The results were highly repeatable and had a good correlation to available published values. Data collected during the procedure provided good diagnostic information. Kinetic solubility data were also collected but provided a poor guide to the intrinsic solubility.
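Once the pKa is known, the intrinsic solubility of the neutral form relates to the measured total solubility through the standard monoprotic-acid relation S_total = S₀(1 + 10^(pH − pKa)). A minimal sketch with hypothetical numbers (this is the underlying equilibrium relation, not the paper's pH-chasing procedure itself):

```python
def intrinsic_solubility_acid(s_total, ph, pka):
    """Intrinsic (neutral-form) solubility S0 of a monoprotic weak acid,
    from total solubility measured at a pH where ionization is partial:
    S_total = S0 * (1 + 10**(pH - pKa))."""
    return s_total / (1.0 + 10.0**(ph - pka))

# Hypothetical: total solubility 0.50 mg/mL at pH 5.0 for an acid of pKa 4.0
print(intrinsic_solubility_acid(0.50, 5.0, 4.0))  # ≈ 0.0455 mg/mL
```

At one pH unit above the pKa the acid is ~91% ionized, so the neutral-form solubility is roughly an eleventh of the total, illustrating why precipitation of the neutral species is forced by lowering the pH.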
Yao, Yiqing; Chen, Shulin; Kafle, Gopi Krishna
2017-05-15
Failure of methane yield is common for anaerobic digestion (AD) of "weak-acid/acid" wastes alone. In order to verify the importance of the pH of the materials on process performance and methane yield, a "weak-base" waste, poplar waste (PW), was used as the substrate of solid-state AD (SS-AD). The results show that PW could be used for efficient methane yield after NaOH treatment; the total methane yield was 81.1 L/kg volatile solids (VS). PW could also be used for anaerobic co-digestion with high-pH cattle slurry (CM). For the group with NaOH pretreatment, the time needed to reach a stable state was 2 days shorter than for the group without NaOH pretreatment. The maximal methane yield of 98.2 L/kg VS was obtained at a PW-to-CM (P/C) ratio of 1:1 with NaOH pretreatment, which was 21.1% (p < 0.05) higher in methane yield. The results indicate that PW alone could be used for efficient SS-AD methane yield after NaOH treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Shuyan Wang
2016-05-01
Full Text Available This paper proposes a new method to improve the resolution of the seismic signal and to compensate the energy of weak seismic signals based on matching pursuit. With a dictionary of Morlet wavelets, the matching pursuit algorithm can decompose a seismic trace into a series of wavelets. We abstract complex-trace attributes from analytical expressions to shrink the search ranges of amplitude, frequency and phase. In addition, considering the level of correlation between constituent wavelets and the average wavelet abstracted from well-seismic calibration, we can obtain the search range of scale, an important adaptive parameter that controls the width of the wavelet in time and its bandwidth in frequency. Hence, the efficiency of selecting proper wavelets is improved by first making a preliminary estimate and then refining a local selection range. After removal of noise wavelets, we integrate the useful wavelets, to which an adaptive spectral whitening technique is first applied. This approach can improve the resolution of the seismic signal and enhance the energy of weak wavelets simultaneously. Application results on real seismic data show that this method has good application prospects.
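The core greedy step of matching pursuit, repeatedly projecting the residual onto the best-matching dictionary atom and subtracting it, can be sketched as follows (the atom parameterization, dictionary grid, and test trace are illustrative assumptions, not the paper's adaptive scheme):

```python
import numpy as np

def morlet_atom(n, center, freq, scale):
    """Real Morlet-like atom with unit L2 norm (hypothetical parameterization)."""
    t = np.arange(n) - center
    atom = np.exp(-0.5 * (t / scale) ** 2) * np.cos(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy decomposition: at each step pick the atom with the largest
    inner product with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    selected = []
    for _ in range(n_iter):
        coeffs = atoms @ residual          # inner products with all atoms
        k = int(np.argmax(np.abs(coeffs)))
        selected.append((k, coeffs[k]))
        residual -= coeffs[k] * atoms[k]   # remove the chosen projection
    return selected, residual

# Small dictionary over a grid of centers and frequencies, plus a test trace
# built from two known atoms and weak noise
n = 256
rng = np.random.default_rng(0)
dictionary = np.array([morlet_atom(n, c, f, 12.0)
                       for c in range(16, n, 16) for f in (0.05, 0.1, 0.2)])
trace = 3.0 * dictionary[5] + 1.0 * dictionary[20] + 0.01 * rng.standard_normal(n)

picked, res = matching_pursuit(trace, dictionary, n_iter=5)
print(np.linalg.norm(res) < np.linalg.norm(trace))  # residual energy shrinks
```

Since the atoms have unit norm, each subtraction removes exactly the projected energy, so the residual norm decreases monotonically; the paper's contribution lies in shrinking the search grid adaptively rather than scanning a fixed one as here.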
Witteveen, Esther; Hoogland, Inge C M; Wieske, Luuk; Weber, Nina C; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2016-01-01
There are few reports of in vivo muscle strength measurements in animal models of ICU-acquired weakness (ICU-AW). In this study we investigated whether the Escherichia coli (E. coli) septic peritonitis mouse model may serve as an ICU-AW model using in vivo strength measurements and myosin/actin assays, and whether development of ICU-AW is age-dependent in this model. Young and old mice were injected intraperitoneally with E. coli and treated with ceftriaxone. Forelimb grip strength was measured at multiple time points, and the myosin/actin ratio in muscle was determined. E. coli administration was not associated with grip strength decrease, neither in young nor in old mice. In old mice, the myosin/actin ratio was lower in E. coli mice at t = 48 h and higher at t = 72 h compared with controls. This E. coli septic peritonitis mouse model did not induce decreased grip strength. In its current form, it seems unsuitable as a model for ICU-AW. © 2015 The Authors. Muscle & Nerve Published by Wiley Periodicals, Inc.
A weakly informative default prior distribution for logistic and other regression models
Gelman, Andrew; Jakulin, Aleks; Pittau, Maria Grazia; Su, Yu-Sung
2008-01-01
We propose a new prior distribution for classical (nonhierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-$t$ prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression.
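A minimal sketch of the recommended default — nonbinary predictors scaled to standard deviation 0.5, Cauchy(0, 2.5) priors on coefficients and a weaker Cauchy(0, 10) on the intercept — via MAP optimization (the simulated data and the optimizer choice are illustrative; the paper itself uses an approximate EM algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(beta, X, y, scale=2.5, intercept_scale=10.0):
    """Negative log posterior: logistic likelihood plus independent Cauchy
    priors. The first coefficient is the intercept (wider Cauchy(0, 10))."""
    z = X @ beta
    loglik = np.sum(y * z - np.logaddexp(0.0, z))  # stable Bernoulli log-lik
    scales = np.full(beta.shape, scale)
    scales[0] = intercept_scale
    logprior = -np.sum(np.log1p((beta / scales) ** 2))  # log Cauchy, up to const
    return -(loglik + logprior)

rng = np.random.default_rng(1)
n = 200
x_raw = rng.normal(2.0, 3.0, size=n)
# Recommended preprocessing: center, then scale to standard deviation 0.5
x = 0.5 * (x_raw - x_raw.mean()) / x_raw.std()
X = np.column_stack([np.ones(n), x])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))).astype(float)

fit = minimize(neg_log_posterior, np.zeros(2), args=(X, y), method="BFGS")
print(fit.x)  # MAP estimate of (intercept, slope)
```

The heavy Cauchy tails leave large coefficients essentially unpenalized while still regularizing toward zero, which is what keeps estimates finite under separation.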
Effects of weak and strong localization in tunnel characteristics of contacts on HTSC base
International Nuclear Information System (INIS)
Revenko, Yu.V.; Svistunov, V.M.; Grigut', O.V.; Belogolovskij, M.A.; Khachaturov, A.I.
1992-01-01
It is found that phenomena governed by electronic processes in the disordered normal surface layer of the material are observed in tunnel contacts based on metal-oxide superconductors of the 1-2-3 group. The measured characteristics σ(U) = dI/dU are determined both by the contact's barrier properties and by the conductivity in the disordered region of the metal oxides in the vicinity of the barrier. For the high-temperature superconductor contacts, σ(U) at high temperatures is determined by the Schottky barrier, and at low temperatures by activation processes of charge transfer over strongly localized states in the near-barrier region of the contact. The crossover towards a logarithmic dependence in the tunnel conductivity σ(U) of low-Ohmic junctions is attributed to the occurrence of 2D density-of-states conditions in the tunnel surface layers of the metal oxides.
Strengths and weaknesses of EST-based prediction of tissue-specific alternative splicing
Directory of Open Access Journals (Sweden)
Vingron Martin
2004-09-01
Full Text Available Abstract Background Alternative splicing contributes significantly to the complexity of the human transcriptome and proteome. Computational prediction of alternative splice isoforms is usually based on EST sequences, which also allow approximation of the expression pattern of the related transcripts. However, the limited number of tissues represented in the EST data as well as the different cDNA construction protocols may influence the predictive capacity of ESTs to unravel tissue-specifically expressed transcripts. Methods We predict tissue- and tumor-specific splice isoforms based on the genomic mapping (SpliceNest) of the EST consensus sequences and library annotation provided in the GeneNest database. We further ascertain the potentially rare tissue-specific transcripts as the ones represented only by ESTs derived from normalized libraries. A subset of the predicted tissue- and tumor-specific isoforms is then validated via RT-PCR experiments over a spectrum of 40 tissue types. Results Our strategy revealed 427 genes with at least one tissue-specific transcript as well as 1120 genes showing tumor-specific isoforms. While our experimental evaluation of computationally predicted tissue-specific isoforms revealed a high success rate in confirming the expression of these isoforms in the respective tissue, the strategy frequently failed to detect the expected restricted expression pattern. The analysis of putative lowly expressed transcripts using normalized cDNA libraries suggests that our ability to detect tissue-specific isoforms strongly depends on the expression level of the respective transcript as well as on the sensitivity of the experimental methods. Especially splice isoforms predicted to be disease-specific tend to represent transcripts that are expressed in a set of healthy tissues rather than novel isoforms. Conclusions We propose to combine the computational prediction of alternative splice isoforms with experimental validation for
Simioni, Stephan; Sidler, Rolf; Dual, Jürg; Schweizer, Jürg
2015-04-01
Avalanche control by explosives is among the key temporary preventive measures. Yet, little is known about the mechanism involved in releasing avalanches by the effect of an explosion. Here, we test the hypothesis that the stress induced by acoustic waves exceeds the strength of weak snow layers. Consequently the snow fails and the onset of rapid crack propagation might finally lead to the release of a snow slab avalanche. We performed experiments with explosive charges over a snowpack. We installed microphones above the snowpack to measure near-surface air pressure and accelerometers within three snow pits. We also recorded pit walls of each pit with high speed cameras to detect weak layer failure. Empirical relationships and a priori information from ice and air were used to characterize a porous layered model from density measurements of snow profiles in the snow pits. This model was used to perform two-dimensional numerical simulations of wave propagation in Biot-type porous material. Locations of snow failure were identified in the simulation by comparing the axial and deviatoric stress field of the simulation to the corresponding snow strength. The identified snow failure locations corresponded well with the observed failure locations in the experiment. The acceleration measured in the snowpack best correlated with the modeled acceleration of the fluid relative to the ice frame. Even though the near field of the explosion is expected to be governed by non-linear effects as for example the observed supersonic wave propagation in the air above the snow surface, the results of the linear poroelastic simulation fit well with the measured air pressure and snowpack accelerations. The results of this comparison are an important step towards quantifying the effectiveness of avalanche control by explosives.
International Nuclear Information System (INIS)
Goebel, M.
2011-09-01
In this thesis the global Standard Model (SM) fit to the electroweak precision observables is revisited with respect to the newest experimental results. Various consistency checks are performed, showing no significant deviation from the SM. The Higgs boson mass is estimated by the electroweak fit to be M_H = 94 +30/−24 GeV without any information from direct Higgs searches at LEP, Tevatron, and the LHC, and the result is M_H = 125 +8/−10 GeV when including the direct Higgs mass constraints. The strong coupling constant is extracted at fourth perturbative order as α_s(M_Z²) = 0.1194 ± 0.0028 (exp) ± 0.0001 (theo). From the fit including the direct Higgs constraints the effective weak mixing angle is determined indirectly to be sin²θ_eff^l = 0.23147 +0.00012/−0.00010. For the W mass the value M_W = 80.360 +0.012/−0.011 GeV is obtained indirectly from the fit including the direct Higgs constraints. The electroweak precision data is also exploited to constrain new-physics models by using the concept of oblique parameters. In this thesis the following models are investigated: models with a sequential fourth fermion generation, the inert-Higgs doublet model, the littlest Higgs model with T-parity conservation, and models with large extra dimensions. In contrast to the SM, in these models heavy Higgs bosons are in agreement with the electroweak precision data. The forward-backward asymmetry as a function of the invariant mass is measured for pp → Z/γ* → e⁺e⁻ events collected with the ATLAS detector at the LHC. The data taken in 2010 at a center-of-mass energy of √s = 7 TeV, corresponding to an integrated luminosity of 37.4 pb⁻¹, is analyzed. The measured forward-backward asymmetry is in agreement with the SM expectation. From the measured forward-backward asymmetry the effective weak mixing angle is extracted as sin²θ_eff^l = 0.2204 ± 0.0071 (stat) +0.0039/−0.0044 (syst). The impact of unparticles and large extra dimensions on the forward-backward asymmetry at large
Weak temperature dependence of ageing of structural properties in atomistic model glassformers
Jenkinson, Thomas; Crowther, Peter; Turci, Francesco; Royall, C. Patrick
2017-08-01
Ageing phenomena are investigated from a structural perspective in two binary Lennard-Jones glassformers, the Kob-Andersen and Wahnström mixtures. In both, the geometric motif assumed by the glassformer upon supercooling, the locally favoured structure (LFS), has been established. The Kob-Andersen mixture forms bicapped square antiprisms; the Wahnström model forms icosahedra. Upon ageing, we find that the structural relaxation time has a time-dependence consistent with a power law. However, the LFS population and potential energy increase and decrease, respectively, in a logarithmic fashion. Remarkably, over the time scales investigated, which correspond to a factor of 10⁴ change in relaxation times, the rate at which these quantities age appears almost independent of temperature. Only at temperatures far below the Vogel-Fulcher-Tammann temperature do the ageing dynamics slow.
Osborne, Hamish R; Quinlan, John F; Allison, Garry T
2012-01-01
Abstract Background Hip abduction weakness has never been documented on a population basis as a common finding in a healthy group of athletes and would not normally be found in an elite adolescent athlete. This study aimed to show that hip abduction weakness not only occurs in this group but also is common and easy to correct with an unsupervised home based program. Methods A prospective sports team cohort based study was performed with thirty elite adolescent under-17 Australian Rules Footba...
One loop electro-weak radiative corrections in the standard model
International Nuclear Information System (INIS)
Kalyniak, P.; Sundaresan, M.K.
1987-01-01
This paper reports on the effect of radiative corrections in the standard model. A sensitive test of the three-gauge-boson vertices is expected to come from the work at LEP II, in which the reaction e⁺e⁻ → W⁺W⁻ can occur. Two calculations of radiative corrections to the reaction e⁺e⁻ → W⁺W⁻ exist at present. The results of the calculations, although very similar, disagree with one another as to the actual magnitude of the correction. Some of the reasons for the disagreement are understood. However, for the reasons mentioned below, another look must be taken at these lengthy calculations to resolve the differences between the two previous calculations, and this is what is being done in the present work. There are a number of reasons why we must take another look at the calculation of the radiative corrections. The previous calculations were carried out before the UA1 and UA2 data on W and Z bosons were obtained. Experimental groups require a computer program which can readily calculate the radiative corrections ab initio for various experimental conditions. The normalization of sin²θ_W in the previous calculations was done in a way which is not convenient for use in the experimental work. It would be desirable to have the analytical expressions for the corrections available so that the renormalization-scheme dependence of the corrections could be studied.
Model analysis of molecular conformations in terms of weak interactions between non bonded atoms
International Nuclear Information System (INIS)
Lombardi, E.
1988-01-01
The aim of the present paper is to establish a reliable basis for the evaluation of stable conformations and rotational barriers for molecules, with possible applications to systems of biological interest. The analysis proceeds in two steps: first, the effect of the chemical environment on the orbitals of a given atom is studied for diatomic units, adopting a valence-bond approach and considering, as prototypes, the two simplest series of diatomic molecules with one valence electron each, i.e. the alkali diatomics and the alkali hydrides. In the model, the orbital of the hydrogen atom is represented by a simple ("1S") gaussian function, and the valence orbital of an alkali atom by a function (r² − a²) times a simple gaussian (a "2S" gaussian). Dissociation energies D_e and equilibrium distances R_e are calculated using a scanning procedure. Agreement with experiment is quantitative for the alkali diatomics. For alkali hydrides, good agreement is obtained only if the validity of a rule β_e R_e = constant for the two atoms separately is postulated; β_e is the characteristic parameter of a "1S" gaussian (hydrogen) or a "2S" gaussian (alkali atom) function. In the second step, the authors assume validity of the same rule in conformational analysis for any singly bonded A-B molecule with A = C, O, N, P, Si, Ge and B = H or a halogen atom. Gauge β_e values for H, F and C are obtained by fitting experimental rotational barriers in C₂H₆, C₂F₆ and C₃H₈. Stable conformations of, and barriers to rotation in, ethane-like rotors are determined, applying first-order exchange perturbation theory, in terms of two- and many-center exchange interactions in clusters of non-bonded atoms. Some 60 molecules are analyzed. Agreement with experiment is strikingly good except for a few systematic deviations. Reasons for such discrepancies are discussed.
Directory of Open Access Journals (Sweden)
Narges Neyazi
2016-06-01
Full Text Available Purpose: The objective of this research is to identify the weaknesses of undergraduate programs in terms of personnel and finances, organizational management and facilities in the view of faculty and library staff, and to determine factors that may facilitate program quality improvement. Methods: This is a descriptive analytical survey and, in terms of purpose, an applied evaluation study, in which the undergraduate groups of selected faculties (Public Health, Nursing and Midwifery, Allied Medical Sciences and Rehabilitation) at Tehran University of Medical Sciences (TUMS) were surveyed using the context-input-process-product model in 2014. The statistical population consisted of three subgroups: department heads (n=10), faculty members (n=61), and library staff (n=10), with a total population of 81 people. Data were collected through three researcher-made questionnaires based on Likert scales, and were then analyzed using descriptive and inferential statistics. Results: Results showed a desirable or relatively desirable situation for factors in the context, input, process, and product fields, except for the factors of administration and finances, and research and educational spaces and equipment, which were in an undesirable situation. Conclusion: Based on the results, the researchers highlighted weaknesses in the undergraduate programs of TUMS in terms of research and educational spaces and facilities, educational curriculum, and administration and finances, and recommended some steps in terms of finances, organizational management and communication with graduates in order to improve the quality of this system.
Litou, Chara; Vertzoni, Maria; Xu, Wei; Kesisoglou, Filippos; Reppas, Christos
2017-06-01
To propose media for simulating the intragastric environment under reduced gastric acid secretion in the fasted state at three levels of simulation of the gastric environment, and to evaluate their usefulness in evaluating the intragastric dissolution of salts of weak bases. To evaluate the importance of bicarbonate buffer in biorelevant in vitro dissolution testing when using Level II biorelevant media simulating the environment in the fasted upper small intestine, regardless of gastric acid secretion. Media for simulating hypochlorhydric and achlorhydric conditions in the stomach were proposed using phosphate, maleate and bicarbonate buffers. The impact of bicarbonates in Level II biorelevant media simulating the environment in the upper small intestine was evaluated while pH and bulk buffer capacity were maintained. Dissolution data were collected using two model compounds, pioglitazone hydrochloride and a semifumarate cocrystal of Compound B, and the mini-paddle dissolution apparatus in biorelevant media and in human aspirates. The simulated gastric fluids proposed in this study were in line with the pH, buffer capacity, pepsin content, total bile salt/lecithin content and osmolality of the fasted stomach under partial and under complete inhibition of gastric acid secretion. Fluids simulating the conditions under partial inhibition of acid secretion were useful in simulating concentrations of both model compounds in gastric aspirates. Bicarbonates in Level III biorelevant gastric media and in Level II biorelevant media simulating the composition of the upper intestinal lumen did not improve the simulation of concentrations in human aspirates. Level III biorelevant media for simulating the intragastric environment under hypochlorhydric conditions were proposed, and their usefulness in the evaluation of concentrations of two model salts of weak bases in gastric aspirates was shown. Level II biorelevant media for simulating the environment in upper intestinal lumen led to
Energy Technology Data Exchange (ETDEWEB)
Casalderrey-Solana, Jorge [Departament d' Estructura i Constituents de la Matèria and Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Gulhan, Doga Can [Laboratory for Nuclear Science and Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Milhano, José Guilherme [CENTRA, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal); Physics Department, Theory Unit, CERN, CH-1211 Genève 23 (Switzerland); Pablos, Daniel [Departament d' Estructura i Constituents de la Matèria and Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Rajagopal, Krishna [Laboratory for Nuclear Science and Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)
2016-12-15
Within a hybrid strong/weak coupling model for jets in strongly coupled plasma, we explore jet modifications in ultra-relativistic heavy ion collisions. Our approach merges the perturbative dynamics of hard jet evolution with the strongly coupled dynamics which dominates the soft exchanges between the fast partons in the jet shower and the strongly coupled plasma itself. We implement this approach in a Monte Carlo, which supplements the DGLAP shower with the energy loss dynamics as dictated by holographic computations, up to a single free parameter that we fit to data. We then augment the model by incorporating the transverse momentum picked up by each parton in the shower as it propagates through the medium, at the expense of adding a second free parameter. We use this model to discuss the influence of the transverse broadening of the partons in a jet on intra-jet observables. In addition, we explore the sensitivity of such observables to the back-reaction of the plasma to the passage of the jet.
Neustupa, Tomáš
2017-07-01
The paper presents a mathematical model of a steady two-dimensional viscous incompressible flow through a radial blade machine. The corresponding boundary value problem is studied in the rotating frame. We provide the classical and weak formulations of the problem. Using a special form of the so-called "artificial" or "natural" boundary condition on the outflow, we prove the existence of a weak solution for an arbitrarily large inflow.
The ΔS=1 weak chiral lagrangian as the effective theory of the chiral quark model
International Nuclear Information System (INIS)
Antonelli, V.; Bertolini, S.; Eeg, J.O.; Lashin, E.I.
1996-01-01
We use the chiral quark model to construct the complete O(p²) weak ΔS = 1 chiral lagrangian via the bosonization of the ten relevant operators of the effective quark lagrangian. The chiral coefficients are given in terms of f_π, the quark and gluon condensates, and the scale-dependent NLO Wilson coefficients of the corresponding operators; in addition, they depend on the constituent quark mass M, a parameter characteristic of the model. All contributions of order N_c² as well as N_c and α_s N_c are included. The γ₅-scheme dependence of the chiral coefficients, computed via dimensional regularization, and the Fierz transformation properties of the operator basis are discussed in detail. We apply our results to the evaluation of the hadronic matrix elements for the decays K → 2π, consistently including the renormalization induced by the meson loops. The effect of this renormalization is sizable and introduces a long-distance scale dependence that matches, in the physical amplitudes, the short-distance scale dependence of the Wilson coefficients. (orig.)
International Nuclear Information System (INIS)
Wang, Yudong; Liu, Li
2010-01-01
This paper extends the work in Tabak and Cajueiro (Are the crude oil markets becoming weakly efficient over time?, Energy Economics 29 (2007) 28-36) and Alvarez-Ramirez et al. (Short-term predictability of crude oil markets: a detrended fluctuation analysis approach, Energy Economics 30 (2008) 2645-2656). We test the efficiency of the WTI crude oil market by observing the dynamics of local Hurst exponents, using a rolling-window method based on multiscale detrended fluctuation analysis. Empirical results show that short-term, medium-term and long-term behaviors generally became more efficient over time. However, the results also show that the market did not evolve under stable conditions for long periods. A multiscale analysis is also implemented, based on multifractal detrended fluctuation analysis. We find that the small fluctuations of the WTI crude oil market were persistent, whereas the large fluctuations were highly unstable, in both the short and the long term. We extend the discussion by incorporating arguments from the crude oil market structure to explain the different correlation dynamics. (author)
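The detrended fluctuation analysis underlying this approach can be sketched in a few lines. The following minimal first-order DFA (synthetic data; the scale choices are assumptions, not those of the paper) estimates a Hurst-type exponent, where H ≈ 0.5 indicates weak-form efficiency:

```python
import numpy as np

def dfa_hurst(series, scales=(8, 16, 32, 64)):
    """First-order DFA: the slope of log F(s) vs log s estimates the Hurst exponent."""
    profile = np.cumsum(series - np.mean(series))         # integrated series
    flucts = []
    for s in scales:
        rms = []
        for i in range(len(profile) // s):                # non-overlapping segments
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
h = dfa_hurst(rng.standard_normal(4096))  # uncorrelated returns give H near 0.5
```

A rolling local exponent as in the paper is then obtained by applying `dfa_hurst` to successive windows of the return series.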
Kiss, S.; Sarfraz, M.
2004-01-01
Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling
Kiss, S.; Banissi, E.; Khosrowshahi, F.; Sarfraz, M.; Ursyn, A.
2001-01-01
Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling
A new approach for power quality improvement of DFIG based wind farms connected to weak utility grid
Directory of Open Access Journals (Sweden)
Hossein Mahvash
2017-09-01
Full Text Available Most power quality problems of grid-connected doubly fed induction generators (DFIGs) with wind turbines include flicker, variations of the voltage RMS profile, and harmonics injected by switching in the DFIG converters. The flicker phenomenon is the most important problem in wind power systems. This paper describes an effective method for mitigating flicker emission and improving power quality for a fairly weak grid connected to a wind farm with DFIGs. The method was applied in the rotor-side converter (RSC) of the DFIG to control the output reactive power. The q-axis reference current was derived directly from the mathematical relation between the rotor q-axis current and the DFIG output reactive power, without using a PI controller. To extract the reference reactive power, a stator voltage control loop with a droop coefficient was proposed to regulate the grid voltage level in each operating condition. The DFIG output active power was separately controlled in the d-axis considering stator voltage orientation control (SVOC). Different simulations were carried out on the test system, and the flicker short-term severity index (Pst) was calculated for each case study using the discrete flickermeter model according to the IEC 61400 standard. The obtained results validated flicker mitigation and power quality enhancement for the grid.
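The droop-based extraction of the reactive-power reference described above can be sketched as follows. This is a simplified illustration only: the gains, limits, and the algebraic stator-flux relation are assumptions, and sign conventions differ between DFIG models.

```python
def q_reference(v_meas, v_nom=1.0, droop_gain=20.0, q_max=0.4):
    """Reactive-power reference (p.u.) from a stator-voltage droop loop:
    Q_ref = k * (V_nom - V), clamped to an assumed converter rating."""
    q = droop_gain * (v_nom - v_meas)
    return max(-q_max, min(q_max, q))

def rotor_iq_reference(q_ref, v_s=1.0, l_m=3.0, l_s=3.1):
    """q-axis rotor current reference from an algebraic relation between rotor
    current and stator reactive power (no PI controller), sketched here as
    Q_s ~ -1.5 * v_s * (l_m / l_s) * i_qr with the magnetizing term omitted."""
    return -q_ref * l_s / (1.5 * v_s * l_m)

iq = rotor_iq_reference(q_reference(0.95))  # sagging grid voltage -> inject vars
```

The point of the direct algebraic mapping is that the reactive-power loop reacts within one control step, which is what limits the voltage fluctuations measured by the flickermeter.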
Directory of Open Access Journals (Sweden)
Dagny Butler
2013-10-01
Full Text Available We study the positive solutions to the steady state reaction diffusion equations with Dirichlet boundary conditions of the form $$-u''=\begin{cases}\lambda[u-\frac{1}{K}u^2-c\frac{u^2}{1+u^2}], & x\in(L,1-L),\\ \lambda[u-\frac{1}{K}u^2], & x\in(0,L]\cup[1-L,1),\end{cases}\quad u(0)=0,\ u(1)=0$$ and $$-u''=\begin{cases}\lambda[u(u+1)(b-u)-c\frac{u^2}{1+u^2}], & x\in(L,1-L),\\ \lambda[u(u+1)(b-u)], & x\in(0,L]\cup[1-L,1),\end{cases}\quad u(0)=0,\ u(1)=0.$$ Here, $\lambda, b, c, K, L$ are positive constants with 0
Energy Technology Data Exchange (ETDEWEB)
Zhuk, Alexander [The International Center of Future Science of the Jilin University, Changchun City (China); Odessa National University, Astronomical Observatory, Odessa (Ukraine); Chopovsky, Alexey; Fakhr, Seyed Hossein [Odessa National University, Astronomical Observatory, Odessa (Ukraine); Shulga, Valerii [The International Center of Future Science of the Jilin University, Changchun City (China); Institut of Radio Astronomy of National Academy of Sciences of Ukraine, Kharkov (Ukraine); Wei, Han [The International Center of Future Science of the Jilin University, Changchun City (China)
2017-11-15
In a multidimensional Kaluza-Klein model with Ricci-flat internal space, we study the gravitational field in the weak-field limit. This field is created by two coupled sources. First, there is a point-like massive body which has a dust-like equation of state in the external space and an arbitrary parameter Ω of the equation of state in the internal space. The second source is a static, spherically symmetric massive scalar field centered at the origin where the point-like massive body is. The perturbed metric coefficients found are used to calculate the parameterized post-Newtonian (PPN) parameter γ. We determine under which conditions γ can be very close to unity in accordance with the relativistic gravitational tests in the solar system. This can take place for both massive and massless scalar fields. For example, to have γ ∼ 1 in the solar system, the mass of the scalar field should be μ ≳ 5.05 x 10⁻⁴⁹ g ≈ 2.83 x 10⁻¹⁶ eV. In all cases, we arrive at the same conclusion: to be in agreement with the relativistic gravitational tests, the gravitating mass should have tension Ω = -1/2. (orig.)
Lind, O; Delhey, K
2015-03-01
Birds have sophisticated colour vision mediated by four cone types that cover a wide visual spectrum including ultraviolet (UV) wavelengths. Many birds have modest UV sensitivity provided by violet-sensitive (VS) cones with sensitivity maxima between 400 and 425 nm. However, some birds have evolved higher UV sensitivity and a larger visual spectrum given by UV-sensitive (UVS) cones maximally sensitive at 360-370 nm. The reasons for VS-UVS transitions and their relationship to visual ecology remain unclear. It has been hypothesized that the evolution of UVS-cone vision is linked to plumage colours so that visual sensitivity and feather coloration are 'matched'. This leads to the specific prediction that UVS-cone vision enhances the discrimination of plumage colours of UVS birds while such an advantage is absent or less pronounced for VS-bird coloration. We test this hypothesis using knowledge of the complex distribution of UVS cones among birds combined with mathematical modelling of colour discrimination during different viewing conditions. We find no support for the hypothesis, which, combined with previous studies, suggests only a weak relationship between UVS-cone vision and plumage colour evolution. Instead, we suggest that UVS-cone vision generally favours colour discrimination, which creates a nonspecific selection pressure for the evolution of UVS cones. © 2015 European Society For Evolutionary Biology.
Schwartz, Matthias; Meyer, Björn; Wirnitzer, Bernhard; Hopf, Carsten
2015-03-01
Conventional mass spectrometry image preprocessing methods used for denoising, such as the Savitzky-Golay smoothing or discrete wavelet transformation, typically do not only remove noise but also weak signals. Recently, memory-efficient principal component analysis (PCA) in conjunction with random projections (RP) has been proposed for reversible compression and analysis of large mass spectrometry imaging datasets. It considers single-pixel spectra in their local context and consequently offers the prospect of using information from the spectra of adjacent pixels for denoising or signal enhancement. However, little systematic analysis of key RP-PCA parameters has been reported so far, and the utility and validity of this method for context-dependent enhancement of known medically or pharmacologically relevant weak analyte signals in linear-mode matrix-assisted laser desorption/ionization (MALDI) mass spectra has not been explored yet. Here, we investigate MALDI imaging datasets from mouse models of Alzheimer's disease and gastric cancer to systematically assess the importance of selecting the right number of random projections k and of principal components (PCs) L for reconstructing reproducibly denoised images after compression. We provide detailed quantitative data for comparison of RP-PCA-denoising with the Savitzky-Golay and wavelet-based denoising in these mouse models as a resource for the mass spectrometry imaging community. Most importantly, we demonstrate that RP-PCA preprocessing can enhance signals of low-intensity amyloid-β peptide isoforms such as Aβ1-26 even in sparsely distributed Alzheimer's β-amyloid plaques and that it enables enhanced imaging of multiply acetylated histone H4 isoforms in response to pharmacological histone deacetylase inhibition in vivo. We conclude that RP-PCA denoising may be a useful preprocessing step in biomarker discovery workflows.
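The compression-and-reconstruction step can be sketched with a randomized (random-projection) PCA of the pixels-by-m/z matrix. This is an illustrative reimplementation on synthetic data, not the authors' code; `k` and `L` play the roles of the parameters discussed above:

```python
import numpy as np

def rp_pca_denoise(X, k=32, L=4, seed=0):
    """Denoise X (pixels x m/z bins) by sketching its range with k random
    projections, then keeping the L leading principal components."""
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)
    Xc = X - mu
    R = rng.standard_normal((Xc.shape[1], k))   # random projection matrix
    Q, _ = np.linalg.qr(Xc @ R)                 # orthonormal basis of the sketch
    B = Q.T @ Xc                                # small k x (m/z bins) matrix
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U[:, :L]) * s[:L] @ Vt[:L] + mu # rank-L reconstruction

# Synthetic rank-2 "image" plus noise stands in for a MALDI dataset.
rng = np.random.default_rng(1)
signal = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 300))
noisy = signal + 0.1 * rng.standard_normal((200, 300))
denoised = rp_pca_denoise(noisy)
```

In this toy setting the reconstruction error of `denoised` against `signal` is far below that of `noisy`; choosing `k` or `L` too small truncates real low-intensity peaks, which is exactly the trade-off the study quantifies.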
Czech Academy of Sciences Publication Activity Database
Ehala, Sille; Grishina, Anastasia; Sheshenev, Andrey; Lyapkalo, Ilya; Kašička, Václav
2010-01-01
Roč. 1217, - (2010), s. 8048-8053 ISSN 0021-9673 R&D Projects: GA ČR(CZ) GA203/08/1428; GA ČR(CZ) GA203/09/0675 Institutional research plan: CEZ:AV0Z40550506 Keywords : acidity constant * capillary zone electrophoresis * zwitterionic heterocyclic bases Subject RIV: CC - Organic Chemistry Impact factor: 4.194, year: 2010
Diaz-Torres, Alexis
2011-04-01
A self-contained Fortran-90 program based on a three-dimensional classical dynamical reaction model with stochastic breakup is presented, which is a useful tool for quantifying complete and incomplete fusion, and breakup in reactions induced by weakly-bound two-body projectiles near the Coulomb barrier. The code calculates (i) integrated complete and incomplete fusion cross sections and their angular momentum distribution, (ii) the excitation energy distribution of the primary incomplete-fusion products, (iii) the asymptotic angular distribution of the incomplete-fusion products and the surviving breakup fragments, and (iv) breakup observables, such as angle, kinetic energy and relative energy distributions. Program summary. Program title: PLATYPUS. Catalogue identifier: AEIG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 332 342. No. of bytes in distributed program, including test data, etc.: 344 124. Distribution format: tar.gz. Programming language: Fortran-90. Computer: Any Unix/Linux workstation or PC with a Fortran-90 compiler. Operating system: Linux or Unix. RAM: 10 MB. Classification: 16.9, 17.7, 17.8, 17.11. Nature of problem: The program calculates a wide range of observables in reactions induced by weakly-bound two-body nuclei near the Coulomb barrier. These include integrated complete and incomplete fusion cross sections and their spin distribution, as well as breakup observables (e.g. the angle, kinetic energy, and relative energy distributions of the fragments). Solution method: All the observables are calculated using a three-dimensional classical dynamical model combined with the Monte Carlo sampling of probability-density distributions. See Refs. [1,2] for further details. Restrictions: The
Stigson, Helena; Hill, Julian
2009-10-01
The objective of this study was to evaluate a model for a safe road transport system, based on some safety performance indicators regarding the road user, the vehicle, and the road, by using crashes with fatally and seriously injured car occupants. The study also aimed to evaluate whether the model could be used to identify system weaknesses and components (road user, vehicles, and road) where improvements would yield the highest potential for further reductions in serious injuries. Real-life car crashes with serious injury outcomes (Maximum Abbreviated Injury Scale 2+) were classified according to the vehicle's safety rating by Euro NCAP (European New Car Assessment Programme) and whether the vehicle was fitted with ESC (Electronic Stability Control). For each crash, the road was also classified according to EuroRAP (European Road Assessment Programme) criteria, and human behavior in terms of speeding, seat belt use, and driving under the influence of alcohol/drugs. Each crash was compared and classified according to the model criteria. Crashes where the safety criteria were not met in more than one of the 3 components were reclassified to identify whether all the components were correlated to the injury outcome. In-depth crash injury data collected by the UK On The Spot (OTS) accident investigation project was used in this study. All crashes in the OTS database occurring between 2000 and 2005 with a car occupant with injury rated MAIS2+ were included, for a total of 101 crashes with 120 occupants. It was possible to classify 90 percent of the crashes according to the model. Eighty-six percent of the occupants were injured when more than one of the 3 components were noncompliant with the safety criteria. These cases were reclassified to identify whether all of the components were correlated to the injury outcome. In 39 of the total 108 cases, at least two components were still seen to interact. The remaining cases were only related to one of the safety criteria
Model Based Temporal Reasoning
Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.
1988-03-01
Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
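A toy sketch of this compression (hypothetical models and data, for illustration only): instead of testing all 2^n subsets of observations for mutual consistency, each observation is assigned to a compatible model in a single linear pass, and observations are then related whenever they share a model:

```python
def assign_to_models(points, models, tol=1.0):
    """Group (time, value) observations by the first model that explains them."""
    groups = {name: [] for name in models}
    for t, y in points:
        for name, predict in models.items():
            if abs(predict(t) - y) <= tol:  # observation compatible with model
                groups[name].append((t, y))
                break
    return groups

# Hypothetical track data: three readings fit a steady model, one fits a ramp.
models = {"steady": lambda t: 5.0, "ramp": lambda t: 2.0 * t}
points = [(0, 5.1), (1, 4.9), (2, 4.2), (3, 6.1)]
groups = assign_to_models(points, models)  # O(n * #models), not O(2^n)
```

An observation that fits no model would land in no group, which is precisely the "out of the norm" signal the passage describes.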
Xiao, Xian; Sun, Juanzhen; Chen, Mingxuan; Qie, Xiushu; Wang, Yingchun; Ying, Zhuming
2017-03-01
The metropolis of Beijing in China is located on a plain adjacent to high mountains to its northwest and the gulf of the Bohai Sea to its southeast. One of the most challenging forecast problems for Beijing is to predict whether thunderstorms initiating over the mountains will propagate to the adjacent plains and intensify. In this study, 18 warm-season convective cases between 2008 and 2013 initiating on the mountains and intensifying on the plains under weak synoptic forcing were analyzed to gain an understanding of their characteristics. The statistical analysis was based on mosaic reflectivity data from six operational Doppler radars and reanalysis data produced by the Four-Dimensional Variational Doppler Radar Analysis System (VDRAS). The analysis of the radar reflectivity data shows that convective precipitation strengthened on the plains at certain preferred locations. To investigate the environmental conditions favoring the strengthening of the mountain-to-plain convective systems, statistical diagnoses of the rapid-update (12 min) 3 km reanalyses from VDRAS for the 18 cases were performed by computing the horizontal and temporal means of convective available potential energy, convective inhibition, vertical wind shear, and low-level wind for the plain and mountain regions separately. The results were compared with those from a baseline representing the warm-season average and from a set of null cases; considerable differences in these fields were found among the three data sets. The mean distributions of the VDRAS reanalysis fields were also examined. The results suggest that the convergence between the low-level outflows associated with cold pools and the south-southeasterly environmental flows corresponds well with the preferred locations of convective intensification on the plains.
Evidence of weak pair coupling in the penetration depth of Bi-based high-T_c superconductors
International Nuclear Information System (INIS)
Thompson, J.R.; Sun, Yang Ren; Ossandon, J.G.; Christen, D.K.; Chakoumakos, B.C.; Sales, B.C.; Kerchner, H.R.; Sonder, E.
1990-01-01
The magnetic penetration depth λ(T) has been investigated in Bi(Pb)SrCaCuO high-T_c compounds having 2 and 3 layers of copper-oxygen per unit cell. Studies of the magnetization in the vortex state were employed and the results were compared with weak- and strong-coupling calculations. The temperature dependence of λ is described well by BCS theory in the clean limit, giving evidence for weak pair coupling in this family of materials. For the short component of the λ tensor, we obtain values of 292 and 220 nm (T = 0) for Bi-2212 and (BiPb)-2223, respectively
International Nuclear Information System (INIS)
Scadron, M.D.; Visinescu, M.
1983-01-01
By employing the current-algebra-PCAC (partial conservation of axial-vector current) program at the hadron level, the three decays Ω⁻ → Ξ⁰π⁻, Ξ⁻π⁰, ΛK⁻ are reasonably described in terms of only one fitted (ΔI = 1/2)/(ΔI = 3/2) parameter of the expected small 6% magnitude. The other parameters needed in the analysis, the baryon octet and decuplet weak transition matrix elements, are completely constrained from B → B'π weak decays and independently from the quark model. The Σ⁺ → pγ radiative decay amplitude and asymmetry parameters are then determined in terms of no free parameters
DEFF Research Database (Denmark)
Rendal, Cecilie; Kusk, Kresten Ole; Trapp, Stefan
2011-01-01
, and therefore a higher toxicity can be expected. The current study examines the pHdependent toxicity and bioaccumulation of the bivalent weak base chloroquine (pKa: 10.47 and 6.33, log KOW 4.67) tested on Salix viminalis (basket willow) and Daphnia magna (water flea). The transpiration rates of hydroponically...
International Nuclear Information System (INIS)
Sharma, B.K.; Mohanakrishnan, G.; Anand Babu, C.; Krishna Prabhu, R.
2008-01-01
Experiments were undertaken to study the feasibility of using a weakly basic anion exchange resin for the enrichment of boron isotopes by ion exchange chromatography with water as the eluent. The results of experiments carried out to determine the total chloride capacity (TCC) and strong base capacity (SBC) of the resin at different concentrations of boric acid, together with the enrichment profiles, are reported in this paper. (author)
Electromagnetic current in weak interactions
International Nuclear Information System (INIS)
Ma, E.
1983-01-01
In gauge models which unify weak and electromagnetic interactions, the weak neutral-current interaction also involves the electromagnetic current. The exact nature of such a component can be explored using e⁺e⁻ experimental data. In recent years, the existence of a new component of the weak interaction has become firmly established, i.e., the neutral-current interaction. As such, it competes with the electromagnetic interaction whenever the particles involved are also charged, but at a very much lower rate because its effective strength is so small. Hence neutrino processes are best for the detection of the neutral-current interaction. However, in any gauge model which unifies weak and electromagnetic interactions, the weak neutral-current interaction also involves the electromagnetic current
Gritti, Fabrice; Guiochon, Georges
2009-03-06
The overloaded band profiles of five acido-basic compounds were measured using weakly buffered mobile phases. Low buffer concentrations were selected to provide a better understanding of the band profiles recorded in LC/MS analyses, which are often carried out at low buffer concentrations. In this work, 10 μL samples of a 50 mM probe solution were injected into C(18)-bonded columns using a series of five buffered mobile phases at (SW)pH between 2 and 12. The retention times and the shapes of the bands were analyzed based on thermodynamic arguments. A new adsorption model, which takes into account the simultaneous adsorption of the acidic and the basic species onto the endcapped adsorbent, accurately predicts the complex experimental profiles recorded. The adsorption mechanism of acido-basic compounds onto RPLC phases seems to be consistent with the following microscopic model. No matter whether the neutral species is the acid or the base, it adsorbs onto a large number of weak adsorption sites (their saturation capacity is several tens of g/L and their equilibrium constant of the order of 0.1 L/g). In contrast, the ionic species adsorbs strongly onto fewer active sites (their saturation capacity is about 1 g/L and their equilibrium constant of the order of a few L/g). From a microscopic point of view, and in agreement with the adsorption isotherm of the compound measured by frontal analysis (FA) and with the results of Monte-Carlo calculations performed by Schure et al., the first type of adsorption sites is most likely located between the C(18)-bonded chains, and the second type of adsorption sites is located deeper, in contact with the silica surface. The injected concentration (50 mM) was too low to probe the weakest adsorption sites (saturation capacity of a few hundred g/L with an equilibrium constant of about 0.01 L/g) that are located at the very interface between the C(18)-bonded layer and the bulk phase.
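The two-site picture above corresponds to a bi-Langmuir isotherm. A minimal sketch follows; the numbers are just the orders of magnitude quoted in the text, not fitted values:

```python
def bi_langmuir(c, qs_weak=50.0, b_weak=0.1, qs_strong=1.0, b_strong=5.0):
    """Adsorbed amount q (g/L) at mobile-phase concentration c (g/L):
    a high-capacity, low-energy site population (neutral species) plus a
    low-capacity, high-energy site population (ionic species)."""
    weak = qs_weak * b_weak * c / (1.0 + b_weak * c)
    strong = qs_strong * b_strong * c / (1.0 + b_strong * c)
    return weak + strong
```

The strong sites saturate at much lower concentrations than the weak ones, so the isotherm curvature changes with load; that asymmetry between the two terms is what produces the complex overloaded band shapes the model reproduces.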
Understanding the influence of pH on uptake and accumulation of ionizable pharmaceuticals by fish was recently identified as a major research need. In the present study, fathead minnows were exposed to diphenhydramine (DPH), a weakly basic pharmaceutical (pKa = 9.1). Fish were ...
Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama
2010-11-01
Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible
Weak interactions of elementary particles
Okun, Lev Borisovich
1965-01-01
International Series of Monographs in Natural Philosophy, Volume 5: Weak Interaction of Elementary Particles focuses on the composition, properties, and reactions of elementary particles and high energies. The book first discusses elementary particles. Concerns include isotopic invariance in the Sakata model; conservation of fundamental particles; scheme of isomultiplets in the Sakata model; universal, unitary-symmetric strong interaction; and universal weak interaction. The text also focuses on spinors, amplitudes, and currents. Wave function, calculation of traces, five bilinear covariants,
Czech Academy of Sciences Publication Activity Database
Neustupa, Jiří; Penel, P.
2012-01-01
Roč. 350, 11-12 (2012), s. 597-602 ISSN 1631-073X R&D Projects: GA ČR GA201/08/0012 Institutional support: RVO:67985840 Keywords : Navier-Stokes equations * weak solution * regularity Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2012 http://www.sciencedirect.com/science/article/pii/S1631073X12001926#
Rowe, Sidney E.
2010-01-01
In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.
International Nuclear Information System (INIS)
Androic, D.; Armstrong, D. S.; Asaturyan, A.; Averett, T.; Balewski, J.; Beaufait, J.; Beminiwattha, R. S.; Benesch, J.; Benmokhtar, F.; Birchall, J.; Carlini, R. D.; Cornejo, J. C.; Covrig, S.; Dalton, M. M.; Davis, C. A.; Deconinck, W.; Diefenbach, J.; Dow, K.; Dowd, J. F.; Dunne, J. A.
2013-01-01
In May 2012, the Q_weak collaboration completed a two-year measurement program to determine the weak charge of the proton, Q_W^p = (1 − 4 sin²θ_W), at the Thomas Jefferson National Accelerator Facility (TJNAF). The experiment was designed to produce a 4.0% measurement of the weak charge, via a 2.5% measurement of the parity-violating asymmetry in the number of elastically scattered 1.165 GeV electrons from protons at forward angles. At the proposed precision, the experiment would produce a 0.3% measurement of the weak mixing angle at a momentum transfer of Q² = 0.026 GeV², making it the most precise stand-alone measurement of the weak mixing angle at low momentum transfer. In combination with other parity measurements, Q_weak will also provide a high-precision determination of the weak charges of the up and down quarks. At the proposed precision, a significant deviation from the Standard Model prediction could be a signal of new physics at mass scales up to ≈ 6 TeV, whereas agreement would place new and significant constraints on possible Standard Model extensions at mass scales up to ≈ 2 TeV. This paper provides an overview of the physics and the experiment, as well as a brief look at some preliminary diagnostic and analysis data.
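At tree level the measured quantity follows from the weak mixing angle alone, and the quoted precisions can be cross-checked by simple error propagation. The sketch below is illustrative only: the value of sin²θ_W is an assumed round number, not the Q_weak result.

```python
import math

def weak_charge(sin2_theta_w):
    """Tree-level weak charge of the proton: Q_W^p = 1 - 4 sin^2(theta_W)."""
    return 1.0 - 4.0 * sin2_theta_w

# Illustrative input (assumed value, not the Q_weak measurement):
sin2 = 0.231                 # weak mixing angle, roughly its Z-pole value
qw = weak_charge(sin2)       # small, because sin2 is close to 1/4

# Error propagation: d(Q_W) = 4 d(sin^2), so the relative error on sin^2 is
# the relative error on Q_W suppressed by the factor Q_W / (4 sin^2).
rel_qw = 0.040               # the proposed 4.0% weak-charge measurement
d_sin2 = rel_qw * qw / 4.0
rel_sin2 = d_sin2 / sin2
print(f"Q_W^p ~ {qw:.4f}, relative error on sin^2(theta_W) ~ {100 * rel_sin2:.2f}%")
```

The suppression factor Q_W/(4 sin²θ_W) is why a 4% weak-charge measurement yields a roughly 0.3% weak-mixing-angle determination, as quoted in the abstract.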
Fairchild, Ian J.; Tuckwell, George W.; Baker, Andy; Tooth, Anna F.
2006-04-01
A better knowledge of dripwater hydrology in karst systems is needed to understand the palaeoclimate implications of temporal variations in Mg/Ca and Sr/Ca of calcareous cave deposits. Quantitative modelling of drip hydrology and hydrochemistry was undertaken at a disused limestone mine (Brown's Folly Mine) in SW England overlain by 15 m of poorly karstified Jurassic limestones, with sub-vertical fracturing enhanced by proximity to an escarpment. Discharge was monitored at 15 sites intermittently from the beginning of 1996, and every 10-20 days from late 1996 to early 1998. Samples for hydrochemical parameters (pH, alkalinity, cations, anions, fluorescence) were taken corresponding to a sub-set of these data and supplemented by bedrock and soil sampling, limited continuously logged discharge, and soil water observations. Three sites, covering the range of discharge (approximately 1 μL s -1 to 1 ml s -1 maximum discharge) and hydrochemical behaviours, were studied in more detail. A quantitative flow model was constructed, based on two parallel unit hydrographs: responsive and relatively unresponsive to discharge events, respectively. The linear response and conservative mixing assumptions of the model were tested with hydrogeochemical data. Dripwaters at many of the sites are characterized by evidence of prior calcite precipitation in the flowpath above the mine, which in the higher discharging sites diminishes at high flow. Also at low flow rates, dripwaters may access seepage reservoirs enriched in Mg and/or Sr, dependent on the site. The discharge at all three sites can be approximated by the flow model, but in each case, hydrochemical data show violations of the model assumptions. All sites show evidence of non-conservative mixing, and there are temporal discontinuities in behaviour, which may be stimulated by airlocks generated at low flow. Enhanced Mg/Ca and Sr/Ca often do relate to low-flow conditions, but the relationships between climate and hydrogeochemical
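The two-component flow model described above can be sketched as two parallel linear-reservoir unit hydrographs, one responsive and one relatively unresponsive, convolved with the same recharge series. All decay constants and the mixing fraction below are hypothetical, not the fitted Brown's Folly Mine values.

```python
import math

def exp_hydrograph(k, n):
    """Discrete unit hydrograph of a linear reservoir with decay constant k (per step)."""
    h = [math.exp(-k * t) for t in range(n)]
    s = sum(h)
    return [v / s for v in h]            # normalise so total response = total input

def simulate_discharge(recharge, k_fast=0.8, k_slow=0.05, frac_fast=0.6):
    """Two parallel unit hydrographs: a responsive (fast) and an unresponsive (slow) path."""
    n = len(recharge)
    h_fast = exp_hydrograph(k_fast, n)
    h_slow = exp_hydrograph(k_slow, n)
    q = []
    for t in range(n):
        total = 0.0
        for tau in range(t + 1):         # discrete convolution with both pathways
            total += recharge[tau] * (frac_fast * h_fast[t - tau]
                                      + (1 - frac_fast) * h_slow[t - tau])
        q.append(total)
    return q

# A single recharge pulse: the fast path gives the event response,
# the slow path the long baseflow tail.
q = simulate_discharge([1.0] + [0.0] * 49)
```

Because both hydrographs are normalised, the model conserves mass: the total simulated discharge equals the total recharge input.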
Energy Technology Data Exchange (ETDEWEB)
Kawaguchi, Hiroshi; Hirano, Yoshiyuki; Kershaw, Jeff; Yoshida, Eiji [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan); Shiraishi, Takahiro [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan); Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba (Japan); Suga, Mikio [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan); Center for Frontier Medical Engineering, Chiba University (Japan); Obata, Takayuki [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan); Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba (Japan); Ito, Hiroshi; Yamaya, Taiga [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan)
2014-07-29
In recent work, we proposed an MRI-based attenuation-coefficient (μ-value) estimation method that uses a weak fixed-position external radiation source to construct an attenuation map for PET/MRI. In this presentation we refer to this method as FixER, and perform a series of simulations to investigate the duration of the transmission scan required to accurately estimate μ-values.
Weak Deeply Virtual Compton Scattering
Energy Technology Data Exchange (ETDEWEB)
Ales Psaker; Wolodymyr Melnitchouk; Anatoly Radyushkin
2007-03-01
We extend the analysis of the deeply virtual Compton scattering process to the weak interaction sector in the generalized Bjorken limit. The virtual Compton scattering amplitudes for the weak neutral and charged currents are calculated at the leading twist within the framework of the nonlocal light-cone expansion via coordinate space QCD string operators. Using a simple model, we estimate cross sections for neutrino scattering off the nucleon, relevant for future high intensity neutrino beam facilities.
Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in...
A WEAK ALKALI BOND IN (N,K)–A–S–H GELS: EVIDENCE FROM LEACHING AND MODELING
Directory of Open Access Journals (Sweden)
FRANTIŠEK ŠKVÁRA
2012-12-01
Full Text Available The alkali bond in (N,K)–A–S–H gels presents an as yet insufficiently resolved issue with significant consequences for efflorescence in alkali-activated materials. A series of experiments shows nearly all alkalis are leachable from alkali-activated fly-ash and metakaolin in excess deionized water. A diffusion-based model describes the alkali leaching process well. Negligible changes of the (N,K)–A–S–H gel nanostructure indicate that Na and K do not form the gel backbone and that H3O+ is probably the easiest substitute for the leached alkalis. Small changes in the long-term compressive strength of leached specimens support this hypothesis.
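A minimal version of such a diffusion-based leaching model is the classical series solution for diffusion out of a plane sheet (Crank); the diffusivity and specimen half-thickness below are hypothetical placeholders, not the fitted values from the experiments.

```python
import math

def fraction_leached(D, L, t, terms=200):
    """Fraction of alkali leached from a plane sheet of half-thickness L after time t,
    from the series solution of the 1D diffusion equation:
        F(t) = 1 - (8/pi^2) * sum_n exp(-(2n+1)^2 pi^2 D t / (4 L^2)) / (2n+1)^2
    """
    s = 0.0
    for n in range(terms):
        m = 2 * n + 1
        s += math.exp(-m * m * math.pi ** 2 * D * t / (4 * L * L)) / (m * m)
    return 1.0 - (8.0 / math.pi ** 2) * s

# Hypothetical parameters: D in m^2/s, half-thickness L in m, times in s.
D, L = 1e-12, 1e-3
curve = [fraction_leached(D, L, t) for t in (0.0, 1e4, 1e5, 1e6, 1e7)]
```

The leached fraction rises from zero toward complete leaching, consistent with the observation that nearly all alkalis are eventually removed in excess water.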
Model-independent constraints on the weak phase α (or φ2) and QCD penguin pollution in B→ππ decays
International Nuclear Information System (INIS)
Xing Zhizhong; Zhang He
2005-01-01
We present an algebraic isospin approach towards a more straightforward and model-independent determination of the weak phase α (or φ2) and QCD penguin pollution in B→ππ decays. The world averages of current experimental data allow us to impose some useful constraints on the isospin parameters of B→ππ transitions. We find that the magnitude of α (or φ2) extracted from the indirect CP violation in the π+π− mode is in agreement with the standard-model expectation from other indirect measurements, but its fourfold discrete ambiguity has to be resolved in the near future.
SDG and qualitative trend based model multiple scale validation
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness: validation is carried out at a single scale and depends on human experience. A validation method based on the SDG (Signed Directed Graph) and qualitative trends, operating at multiple scales, is therefore proposed. First, the SDG model is built and qualitative trends are added to the model. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.
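The positive-inference step can be illustrated with a toy signed directed graph: starting from a fault variable, qualitative deviations (+1/−1) are propagated along signed arcs to generate a complete testing scenario. The graph, variable names, and signs below are invented for illustration, not the paper's reactor model.

```python
from collections import deque

# Signed directed graph of a toy reactor: +1 means the downstream variable
# deviates in the same direction as the upstream one, -1 the opposite.
edges = {
    "feed_rate":   [("level", +1), ("temperature", -1)],
    "level":       [("pressure", +1)],
    "temperature": [("pressure", +1), ("reaction_rate", +1)],
}

def positive_inference(fault_var, fault_sign):
    """Propagate a qualitative deviation through the SDG (breadth-first),
    producing the predicted trend of every reachable variable."""
    scenario = {fault_var: fault_sign}
    queue = deque([fault_var])
    while queue:
        v = queue.popleft()
        for succ, sign in edges.get(v, []):
            if succ not in scenario:          # keep the first explanation found
                scenario[succ] = scenario[v] * sign
                queue.append(succ)
    return scenario

# Testing scenario for a positive step in feed rate:
scenario = positive_inference("feed_rate", +1)
```

Each scenario produced this way is one qualitative trace against which the simulation model's outputs can be compared, scale by scale.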
Weakly relativistic plasma expansion
Energy Technology Data Exchange (ETDEWEB)
Fermous, Rachid, E-mail: rfermous@usthb.dz; Djebli, Mourad, E-mail: mdjebli@usthb.dz [Theoretical Physics Laboratory, Faculty of Physics, USTHB, B.P. 32 Bab-Ezzouar, 16079 Algiers (Algeria)
2015-04-15
Plasma expansion is an important physical process that takes place in laser interactions with solid targets. Within a self-similar model for the hydrodynamical multi-fluid equations, we investigated the expansion of both dense and under-dense plasmas. The weakly relativistic electrons are produced by ultra-intense laser pulses, while ions are supposed to be in a non-relativistic regime. Numerical investigations have shown that relativistic effects are important for under-dense plasma and are characterized by a finite ion front velocity. Dense plasma expansion is found to be governed mainly by quantum contributions in the fluid equations that originate from the degenerate pressure in addition to the nonlinear contributions from exchange and correlation potentials. The quantum degeneracy parameter profile provides clues to set the limit between under-dense and dense relativistic plasma expansions at a given density and temperature.
Event-Based Conceptual Modeling
DEFF Research Database (Denmark)
Bækgaard, Lars
The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...
Casalderrey-Solana, Jorge; Milhano, José Guilherme; Pablos, Daniel; Rajagopal, Krishna
2016-01-01
We have previously introduced a hybrid strong/weak coupling model for jet quenching in heavy ion collisions that describes the production and fragmentation of jets at weak coupling, using PYTHIA, and describes the rate at which each parton in the jet shower loses energy as it propagates through the strongly coupled plasma, dE/dx, using an expression computed holographically at strong coupling. The model has a single free parameter that we fit to a single experimental measurement. We then confront our model with experimental data on many other jet observables, focusing here on boson-jet observables, finding that it provides a good description of present jet data. Next, we provide the predictions of our hybrid model for many measurements to come, including those for inclusive jet, dijet, photon-jet and Z-jet observables in heavy ion collisions with energy $\\sqrt{s}=5.02$ ATeV coming soon at the LHC. As the statistical uncertainties on near-future measurements of photon-jet observables are expected to be much sm...
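The strongly coupled energy-loss rate in this class of models takes, schematically, the holographic "stopping distance" form sketched below; the normalization and parameter values here are illustrative, not the fitted value of the model's single free parameter. A quick numerical check confirms that a parton traversing its full stopping distance x_stop loses essentially all of its initial energy, with most of the loss concentrated near the end of the trajectory.

```python
import math

def dEdx(x, E_in, x_stop):
    """Schematic strongly coupled energy-loss rate (holographic form):
    dE/dx = -(4/pi) * E_in * x^2 / (x_stop^2 * sqrt(x_stop^2 - x^2))."""
    return -(4.0 / math.pi) * E_in * x * x / (x_stop ** 2 * math.sqrt(x_stop ** 2 - x * x))

def energy_after(path_length, E_in=100.0, x_stop=5.0, steps=200000):
    """Integrate the loss rate along the parton's path (midpoint rule)."""
    L = min(path_length, x_stop * (1 - 1e-9))   # stop short of the integrable endpoint singularity
    h = L / steps
    E = E_in
    for i in range(steps):
        x = (i + 0.5) * h
        E += dEdx(x, E_in, x_stop) * h
    return E

# A parton traversing its full stopping distance loses essentially all its energy,
# while over half the stopping distance the loss is still modest.
E_full = energy_after(5.0)
E_half = energy_after(2.5)
```

The integral of this rate from 0 to x_stop equals E_in exactly, which is what makes x_stop a genuine stopping distance in the model.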
Gatschelhofer, Christina; Mautner, Agnes; Reiter, Franz; Pieber, Thomas R; Buchmeiser, Michael R; Sinner, Frank M
2009-03-27
Functionalized monolithic columns were prepared via ring-opening metathesis polymerization (ROMP) within silanized fused silica capillaries with an internal diameter of 200 microm by in situ grafting. This procedure is conducted in two steps, the first of which is the formation of the basic monolithic structure by polymerization of norborn-2-ene (NBE) and 1,4,4a,5,8,8a-hexahydro-1,4,5,8-exo,endo-dimethanonaphthalene (DMN-H6) in a porogenic system (toluene and 2-propanol) using RuCl(2)(PCy(3))(2)(CHPh) as ROMP initiator. In the second step the still active initiator sites located on the surface of the structure-forming microglobules were used as receptor groups for the attachment ("grafting") of functional groups onto the monolithic backbone by flushing the monolith with 7-oxanorborn-2-ene-5,6-carboxylic anhydride (ONDCA). Functionalization conditions were first defined that did not damage the backbone of low polymer content (20%) monoliths allowing high-throughput chromatographic separations. Variation of the functionalization conditions was then shown to provide a means of controlling the degree of functionalization and resulting ion-exchange capacity. The maximum level of in situ ONDCA grafting was obtained by a 3h polymerization in toluene at 40 degrees C. The weak cation-exchange monoliths obtained provided good separation of a standard peptide mixture comprising four synthetic peptides designed specifically for the evaluation of cation-exchange columns. An equivalent separation was also achieved using the lowest capacity column studied, indicative of a high degree of robustness of the functionalization procedure. As well as demonstrably bearing ionic functional groups enabling analyte separation in the cation-exchange mode, the columns exhibited additional hydrophobic characteristics which influenced the separation process. The functionalized monoliths thus represent useful tools for mixed-mode separations.
Are weak and electromagnetic interactions unified
International Nuclear Information System (INIS)
Dombey, N.
1983-01-01
This chapter examines how well the standard electroweak model agrees with experiment, and attempts to explain to a non-particle physicist why weak and electromagnetic interactions are unified. It discusses the Glashow model (unified SU(2)×U(1)); some basic questions; an alternative viewpoint; unified theories; non-unified theories; and weak interactions as strong interactions. It concludes that SU(2)×U(1) is a good phenomenological model for weak and electromagnetic interactions in the energy region accessible to experiment.
Directory of Open Access Journals (Sweden)
Osborne Hamish R
2012-10-01
Full Text Available. Background: Hip abduction weakness has never been documented on a population basis as a common finding in a healthy group of athletes and would not normally be found in an elite adolescent athlete. This study aimed to show that hip abduction weakness not only occurs in this group but also is common and easy to correct with an unsupervised home-based program. Methods: A prospective sports-team cohort study was performed with thirty elite adolescent under-17 Australian Rules footballers in the Australian Institute of Sport/Australian Football League Under-17 training academy. The players had their hip abduction performance assessed and were then instructed in a hip abduction muscle training exercise. This was performed on a daily basis for two months, after which they were reassessed. Results: 14 of 28 athletes who completed the protocol had marked weakness or a side-to-side difference of more than 25% at baseline. Two months later, ten players recorded an improvement of ≥ 80% in their recorded scores. The mean muscle performance on the right side improved from 151 Newton (N) to 202 N (p Conclusions: The baseline values show widespread profound deficiencies in hip abduction performance not previously reported. Very large performance increases can be achieved, unsupervised, in a short period of time, potentially allowing large clinically significant gains. This assessment should be an integral part of preparticipation screening and assessed in those with lower limb injuries. This particular exercise should be used clinically, and more research is needed to determine its injury prevention and performance enhancement implications.
Model-based Software Engineering
DEFF Research Database (Denmark)
Kindler, Ekkart
2010-01-01
The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...
On the Existence of a Weak Solution of a Half-Cell Model for PEM Fuel Cells
Directory of Open Access Journals (Sweden)
Shuh-Jye Chern
2010-01-01
Full Text Available A nonlinear boundary value problem (BVP) from the modelling of the transport phenomena in the cathode catalyst layer of a one-dimensional half-cell single-phase model for proton exchange membrane (PEM) fuel cells, derived from the 3D model of Zhou and Liu (2000, 2001), is studied. It is a BVP for a system of three coupled ordinary differential equations of second order. Schauder's fixed point theorem is applied to show the existence of a solution in the Sobolev space H1.
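The paper's system is three coupled nonlinear second-order ODEs; as a stand-in, the sketch below solves a toy linear two-point BVP of the same second-order form by finite differences, purely to illustrate the boundary-value structure (the actual half-cell equations are not reproduced here).

```python
import math

def solve_toy_bvp(n=200):
    """Finite-difference solution of the toy two-point BVP
        u''(x) = u(x) - 1,  u(0) = u(1) = 0,
    a stand-in for the half-cell system. Discretising u'' with central
    differences gives a tridiagonal system, solved by the Thomas algorithm."""
    h = 1.0 / n
    # Interior unknowns u_1..u_{n-1}: u_{i-1} + (-2 - h^2) u_i + u_{i+1} = -h^2
    a = [1.0] * (n - 1)                  # sub-diagonal
    b = [-2.0 - h * h] * (n - 1)         # main diagonal
    c = [1.0] * (n - 1)                  # super-diagonal
    d = [-h * h] * (n - 1)               # right-hand side
    for i in range(1, n - 1):            # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):       # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]             # re-attach the boundary values

u = solve_toy_bvp()
# Analytic solution for comparison: u(x) = 1 - cosh(x - 1/2) / cosh(1/2)
```

For the real nonlinear system one would iterate such linear solves (Newton's method), which is the computational counterpart of the fixed-point argument used in the existence proof.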
Principles of models based engineering
Energy Technology Data Exchange (ETDEWEB)
Dolin, R.M.; Hefele, J.
1996-11-01
This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.
Strečka, Jozef
2018-01-01
The mixed spin-1/2 and spin-S Ising model on the Union Jack (centered square) lattice with four different three-spin (triplet) interactions and the uniaxial single-ion anisotropy is exactly solved by establishing a rigorous mapping equivalence with the corresponding zero-field (symmetric) eight-vertex model on a dual square lattice. A rigorous proof of the aforementioned exact mapping equivalence is provided by two independent approaches exploiting either a graph-theoretical or spin representation of the zero-field eight-vertex model. An influence of the interaction anisotropy as well as the uniaxial single-ion anisotropy on phase transitions and critical phenomena is examined in particular. It is shown that the considered model exhibits a strong-universal critical behaviour with constant critical exponents when considering the isotropic model with four equal triplet interactions or the anisotropic model with one triplet interaction differing from the other three. The anisotropic models with two different triplet interactions, which are pairwise equal to each other, contrarily exhibit a weak-universal critical behaviour with critical exponents continuously varying with a relative strength of the triplet interactions as well as the uniaxial single-ion anisotropy. It is evidenced that the variations of critical exponents of the mixed-spin Ising models with the integer-valued spins S differ basically from their counterparts with the half-odd-integer spins S.
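The distinction drawn in the abstract is Suzuki's weak universality: along the critical lines of the anisotropic models the standard exponents drift with the couplings, while exponent ratios scaled by the correlation-length exponent ν stay fixed. Schematically:

```latex
% Weak universality: individual exponents vary continuously with the
% triplet couplings J_i and the single-ion anisotropy D, but the ratios
% renormalized by the correlation-length exponent \nu are invariant.
\begin{equation}
  \alpha,\ \beta,\ \gamma,\ \nu \;=\; f(J_1,\dots,J_4,\,D)
  \qquad\text{while}\qquad
  \frac{\beta}{\nu},\ \frac{\gamma}{\nu},\ \eta,\ \delta \;=\; \mathrm{const}.
\end{equation}
```

This is the criterion by which the models with pairwise-equal triplet interactions are classed as weak-universal, in contrast to the strong-universal (constant-exponent) cases.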
Zhang, Changxin; Fang, Bin; Wang, Bochong; Zeng, Zhongming
2018-04-01
This paper presents a steady auto-oscillation in a spin-torque oscillator using MgO-based magnetic tunnel junction (MTJ) with a perpendicular polarizer and a perpendicular free layer. As the injected d.c. current varied from 1.5 to 3.0 mA under a weak magnetic field of 290 Oe, the oscillation frequency decreased from 1.85 to 1.3 GHz, and the integrated power increased from 0.1 to 74 pW. A narrow linewidth down to 7 MHz corresponding to a high Q factor of 220 was achieved at 2.7 mA, which was ascribed to the spatial coherent procession of the free layer magnetization. Moreover, the oscillation frequency was quite sensitive to the applied field, about 3.07 MHz/Oe, indicating the potential applications as a weak magnetic field detector. These results suggested that the MgO-based MTJ with perpendicular magnetic easy axis could be helpful for developing spin-torque oscillators with narrow-linewidth and high sensitive.
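The quoted quality factor follows directly from the linewidth via Q = f/Δf. The oscillation frequency at 2.7 mA is not stated exactly in the abstract; f = 1.54 GHz is an assumed value within the reported 1.3-1.85 GHz range, chosen to illustrate the arithmetic.

```python
# Quality factor of the oscillator from its linewidth: Q = f / Delta_f.
f = 1.54e9        # oscillation frequency in Hz (assumed, within the reported range)
delta_f = 7e6     # measured linewidth in Hz
Q = f / delta_f
print(f"Q = {Q:.0f}")  # consistent with the reported Q of about 220
```

The same arithmetic applied to the field sensitivity (about 3.07 MHz/Oe) sets the smallest field change resolvable within one linewidth at roughly Δf / 3.07 MHz/Oe ≈ 2 Oe.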
Shiokawa, Koichiro; Aso, Mai; Kondo, Takeshi; Takai, Jun-Ichi; Yoshida, Junki; Mishina, Takamichi; Fuchimukai, Kota; Ogasawara, Tsukasa; Kariya, Taro; Tashiro, Kosuke; Igarashi, Kazuei
2010-02-01
We have been studying control mechanisms of gene expression in early embryogenesis in the South African clawed toad Xenopus laevis, especially during the period of the midblastula transition (MBT), i.e. the transition from the phase of active cell division (cleavage stage) to the phase of extensive morphogenesis (post-blastular stages). We first found that ribosomal RNA synthesis is initiated shortly after MBT in Xenopus embryos and that weak bases, such as amines and ammonium ion, selectively inhibit the initiation and subsequent activation of rRNA synthesis. We then found that rapidly labeled heterogeneous mRNA-like RNA is synthesized in embryos at the pre-MBT stage. We then performed cloning and expression studies of several genes, such as those for activin receptors, follistatin and aldolases, and then reached the studies of S-adenosylmethionine decarboxylase (SAMDC), a key enzyme in polyamine metabolism. Here, we cloned a Xenopus SAMDC cDNA and performed experiments to overexpress the in vitro-synthesized SAMDC mRNA in Xenopus early embryos, and found that a maternally preset program of apoptosis occurs in cleavage stage embryos, which is executed when embryos reach the stage of MBT. In the present article, we first summarize results on SAMDC and the maternal program of apoptosis, and then describe our studies on small-molecular-weight substances like polyamines, amino acids, and amines in Xenopus embryos. Finally, we summarize our studies on weak bases, especially on ammonium ion, as a specific inhibitor of ribosomal RNA synthesis in Xenopus embryonic cells.
Rehren, K. -H.
1996-01-01
Weak C* Hopf algebras can act as global symmetries in low-dimensional quantum field theories, when braid group statistics prevents group symmetries. Possibilities to construct field algebras with weak C* Hopf symmetry from a given theory of local observables are discussed.
Thompson, Robert Q.
1988-01-01
Describes a laboratory exercise in which acid dissociation constants and molecular weights are extracted from sample data and the sample is identified. Emphasizes accurate volumetric work while bringing to practice the concepts of acid-base equilibria, activity coefficients, and thermodynamic constants. (CW)
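The two quantities extracted in such an exercise follow from two standard relations: the molecular weight from the moles of titrant delivered at the equivalence point, and the pKa from the pH at half-equivalence, where pH = pKa by the Henderson-Hasselbalch equation. All numbers below are synthetic, not the article's sample data.

```python
# Synthetic titration data (not from the article): a weak monoprotic acid
# titrated with standardized NaOH.
mass_acid = 0.300      # g of acid weighed out
c_naoh = 0.1000        # mol/L titrant
v_eq = 25.00e-3        # L, equivalence volume read from the titration curve
ph_half_eq = 4.20      # pH at the half-equivalence point, where [HA] = [A-]

# Molecular weight: moles of acid = moles of base delivered at equivalence.
moles = c_naoh * v_eq
mw = mass_acid / moles

# At half-equivalence the Henderson-Hasselbalch equation gives pH = pKa.
pka = ph_half_eq
ka = 10 ** (-pka)
```

With these synthetic values the sample works out to MW = 120 g/mol and Ka ≈ 6.3e-5; in the exercise itself, such numbers (refined with activity-coefficient corrections) are then matched against candidate acids to identify the unknown.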
Strong Plate, Weak Slab Dichotomy
Petersen, R. I.; Stegman, D. R.; Tackley, P.
2015-12-01
Models of mantle convection on Earth produce styles of convection that are not observed on Earth. Moreover, non-Earth-like modes, such as two-sided downwellings, are the de facto mode of convection in such models. To recreate Earth-style subduction, i.e. one-sided asymmetric recycling of the lithosphere, proper treatment of the plates and plate interface is required. Previous work has identified several model features that promote subduction. A free surface or pseudo-free surface and a layer of relatively low-strength material (weak crust) allow downgoing plates to bend and slide past the overriding plate without creating undue stress at the plate interface (Crameri, et al. 2012, GRL). A low-viscosity mantle wedge, possibly a result of slab dehydration, decouples the plates in the system (Gerya et al. 2007, Geo). Plates must be composed of material which, in the case of the overriding plate, is strong enough to resist bending stresses imposed by the subducting plate and yet, as in the case of the subducting plate, weak enough to bend and subduct when pulled by the already subducted slab (Petersen et al. 2015, PEPI). Though strong surface plates are required for subduction, such plates may present a problem when they encounter the lower mantle. As the subducting slab approaches the higher-viscosity lower mantle, stresses are imposed on the tip. Strong slabs transmit this stress to the surface. There the stress field at the plate interface is modified and potentially modifies the style of convection. In addition to modifying the stress at the plate interface, the strength of the slab affects the morphology of the slab at the base of the upper mantle (Stegman, et al 2010, Tectonophysics). Slabs that maintain a sufficient portion of their strength after being bent require high stresses to unbend or otherwise change their shape. On the other hand, slabs that are weakened through the bending process are more amenable to changes in morphology. We present the results of
Plane waves with weak singularities
International Nuclear Information System (INIS)
David, Justin R.
2003-03-01
We study a class of time dependent solutions of the vacuum Einstein equations which are plane waves with weak null singularities. This singularity is weak in the sense that though the tidal forces diverge at the singularity, the rate of divergence is such that the distortion suffered by a freely falling observer remains finite. Among such weak singular plane waves there is a sub-class which does not exhibit large back reaction in the presence of test scalar probes. String propagation in these backgrounds is smooth and there is a natural way to continue the metric beyond the singularity. This continued metric admits string propagation without the string becoming infinitely excited. We construct a one parameter family of smooth metrics which are at a finite distance in the space of metrics from the extended metric and a well defined operator in the string sigma model which resolves the singularity. (author)
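The backgrounds in question are plane waves, which can always be written in Brinkmann form; the specific weakly singular profile used in the paper is not reproduced here, so the display below shows only the general setting:

```latex
% Brinkmann form of a plane-wave metric; the profile A_{ij}(u) encodes the
% tidal forces via geodesic deviation, \ddot{x}^i = -A^i{}_j(u)\,x^j.
\begin{equation}
  ds^{2} \;=\; 2\,du\,dv \;-\; A_{ij}(u)\,x^{i}x^{j}\,du^{2}
  \;+\; \delta_{ij}\,dx^{i}dx^{j}.
\end{equation}
```

A profile A_{ij}(u) that diverges at the singular locus sufficiently mildly gives divergent tidal forces but a finite integrated distortion of a freely falling observer, which is the sense in which the null singularity is weak.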
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.
Cluster Based Text Classification Model
DEFF Research Database (Denmark)
Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock
2011-01-01
We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases… The classifier is trained on each cluster having reduced dimensionality and a smaller number of examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset.
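The cluster-then-classify idea can be sketched with a tiny nearest-centroid scheme: route each example to its nearest cluster, then apply that cluster's (here, deliberately trivial) classifier. The data, labels, and pre-computed clusters below are synthetic; in the actual model the clusters come from a clustering step and the per-cluster classifiers are full classifiers trained on reduced-dimensionality features.

```python
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(X, y, clusters):
    """Each cluster (a list of training indices) gets a centroid and a
    simple majority-label stand-in for a per-cluster classifier."""
    model = []
    for idx in clusters:
        labels = [y[i] for i in idx]
        majority = max(set(labels), key=labels.count)
        model.append((centroid([X[i] for i in idx]), majority))
    return model

def predict(model, x):
    """Route the example to the nearest cluster, then apply that cluster's classifier."""
    _, label = min(model, key=lambda m: dist2(m[0], x))
    return label

# Toy data: two well-separated clusters, one per class.
X = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
y = ["ham", "ham", "suspicious", "suspicious"]
model = train(X, y, clusters=[[0, 1], [2, 3]])
```

The simplification the abstract describes is visible even here: each per-cluster classifier only ever sees its own cluster's examples, so it can be far simpler than one global classifier.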
Graph Model Based Indoor Tracking
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin
2009-01-01
The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several indoor positioning technologies. Focusing on RFID-based positioning, an RFID-specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...
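The base-graph idea can be sketched as follows: symbolic locations are vertices, connectivity (doors) gives edges, and a reader deployment maps RFID readers to locations; a raw reading sequence is then refined into a trajectory by filling unobserved hops with shortest paths. The floor plan and reader placement below are invented for illustration, not taken from the paper.

```python
from collections import deque

# Base graph of a toy floor plan: vertices are symbolic locations, edges are doors.
base_graph = {
    "lobby": ["hall"],
    "hall":  ["lobby", "room1", "room2"],
    "room1": ["hall"],
    "room2": ["hall"],
}
# Reader deployment: which location each RFID reader covers (illustrative).
reader_at = {"R1": "lobby", "R2": "room2"}

def shortest_path(g, src, dst):
    """BFS shortest path between two symbolic locations."""
    prev, queue = {src: None}, deque([src])
    while queue:
        v = queue.popleft()
        if v == dst:
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for w in g[v]:
            if w not in prev:
                prev[w] = v
                queue.append(w)
    return None

def build_trajectory(readings):
    """Refine raw reader detections into a connected location sequence."""
    locs = [reader_at[r] for r in readings]
    traj = [locs[0]]
    for a, b in zip(locs, locs[1:]):
        traj += shortest_path(base_graph, a, b)[1:]   # fill in unobserved hops
    return traj

traj = build_trajectory(["R1", "R2"])
```

Because readers only cover some locations, consecutive detections are generally not adjacent in the base graph; the shortest-path interpolation is one simple way to reconstruct the unobserved portion of the trajectory.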
2012-12-01
USEPA 324 – Volatile Organics (μg/L): all were ND. USEPA 608 – Chlorinated Pesticides and/or PCBs (μg/L): all were ND. Oil & Grease (mg/L): ND. Sulfide, soluble (mg... perchlorate. Wastewater produced during regeneration is treated to remove perchlorate. This is performed using a small volume of strong base anion (SBA... regeneration. This can be done by using a small volume of scavenger resin, or
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2018-01-01
architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number...
Lim, Hyung-Gyu; Kim, Jong Hoon; Shin, Dong Ho; Woo, Seong Tak; Seong, Ki Woong; Lee, Jyung Hyun; Kim, Myoung Nam; Wei, Qun; Cho, Jin-Ho
2015-01-01
Many types of fully implantable hearing aids have been developed. Most of these devices are implanted behind the ear. To maintain the implanted device for a long period of time, a rechargeable battery and wireless power transmission are used. Because inductive coupling is the most common method for wireless power transmission, many types of fully implantable hearing aids are transcutaneously powered using inductively coupled coils. Some patients with an implantable hearing aid require a method for conveniently charging their hearing aid while they are resting or sleeping. To address this need, a wireless charging pillow has been developed that employs a circular array coil as one of its primary parts. In this device, all primary coils are simultaneously driven to maintain an effective charging area regardless of head motion. In this case, however, there may be a magnetic weak zone that cannot be charged at a specific secondary coil location on the array coil. In this study, assuming a maximum charging distance of 4 cm, a circular array coil, serving as the primary part of the charging pillow, was designed using finite element analysis. Based on experimental results, the proposed device can charge an implantable hearing aid without a magnetic weak zone within 4 cm of the perpendicular distance between the primary and secondary coils.
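The steep fall-off of inductive coupling with distance, which motivates the 4 cm design limit and the driven array layout, can be illustrated with the textbook on-axis field of a single circular current loop. The coil radius and current below are illustrative, not the paper's design values.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def b_axis(current, radius, z):
    """On-axis magnetic flux density of a single circular loop (textbook formula):
    B(z) = mu0 * I * a^2 / (2 * (a^2 + z^2)^(3/2))."""
    return MU0 * current * radius ** 2 / (2.0 * (radius ** 2 + z ** 2) ** 1.5)

# Illustrative numbers (not from the article): a 3 cm radius primary coil
# carrying 1 A, evaluated at the coil plane and at the 4 cm design distance.
b_near = b_axis(1.0, 0.03, 0.0)
b_far = b_axis(1.0, 0.03, 0.04)
```

Even over this modest 4 cm separation the on-axis field drops several-fold, and off-axis the field of neighbouring array coils can partially cancel; both effects are why the coil geometry has to be designed (here via finite element analysis) to avoid magnetic weak zones.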
International Nuclear Information System (INIS)
Dely, N.
2005-10-01
Usually, the irradiation of polymers under ionising radiations occurs in air, that is, in the presence of oxygen. This leads to a radio-oxidation process and to oxygen consumption. Our material is an EPDM elastomer (ethylene propylene 1,4-hexadiene) used as insulator in control-command cables in nuclear plants (Pressurised Water Reactor). A specific device has been conceived and built during this PhD work for measuring very small oxygen consumptions with an accuracy of around 10%. The ionising radiations used are electrons at 1 MeV and carbon ions at 11 MeV per nucleon. Under both electron and ion irradiations, the influence of oxygen pressure on oxygen consumption has been studied over a very large range: between 1 and 200 mbar. In both cases, the yield of oxygen consumption is constant between 200 and 5 mbar. Then, at lower pressures, it decreases appreciably. On the other hand, the oxygen consumption during ion irradiation is four times smaller than during electron irradiation. This emphasizes the role of the heterogeneity of the energy deposition at a nanometric scale. Adjustment of the experimental results obtained during electron irradiation with the general homogeneous steady-state kinetic model has allowed all the values of the kinetic parameters for the chosen mechanism of radio-oxidation to be extracted. Knowledge of these values will allow us to confront our results obtained during ion irradiation with a heterogeneous kinetic model under development. (author)
LIU Jun-yan; SONG Xiang-hua; LIU Yan
2017-11-01
This article uses the Fast Lagrangian Analysis of Continua in 3 Dimensions (FLAC3D) to analyze the deformation characteristics of the structural plane, based on a real rock foundation pit in Jinan city. Through numerical simulation, an inverse analysis of the strength and occurrence parameters of the structural plane is made using the Mohr-Coulomb strength criterion, exploring the change of the stress field on the x-z oblique section of the pit wall, the relation between the exposed height of the structural plane and the critical cohesion, and the relation between the exposed height and the critical inclination angle of the structural plane. We find that when the foundation pit is in the critical stable state and the inclination angle of the structural plane is held constant, the critical cohesive force of the structural plane increases with the exposed height. When the foundation pit is in the critical stable state and the cohesive force of the structural plane is held constant, the critical inclination angle of the structural plane declines as the exposed height increases. These conclusions can provide a theoretical basis for the design and construction of rock foundation pits with structural planes.
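The limit-equilibrium reasoning above can be made concrete with the Mohr-Coulomb criterion, tau_f = c + sigma_n * tan(phi): at the critical state the available shear strength equals the driving shear stress, so the cohesion required for stability can be solved for directly. A minimal sketch; the stresses and friction angle are illustrative assumptions, not parameters from the Jinan pit study:

```python
import math

def mc_shear_strength(c_kpa, sigma_n_kpa, phi_deg):
    """Mohr-Coulomb shear strength: tau_f = c + sigma_n * tan(phi)."""
    return c_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))

def critical_cohesion(tau_kpa, sigma_n_kpa, phi_deg):
    """Cohesion required for limit equilibrium (tau_f == tau) on the plane."""
    return tau_kpa - sigma_n_kpa * math.tan(math.radians(phi_deg))

# As the exposed height grows, the driving shear stress on the plane grows,
# so the critical cohesion rises (hypothetical stresses in kPa, phi = 30 deg).
for tau in (60.0, 90.0, 120.0):
    print(tau, critical_cohesion(tau, 50.0, 30.0))
```

This monotonic dependence is the mechanism behind the paper's finding that, at fixed inclination, the critical cohesive force increases with the exposed height.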
Halgamuge, Malka N; Yak, See Kye; Eberhardt, Jacob L
2015-02-01
The aim of this work was to study possible effects of environmental radiation pollution on plants. The association between cellular telephone (short duration, higher amplitude) and base station (long duration, very low amplitude) radiation exposure and the growth rate of soybean (Glycine max) seedlings was investigated. Soybean seedlings, pre-grown for 4 days, were exposed in a gigahertz transverse electromagnetic cell for 2 h to global system for mobile communication (GSM) mobile phone pulsed radiation or continuous wave (CW) radiation at 900 MHz with amplitudes of 5.7 and 41 V m⁻¹, and outgrowth was studied one week after exposure. The exposure to higher amplitude (41 V m⁻¹) GSM radiation resulted in diminished outgrowth of the epicotyl. The exposure to lower amplitude (5.7 V m⁻¹) GSM radiation did not influence outgrowth of epicotyl, hypocotyls, or roots. The exposure to higher amplitude CW radiation resulted in reduced outgrowth of the roots whereas lower CW exposure resulted in a reduced outgrowth of the hypocotyl. Soybean seedlings were also exposed for 5 days to an extremely low level of radiation (GSM 900 MHz, 0.56 V m⁻¹) and outgrowth was studied 2 days later. Growth of epicotyl and hypocotyl was found to be reduced, whereas the outgrowth of roots was stimulated. Our findings indicate that the observed effects were significantly dependent on field strength as well as amplitude modulation of the applied field. © 2015 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Anon.
1979-01-01
The possibility of the production of weak bosons in the proton-antiproton colliding beam facilities which are currently being developed, is discussed. The production, decay and predicted properties of these particles are described. (W.D.L.).
Lee, T. D.
1970-07-01
While the phenomenon of beta-decay was discovered near the end of the last century, the notion that the weak interaction forms a separate field of physical forces evolved rather gradually. This became clear only after the experimental discoveries of other weak reactions such as muon-decay, muon-capture, etc., and the theoretical observation that all these reactions can be described by approximately the same coupling constant, thus giving rise to the notion of a universal weak interaction. Only then did one slowly recognize that the weak interaction force forms an independent field, perhaps on the same footing as the gravitational force, the electromagnetic force, and the strong nuclear and sub-nuclear forces.
2013-08-01
Stéphane Coen and Miro Erkintalo from the University of Auckland in New Zealand talk to Nature Photonics about their surprising findings regarding a weak long-range interaction they serendipitously stumbled upon while researching temporal cavity solitons.
Illangasekare, T. H.; Trautz, A. C.; Howington, S. E.; Cihan, A.
2017-12-01
It is a well-established fact that the land and atmosphere form a continuum in which the individual domains are coupled by heat and mass transfer processes such as bare-soil evaporation. Soil moisture dynamics can be simulated at the representative elementary volume (REV) scale using decoupled and fully coupled Darcy/Navier-Stokes models. Decoupled modeling is an asynchronous approach in which flow and transport in the soil and atmosphere is simulated independently; the two domains are coupled out of time-step via prescribed flux parameterizations. Fully coupled modeling in contrast, solves the governing equations for flow and transport in both domains simultaneously with the use of coupling interface boundary conditions. This latter approach, while being able to provide real-time two-dimensional feedbacks, is considerably more complex and computationally intensive. In this study, we investigate whether fully coupled models are necessary, or if the simpler decoupled models can sufficiently capture soil moisture dynamics under varying land preparations. A series of intermediate-scale physical and numerical experiments were conducted in which soil moisture distributions and evaporation estimates were monitored at high spatiotemporal resolutions for different heterogeneous packing and soil roughness scenarios. All experimentation was conducted at the newly developed Center for Experimental Study of Subsurface Environmental Processes (CESEP) wind tunnel-porous media user test-facility at the Colorado School of Mines. Near-surface atmospheric measurements made during the experiments demonstrate that the land-atmosphere coupling was relatively weak and insensitive to the applied edaphic and surface conditions. Simulations with a decoupled multiphase heat and mass transfer model similarly show little sensitivity to local variations in atmospheric forcing; a single, simple flux parameterization can sufficiently capture the soil moisture dynamics (evaporation and redistribution
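A decoupled model of the kind described above replaces the resolved atmosphere with a prescribed flux parameterization. One common choice is a bulk-aerodynamic form, E = rho * C_E * U * (q_s - q_a); the sketch below uses illustrative (assumed) values, not the study's calibrated parameterization:

```python
def bulk_evaporation_flux(rho_air, c_e, wind_speed, q_surface, q_air):
    """Bulk-aerodynamic evaporative mass flux [kg m^-2 s^-1]:
    E = rho * C_E * U * (q_s - q_a), with q as specific humidity (kg/kg)."""
    return rho_air * c_e * wind_speed * (q_surface - q_air)

# Illustrative values: air density 1.2 kg/m^3, neutral exchange coefficient
# ~1.3e-3, 2 m/s wind, surface/air specific humidities of 0.012 and 0.008.
e_flux = bulk_evaporation_flux(1.2, 1.3e-3, 2.0, 0.012, 0.008)
print(e_flux)
```

The study's finding that a single, simple parameterization suffices amounts to saying that one fixed set of such coefficients captures the observed evaporation across the tested surface conditions.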
International Nuclear Information System (INIS)
Daumenov, T.D.; Alizarovskaya, I.M.; Khizirova, M.A.
2001-01-01
A method for generating a weakly oval electric field from an axially symmetric field is presented. Such a system may be constructed from coaxial electrodes of cylindrical form with a built-in quadrupole doublet. The distinctive feature of this weakly oval lens is that it permits both mechanical and electronic adjustment. Such a lens can be useful for eliminating near-axis astigmatism in electron-optical systems.
Numerical test of weak turbulence theory
Payne, G. L.; Nicholson, D. R.; Shen, Mei-Mei
1989-01-01
The analytic theory of weak Langmuir turbulence is well known, but very little has previously been done to compare its predictions with numerical solutions of the basic dynamical evolution equations. In this paper, numerical solutions of the statistical weak turbulence theory are compared with numerical solutions of the Zakharov model of Langmuir turbulence, and good agreement in certain regimes of very weak field strength is found.
Is a weak violation of the Pauli principle possible?
International Nuclear Information System (INIS)
Ignat'ev, A.Y.; Kuz'min, V.A.
1987-01-01
We examine models in which there is a weak violation of the Pauli principle. A simple algebra of creation and annihilation operators is constructed which contains a parameter β and describes a weak violation of the Pauli principle (when β = 0 the Pauli principle is satisfied exactly). The commutation relations in this algebra turn out to be trilinear. A model based on this algebra is described. It allows transitions in which the Pauli principle is violated, but the probability of these transitions is suppressed by the quantity β² (even though the interaction Hamiltonian does not contain small parameters).
Event-Based Conceptual Modeling
DEFF Research Database (Denmark)
Bækgaard, Lars
2009-01-01
The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...
DEFF Research Database (Denmark)
Bækgaard, Lars
2004-01-01
We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related...
Towards a theory of weak hadronic decays of charmed particles
International Nuclear Information System (INIS)
Blok, B.Yu.; Shifman, M.A.
1986-01-01
Weak decays of charmed mesons are considered. A new quantitative framework for the theoretical analysis of nonleptonic two-body decays, based on QCD sum rules, is proposed. This is the first of a series of papers devoted to the subject. Theoretical foundations of the approach, ensuring model-independent predictions for the partial decay widths, are discussed.
Weak point disorder in strongly fluctuating flux-line liquids
Indian Academy of Sciences (India)
We consider the effect of weak uncorrelated quenched disorder (point defects) on a strongly fluctuating flux-line liquid. We use a hydrodynamic model which is based on mapping the flux-line system onto a quantum liquid of relativistic charged bosons in 2 + 1 dimensions [P Benetatos and M C Marchetti, Phys. Rev. B64 ...
International Nuclear Information System (INIS)
Han Shaoyang; Ke Dan; Hou Huiqun; Hu Shuiqing
2004-01-01
Weak information extraction and integrated evaluation for sandstone-type uranium deposits are currently among the important research topics in uranium exploration. Through several years of research, the authors define the meaning of aeromagnetic and aeroradioactive weak information extraction, study the formation mechanisms of aeromagnetic and aeroradioactive weak information, and establish effective mathematical models for weak information extraction. Based on GIS software, the weak information extraction models are implemented and an expert-grading model for integrated evaluation is developed. Trials of aeromagnetic and aeroradioactive weak information extraction and integrated evaluation of uranium resources were completed in the study area using GIS software. The results prove that these techniques of weak information extraction and integrated evaluation can rapidly delineate prospective areas of sandstone-type uranium deposits and improve predictive precision. (authors)
International Nuclear Information System (INIS)
Huterer, Dragan
2002-01-01
We study the power of upcoming weak lensing surveys to probe dark energy. Dark energy modifies the distance-redshift relation as well as the matter power spectrum, both of which affect the weak lensing convergence power spectrum. Some dark-energy models predict additional clustering on very large scales, but this probably cannot be detected by weak lensing alone due to cosmic variance. With reasonable prior information on other cosmological parameters, we find that a survey covering 1000 sq deg down to a limiting magnitude of R=27 can impose constraints comparable to those expected from upcoming type Ia supernova and number-count surveys. This result, however, is contingent on the control of both observational and theoretical systematics. Concentrating on the latter, we find that the nonlinear power spectrum of matter perturbations and the redshift distribution of source galaxies both need to be determined accurately in order for weak lensing to achieve its full potential. Finally, we discuss the sensitivity of the three-point statistics to dark energy
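The distance-redshift sensitivity mentioned above can be sketched numerically for a flat universe with a constant dark-energy equation of state w: the comoving distance is d_C = (c/H0) ∫ dz'/E(z'), with E(z) = sqrt(Ωm(1+z)³ + ΩΛ(1+z)^{3(1+w)}). The cosmological parameters below are assumed fiducial values, not the paper's fit:

```python
import math

C_KM_S = 299792.458  # speed of light (km/s)

def comoving_distance(z, h0=70.0, om=0.3, w=-1.0, n=2000):
    """Comoving distance (Mpc) in a flat universe with constant
    dark-energy equation of state w, by trapezoidal integration."""
    ode = 1.0 - om

    def inv_e(zz):
        return 1.0 / math.sqrt(om * (1 + zz) ** 3 + ode * (1 + zz) ** (3 * (1 + w)))

    dz = z / n
    s = 0.5 * (inv_e(0.0) + inv_e(z)) + sum(inv_e(i * dz) for i in range(1, n))
    return C_KM_S / h0 * s * dz

# Changing w from -1 to -0.8 shrinks the distance to z = 1, which is the
# geometric handle weak lensing has on dark energy.
print(comoving_distance(1.0, w=-1.0), comoving_distance(1.0, w=-0.8))
```

Since the lensing kernel depends on ratios of such distances (and on the growth of structure), a percent-level change in w propagates into a measurable shift of the convergence power spectrum.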
Anisotropy in wavelet-based phase field models
Korzec, Maciek
2016-04-01
When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg-Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet-based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.
Model-based biosignal interpretation.
Andreassen, S
1994-03-01
Two relatively new approaches to model-based biosignal interpretation, qualitative simulation and modelling by causal probabilistic networks, are compared to modelling by differential equations. A major problem in applying a model to an individual patient is the estimation of the parameters. The available observations are unlikely to allow a proper estimation of the parameters, and even if they do, the task appears to have exponential computational complexity if the model is non-linear. Causal probabilistic networks have both differential equation models and qualitative simulation as special cases, and they can provide both Bayesian and maximum-likelihood parameter estimates, in most cases in much less than exponential time. In addition, they can calculate the probabilities required for a decision-theoretical approach to medical decision support. The practical applicability of causal probabilistic networks to real medical problems is illustrated by a model of glucose metabolism which is used to adjust insulin therapy in type I diabetic patients.
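The Bayesian estimation that causal probabilistic networks provide reduces, in the simplest two-node (cause → observation) case, to Bayes' rule. A minimal sketch with hypothetical test characteristics, not the paper's glucose-metabolism model:

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive observation) for a two-node network:
    condition -> observation, via Bayes' rule."""
    p_positive = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    return sensitivity * prior / p_positive

# Illustrative numbers: 1% prior, 90% sensitivity, 95% specificity.
print(posterior(0.01, 0.9, 0.95))
```

Larger networks replace this single application of Bayes' rule with message passing over many conditional probability tables, which is how the cited glucose model obtains its parameter estimates in far less than exponential time for most structures.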
Bishop, Christopher M
2013-02-13
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
International Nuclear Information System (INIS)
Roberts, B.L.; Booth, E.C.; Gall, K.P.; McIntyre, E.K.; Miller, J.P.; Whitehouse, D.A.; Bassalleck, B.; Hall, J.R.; Larson, K.D.; Wolfe, D.M.; Fickinger, W.J.; Robinson, D.K.; Hallin, A.L.; Hasinoff, M.D.; Measday, D.F.; Noble, A.J.; Waltham, C.E.; Hessey, N.P.; Lowe, J.; Horvath, D.; Salomon, M.
1990-01-01
New measurements of the Σ⁺ and Λ weak radiative decays are discussed. The hyperons were produced at rest by the reaction K⁻p → Yπ where Y = Σ⁺ or Λ. The monoenergetic pion was used to tag the hyperon production, and the branching ratios were determined from the relative amplitudes of Σ⁺ → pγ to Σ⁺ → pπ⁰ and Λ → nγ to Λ → nπ⁰. The photons from weak radiative decays and from π⁰ decays were detected with modular NaI arrays. (orig.)
Weak gravity conjecture and effective field theory
Saraswat, Prashant
2017-01-01
The weak gravity conjecture (WGC) is a proposed constraint on theories with gauge fields and gravity, requiring the existence of light charged particles and/or imposing an upper bound on the field theory cutoff Λ. If taken as a consistency requirement for effective field theories (EFTs), it rules out possibilities for model building including some models of inflation. I demonstrate simple models which satisfy all forms of the WGC, but which through Higgsing of the original gauge fields produce low-energy EFTs with gauge forces that badly violate the WGC. These models illustrate specific loopholes in arguments that motivate the WGC from a bottom-up perspective; for example the arguments based on magnetic monopoles are evaded when the magnetic confinement that occurs in a Higgs phase is accounted for. This indicates that the WGC should not be taken as a veto on EFTs, even if it turns out to be a robust property of UV quantum gravity theories. However, if the latter is true, then parametric violation of the WGC at low energy comes at the cost of nonminimal field content in the UV. I propose that only a very weak constraint is applicable to EFTs, Λ ≲ (log 1/g)^(-1/2) M_Pl, where g is the gauge coupling, motivated by entropy bounds. Remarkably, EFTs produced by Higgsing a theory that satisfies the WGC can saturate but not violate this bound.
Bajpai, Shailendra; Gupta, S K; Dey, Apurba; Jha, M K; Bajpai, Vidushi; Joshi, Saurabh; Gupta, Arvind
2012-08-15
In this paper, a response surface methodology (RSM) approach using Central Composite Design (CCD) is applied to develop a mathematical model and optimize process parameters for Cr (VI) removal from aqueous streams using the weakly anionic resin Amberlite IRA 96. The individual and combined effects of four process parameters, i.e. contact time, initial solution pH, initial Cr (VI) concentration and resin dose, on Cr adsorption were studied. Analysis of variance (ANOVA) showed the relative significance of the process parameters in the removal process. Initial solution pH and resin dose were found to be more significant than contact time and initial Cr (VI) concentration. A second-order regression model was developed to predict the removal efficiency using Design Expert software. The optimal conditions to remove Cr from aqueous solution at a constant temperature of 30°C and stirring speed of 250 rpm were found to be contact time 62.5 min, pH 1.96, initial Cr (VI) concentration 145.4 mg/L, and resin dose 8.51 g/L. At these conditions, a high removal efficiency (93.26%) was achieved. FTIR and EDX analyses were conducted to interpret the functional groups involved in the Cr-resin interaction. Copyright © 2012 Elsevier B.V. All rights reserved.
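Once a second-order model of this kind has been fitted, the optimum can be located by searching the coded design space. The quadratic coefficients below are illustrative stand-ins for two coded factors, not the published four-factor regression:

```python
def response(x1, x2):
    """Hypothetical fitted second-order RSM model (coefficients are
    illustrative, not the paper's): removal % vs two coded factors."""
    return 90.0 + 2.0 * x1 + 3.0 * x2 - 1.5 * x1**2 - 2.0 * x2**2 + 0.5 * x1 * x2

# Grid search over the coded CCD design space [-2, 2] in steps of 0.1.
best = max(
    ((response(i / 10, j / 10), i / 10, j / 10)
     for i in range(-20, 21) for j in range(-20, 21)),
    key=lambda t: t[0],
)
print(best)  # (predicted optimum response, x1*, x2*)
```

In practice the stationary point of the fitted quadratic can also be obtained analytically from the regression coefficients; the grid search simply shows the shape of the optimization that Design Expert performs over the coded variables.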
Horizontal mergers and weak and strong competition commissions
Directory of Open Access Journals (Sweden)
Ristić Bojan
2014-01-01
In this paper we analyse the horizontal merger of companies in an already concentrated industry. The participants in mergers are obliged to submit notification to the Competition Commission but they also have the option of rejecting the merger. At the time of the notification submission the participants do not know whether the Commission is strong or weak, and they can complain to the Court if the Commission prohibits the merger. We model the strategic interaction between Participants and Commission in a dynamic game of incomplete information and determine weak perfect Bayesian equilibria. The main finding of our paper is that Participants will base their decision to submit notification on their belief in a weak Commission decision and will almost completely ignore the possibility of a strong Commission decision. We also provide a detailed examination of one case from Serbian regulatory practice, which coincides with the results of our game theoretical model.
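Stripped to its core, the equilibrium logic is an expected-payoff comparison driven by the Participants' belief that the Commission is weak. The payoffs and approval probabilities below are purely hypothetical, not the paper's calibration:

```python
def submit_payoff(p_weak, v_merger, cost_appeal,
                  p_weak_approve=1.0, p_strong_approve=0.0):
    """Expected payoff of submitting a merger notification, given the
    belief p_weak that the Commission is weak. A weak Commission is
    assumed to approve; a strong one prohibits, forcing a costly appeal."""
    ev_weak = p_weak_approve * v_merger
    ev_strong = p_strong_approve * v_merger - cost_appeal
    return p_weak * ev_weak + (1.0 - p_weak) * ev_strong

# With a strong belief in a weak Commission, submitting dominates the
# outside option of 0 even though a strong Commission would be costly.
print(submit_payoff(0.9, 100.0, 20.0))
```

When p_weak is high, the strong-Commission branch contributes almost nothing to the expectation, which mirrors the paper's finding that Participants effectively ignore the possibility of a strong Commission.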
Computer-Based Modeling Environments
1989-01-01
Cécé, Raphaël; Bernard, Didier; Brioude, Jérome; Zahibo, Narcisse
2016-08-01
Tropical islands are characterized by thermal and orographic forcings which may generate microscale air mass circulations. The Lesser Antilles Arc includes small tropical islands (less than 50 km wide) where a total of one and a half million people live. Air quality over this region is affected by anthropogenic and volcanic emissions, or Saharan dust. To reduce risks to population health, the atmospheric dispersion of emitted pollutants must be predicted. In this study, the dispersion of anthropogenic nitrogen oxides (NOx) is numerically modelled over the densely populated area of the Guadeloupe archipelago under weak trade winds, during a typical case of severe pollution. The main goal is to analyze how microscale resolutions affect air pollution in a small tropical island. Three domain-grid resolutions are selected: 1 km, 333 m and 111 m. The Weather Research and Forecasting model (WRF) is used to produce real nested microscale meteorological fields. The weather outputs then initialize the Lagrangian Particle Dispersion Model (FLEXPART). The forward simulations of a power plant plume showed good ability to reproduce nocturnal peaks recorded by an urban air quality station. The increase in resolution resulted in an improvement of model sensitivity. Nesting to subkilometer grids helped to reduce an overestimation bias, mainly because the LES domains better simulate the turbulent motions governing nocturnal flows. For peaks observed at two air quality stations, the backward sensitivity outputs identified realistic sources of NOx in the area. The increase in resolution produced a sharper inverse plume with a more accurate source area. This study showed the first application of the FLEXPART-WRF model to microscale resolutions. Overall, the coupled WRF-LES-FLEXPART model is useful to simulate pollutant dispersion during a real case of calm wind regime over a complex terrain area. The forward and backward simulation results showed clearly that the
Fu, Yong-Bi
2012-02-01
Many plant disease resistance (R) genes have been cloned, but the potential of utilizing these plant R-gene genomic resources for genetic inferences of plant domestication history remains unexplored. A population-based resequencing analysis of the genomic region near the Rrs2 scald resistance gene was made in 51 accessions of wild and cultivated barley from 41 countries. Fifteen primer pairs were designed to sample the genomic region with a total length of 10 406 bp. More nucleotide diversity was found in wild (π = 0.01846) than cultivated (π = 0.01507) barley samples. Three distinct groups of 29 haplotypes were detected for all 51 samples, and they were well mixed with wild and cultivated barley samples from different countries and regions. The neutrality tests by Tajima's D were not significant. The number of recombination events was 16 in wild barley and 19 in cultivated barley. A coalescence simulation revealed a bottleneck intensity of 1.5 to 2 since barley domestication. Together, these results indicate that the domestication signal in this genomic region was weak, with respect to both human selection and the domestication bottleneck.
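The diversity statistic π reported above is the mean pairwise difference per site across all sequence pairs in the sample. A minimal sketch on a toy alignment (not the barley data):

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Per-site nucleotide diversity (pi): the mean number of differences
    over all sequence pairs, divided by the alignment length."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

# Toy 4 bp alignment of three sequences: pairwise differences are 1, 2, 1.
print(nucleotide_diversity(["ACGT", "ACGA", "TCGA"]))
```

Applied to the wild and cultivated subsamples separately, this is the computation behind the reported π = 0.01846 versus π = 0.01507 contrast.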
Pehrsson, L; Ingman, F; Johansson, S
A general method for evaluating titration data for mixtures of acids, and for acids in mixture with weak bases, is presented. Procedures are given that do not require absolute [H]-data, i.e., relative [H]-data may be used. In most cases a very rough calibration of the electrode system is enough. Further, for simple systems, very approximate values of the stability constants are sufficient. As examples, the titrations of the following are treated in some detail: a mixture of two acids, a diprotic acid, an acid in the presence of its conjugate base, and an ampholyte.
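A titration curve of the kind evaluated here can be generated from the charge balance. A minimal sketch for a single monoprotic weak acid titrated with strong base, neglecting dilution; the Ka and concentrations are illustrative, not from the paper:

```python
import math

def titration_ph(c_acid, c_base_added, ka, kw=1e-14):
    """pH during titration of a monoprotic weak acid HA with strong base,
    found by bisection (in log space) on the charge balance
    [H+] + [Na+] = [OH-] + [A-], with dilution neglected."""

    def charge_balance(h):
        a_minus = ka * c_acid / (ka + h)  # dissociated fraction of the acid
        return h + c_base_added - kw / h - a_minus

    lo, hi = 1e-14, 1.0
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if charge_balance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

# Half-neutralisation of a 0.1 M acetic-like acid (Ka = 1.8e-5): pH ~ pKa.
print(titration_ph(0.1, 0.05, 1.8e-5))
```

Fitting such computed curves to measured ones is what allows the paper's procedures to work with relative [H]-data: shifting the whole curve by a calibration constant does not change its shape.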
Learning from Weak and Noisy Labels for Semantic Segmentation
Lu, Zhiwu
2016-04-08
A weakly supervised semantic segmentation (WSSS) method aims to learn a segmentation model from weak (image-level) as opposed to strong (pixel-level) labels. By avoiding the tedious pixel-level annotation process, it can exploit the unlimited supply of user-tagged images from media-sharing sites such as Flickr for large scale applications. However, these ‘free’ tags/labels are often noisy and few existing works address the problem of learning with both weak and noisy labels. In this work, we cast the WSSS problem into a label noise reduction problem. Specifically, after segmenting each image into a set of superpixels, the weak and potentially noisy image-level labels are propagated to the superpixel level resulting in highly noisy labels; the key to semantic segmentation is thus to identify and correct the superpixel noisy labels. To this end, a novel L1-optimisation based sparse learning model is formulated to directly and explicitly detect noisy labels. To solve the L1-optimisation problem, we further develop an efficient learning algorithm by introducing an intermediate labelling variable. Extensive experiments on three benchmark datasets show that our method yields state-of-the-art results given noise-free labels, whilst significantly outperforming the existing methods when the weak labels are also noisy.
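The L1 penalty at the heart of such a sparse model acts through the soft-thresholding (proximal) operator, which shrinks coefficients and zeroes out small, presumably noisy, entries. A minimal sketch of that operator alone, not the paper's full superpixel optimisation; the threshold value is illustrative:

```python
def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: shrink each coefficient toward
    zero by lam, setting coefficients with |v| <= lam to zero."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0) for v in x]

# Entries with magnitude below the threshold (here 0.1) are treated as
# label noise and suppressed; the rest are shrunk but keep their sign.
print(soft_threshold([0.9, -0.05, 0.4, -0.7, 0.02], 0.1))
```

Iterating this shrinkage inside a gradient scheme (as in ISTA-style solvers) is the standard way L1-regularised problems of this form are solved, which is consistent with the intermediate labelling variable the paper introduces to make the optimisation efficient.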
Lu, Jun-Juan; Wang, Qing; Xie, Li Hua; Zhang, Qiang; Sun, Sheng Hua
2017-01-01
Background In chronic obstructive pulmonary disease (COPD), weakness and muscle mass loss of the quadriceps muscle has been demonstrated to predict survival and mortality rates of patients. Tumor necrosis factor (TNF)-like weak inducer of apoptosis (TWEAK), as a member of the TNF superfamily, has recently been identified as a key regulator of skeletal muscle wasting and metabolic dysfunction. So our aim was to study the role of TWEAK during quadriceps muscle atrophy and fiber-type transformat...
Liu, Bigang; Gong, Shuai; Li, Qiuhui; Chen, Xin; Moore, John; Suraneni, Mahipal V; Badeaux, Mark D; Jeter, Collene R; Shen, Jianjun; Mehmood, Rashid; Fan, Qingxia; Tang, Dean G
2017-08-08
This project was undertaken to address a critical cancer biology question: Is overexpression of the pluripotency molecule Nanog sufficient to initiate tumor development in a somatic tissue? Nanog1 is critical for the self-renewal and pluripotency of ES cells, and its retrotransposed homolog, NanogP8, is preferentially expressed in somatic cancer cells. Our work has shown that shRNA-mediated knockdown of NanogP8 in prostate, breast, and colon cancer cells inhibits tumor regeneration whereas inducible overexpression of NanogP8 promotes cancer stem cell phenotypes and properties. To address the key unanswered question whether tissue-specific overexpression of NanogP8 is sufficient to promote tumor development in vivo, we generated a NanogP8 transgenic mouse model, in which the ARR2PB promoter was used to drive NanogP8 cDNA. Surprisingly, the ARR2PB-NanogP8 transgenic mice were viable, developed normally, and did not form spontaneous tumors for >2 years. Also, both wild type and ARR2PB-NanogP8 transgenic mice responded similarly to castration and regeneration, and castrated ARR2PB-NanogP8 transgenic mice also did not develop tumors. By crossing the ARR2PB-NanogP8 transgenic mice with ARR2PB-Myc (i.e., Hi-Myc) mice, we found that the double transgenic (i.e., ARR2PB-NanogP8; Hi-Myc) mice showed similar tumor incidence and histology to the Hi-Myc mice. Interestingly, however, we observed white dots in the ventral lobes of the double transgenic prostates, which were characterized as overgrown ductules/buds featuring crowded atypical Nanog-expressing luminal cells. Taken together, our present work demonstrates that transgenic overexpression of NanogP8 in the mouse prostate is insufficient to initiate tumorigenesis but weakly promotes tumor development in the Hi-Myc mouse model.
Gogoi-Tiwari, Jully; Williams, Vincent; Waryah, Charlene Babra; Costantino, Paul; Al-Salami, Hani; Mathavan, Sangeetha; Wells, Kelsi; Tiwari, Harish Kumar; Hegde, Nagendra; Isloor, Shrikrishna; Al-Sallami, Hesham; Mukkur, Trilochan
2017-01-01
Biofilm formation by Staphylococcus aureus is an important virulence attribute because of its potential to induce persistent antibiotic resistance, retard phagocytosis and either attenuate or promote inflammation, depending upon the disease syndrome, in vivo. This study was undertaken to evaluate the potential significance of the strength of biofilm formation by clinical bovine mastitis-associated S. aureus in mammary tissue damage by using a mouse mastitis model. Two S. aureus strains of the same capsular phenotype with different biofilm-forming strengths were used to non-invasively infect mammary glands of lactating mice. The biofilm-forming potential of these strains was determined by the tissue culture plate method, ica typing and virulence gene profiling by PCR. Delivery of the infectious dose of S. aureus was directly through the teat lactiferous duct without invasive scraping of the teat surface. Both bacteriological and histological methods were used for analysis of the mammary gland pathology of mice post-infection. Histopathological analysis of the infected mammary glands revealed that mice inoculated with the strong biofilm-forming S. aureus strain produced marked acute mastitic lesions, showing profuse infiltration predominantly with neutrophils, with evidence of necrosis in the affected mammary glands. In contrast, the damage was significantly less severe in mammary glands of mice infected with the weak biofilm-forming S. aureus strain. Although both IL-1β and TNF-α inflammatory biomarkers were produced in infected mice, the level of TNF-α produced was significantly higher in mice infected with the strong biofilm-forming strain. These findings support the use of the mouse mastitis model, and offer an opportunity for the development of novel strategies for reduction of mammary tissue damage, with or without the use of antimicrobials and/or anti-inflammatory compounds, for the treatment of bovine mastitis.
International Nuclear Information System (INIS)
Gaillard, M.K.
1978-08-01
The properties that may help to identify the two additional quark flavors expected to be discovered are reviewed. These properties are lifetime, branching ratios, selection rules, and lepton decay spectra. It is also noted that CP violation may manifest itself more strongly in heavy particle decays than elsewhere, providing a new probe of its origin. The theoretical progress in the understanding of nonleptonic transitions among lighter quarks, nonleptonic K and hyperon decay amplitudes, omega-minus and charmed-particle decay predictions, and lastly the Kobayashi-Maskawa model for the weak coupling of heavy quarks, together with the details of its implications for topology and bottomology, are treated. 48 references
Weak Disposability in Nonparametric Production Analysis with Undesirable Outputs
Kuosmanen, T.K.
2005-01-01
Environmental Economics and Natural Resources Group, Wageningen University, The Netherlands. Weak disposability of outputs means that firms can abate harmful emissions by decreasing the activity level. Modeling weak disposability in nonparametric production analysis has caused some confusion.
Measurement of weak radioactivity
Theodorsson, P.
1996-01-01
This book is intended for scientists engaged in the measurement of weak alpha, beta, and gamma active samples; in health physics, environmental control, nuclear geophysics, tracer work, radiocarbon dating etc. It describes the underlying principles of radiation measurement and the detectors used. It also covers the sources of background, analyzes their effect on the detector and discusses economic ways to reduce the background. The most important types of low-level counting systems and the measurement of some of the more important radioisotopes are described here. In cases where more than one type can be used, the selection of the most suitable system is shown.
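The economics of background reduction sketched above can be made concrete with Currie's classic decision and detection limits. The sketch below assumes Poisson counting statistics and paired sample/background counts of equal duration; the one-sided 95% factor k = 1.645 and the count values are illustrative, not taken from the book.

```python
import math

def critical_level(background_counts, k=1.645):
    """Currie critical level L_C = k*sqrt(2B): the decision threshold above
    which a net count is judged significant (one-sided 95% for k = 1.645),
    for paired sample/background counts of equal duration."""
    return k * math.sqrt(2.0 * background_counts)

def detection_limit(background_counts, k=1.645):
    """Currie detection limit L_D = k^2 + 2*L_C: the smallest true net
    signal that will be detected with probability ~95%."""
    return k ** 2 + 2.0 * critical_level(background_counts)

# In the background-dominated regime, a fourfold background reduction
# roughly halves the detection limit:
ld_full = detection_limit(400.0)     # 400 background counts
ld_quarter = detection_limit(100.0)  # background reduced by 4x
```

This is why the selection of a low-background counting system matters: the detectable activity scales with the square root of the background rather than linearly.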
Directory of Open Access Journals (Sweden)
Ina Schieferdecker
2012-02-01
Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, and for automated test generation. Model-based security testing (MBST) is a relatively new field, dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.
Directory of Open Access Journals (Sweden)
Marta Ziemnicka-Sylwester
2013-05-01
Full Text Available TiB2-based ceramic matrix composites (CMCs) were fabricated using elemental powders of Ti, B and C. The self-propagating high-temperature synthesis (SHS) was carried out for the highly exothermic “in situ” reaction of TiB2 formation and the “tailing” synthesis of boron carbide, characterized by weak exothermicity. Two series of samples were fabricated, one of them prepared with additional milling of the raw materials. The effects of TiB2 volume fraction as well as reactant grain size were investigated. The results revealed that combustion was not successful for a TiB2:B4C molar ratio of 0.96, which corresponds to 40 vol% of TiB2 in the composite; however, the SHS reaction was initiated and self-propagated for an intended TiB2:B4C molar ratio of 2.16 or above. Finally, B13C2 was formed as the matrix phase in each composite. This study demonstrated the significant influence of the grain size of the C precursor on reaction completeness, which affected the microstructure homogeneity and hardness of the investigated composites. The grain size of the Ti powder did not influence the microstructure of the TiB2 grains. The best properties (HV = 25.5 GPa, average grain size of 9 μm, and homogeneous microstructure) were obtained for the material containing 80 vol% of TiB2, fabricated using a graphite precursor of 2 μm.
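The reported correspondence between molar ratio and volume fraction can be checked by converting a TiB2:B4C molar ratio into a TiB2 volume fraction. The molar masses and theoretical densities below are standard handbook values assumed for illustration, not values from the paper.

```python
# Molar masses (g/mol) and theoretical densities (g/cm^3) -- handbook
# values, assumed here for a rough consistency check.
M_TIB2, RHO_TIB2 = 69.54, 4.52
M_B4C, RHO_B4C = 55.25, 2.52

def tib2_vol_fraction(molar_ratio):
    """Volume fraction of TiB2 in a TiB2/B4C composite for a given
    TiB2:B4C molar ratio (per mole of B4C)."""
    v_tib2 = molar_ratio * M_TIB2 / RHO_TIB2
    v_b4c = 1.0 * M_B4C / RHO_B4C
    return v_tib2 / (v_tib2 + v_b4c)
```

With these assumed densities, a molar ratio of 0.96 indeed comes out near 40 vol% TiB2, consistent with the abstract.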
Directory of Open Access Journals (Sweden)
Erik P Ensing
Full Text Available Long-term tracking using global positioning systems (GPS) is widely used to study vertebrate movement ecology, including fine-scale habitat selection as well as large-scale migrations. These data have the potential to provide much more information about the behavior and ecology of wild vertebrates: here we explore the potential of using GPS datasets to assess timing of activity in a chronobiological context. We compared two different populations of deer (Cervus elaphus), one in the Netherlands (red deer), the other in Canada (elk). GPS tracking data were used to calculate the speed of the animals as a measure of activity, to deduce unbiased daily activity rhythms over prolonged periods of time. Speed proved a valid measure of activity, validated by comparing GPS-based activity data with head movements recorded by activity sensors, and the use of GPS locations was effective for generating long-term chronobiological data. Deer showed crepuscular activity rhythms with activity peaks at sunrise (the Netherlands) or after sunrise (Canada) and at the end of civil twilight at dusk. The deer in Canada were mostly diurnal while the deer in the Netherlands were mostly nocturnal. On an annual scale, Canadian deer were more active during the summer months while deer in the Netherlands were more active during winter. We suggest that these differences were mainly driven by human disturbance (on a daily scale) and local weather (on an annual scale). In both populations, the crepuscular activity peaks in the morning and evening showed a stable timing relative to dawn and dusk twilight throughout the year, but marked periods of daily arrhythmicity occurred in the individual records. We suggest that this might indicate that changes in light levels around twilight elicit a direct behavioral response, while the contribution of an internal circadian timing mechanism might be weak or even absent.
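The speed-as-activity proxy used in this study can be reproduced from raw GPS fixes. A minimal sketch (haversine distance over elapsed time, ignoring fix error and the spherical-vs-ellipsoidal difference; the fix format is an assumption):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6_371_000.0):
    """Great-circle distance in metres between two GPS fixes (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def speeds(fixes):
    """fixes: list of (unix_time_s, lat, lon) tuples. Returns the speed in
    m/s between consecutive fixes -- the activity proxy described above."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out
```

Binning these per-interval speeds by time of day relative to sunrise/sunset would then yield the daily activity rhythms discussed above.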
Kuhlbrodt, T.; Jones, C.
2016-02-01
The UK Earth System Model (UKESM) is currently being developed by the UK Met Office and the academic community in the UK. The low-resolution version of UKESM has a nominal grid cell size of 150 km in the atmosphere (Unified Model [UM], N96) and 1° in the ocean (NEMO, ORCA1). In several preliminary test configurations of UKESM-N96-ORCA1, we find a significant cold bias in the northern hemisphere in comparison with HadGEM2 (N96-ORCA025, i.e. 0.25° resolution in the ocean). The sea surface is too cold by more than 2 K, and up to 6 K, in large parts of the North Atlantic and the northwest Pacific. In addition to the cold bias, the maximum AMOC transport (diagnosed below 500 m depth) decreases in all the configurations, displaying values between 11 and 14 Sv after a 50-year run. Transport at 26°N is even smaller and hence too weak in relation to observed values (approx. 18 Sv). The mixed layer is too deep within the North Atlantic Current and the Kuroshio, but too shallow north of these currents. The cold bias extends to a depth of several hundred metres. In the North Atlantic, it is accompanied by a freshening of up to 1.5 psu, compared to present-day climatology, along the path of the North Atlantic Current. A core problem appears to be the cessation of deep-water formation in the Labrador Sea. Remarkably, using earlier versions of NEMO and the UM, the AMOC is stable at around 16 or 17 Sv in the N96-ORCA1 configuration. We report on various strategies to reduce the cold bias and enhance the AMOC transport. Changing various parameters that affect the vertical mixing in NEMO has no significant effect. Modifying the bathymetry to deepen and widen the channels across the Greenland-Iceland-Scotland sill leads to a short-term improvement in AMOC transport, but only for about ten years. Strikingly, in a configuration with longer time steps for the atmosphere model we find a climate that is even colder, but has a more vigorous maximum AMOC transport (14 Sv).
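The AMOC diagnostic quoted above (maximum overturning transport below 500 m) is a simple reduction over the meridional overturning streamfunction. The array layout and the toy streamfunction below are illustrative, not the UKESM diagnostic code:

```python
import numpy as np

def amoc_max(psi, depth, min_depth=500.0):
    """Maximum overturning transport (Sv) below `min_depth` metres.
    psi: 2-D overturning streamfunction [depth, latitude] in Sv;
    depth: 1-D depth axis in metres (positive downward)."""
    mask = depth >= min_depth
    return float(psi[mask, :].max())

# Toy streamfunction: a single overturning cell peaking at 1000 m, 30N.
depth = np.linspace(0.0, 5000.0, 51)
lat = np.linspace(-30.0, 70.0, 101)
z, y = np.meshgrid(depth, lat, indexing="ij")
psi = 17.0 * np.exp(-((z - 1000.0) / 1200.0) ** 2) * np.cos(np.radians(y - 30.0))
```

On this toy field the diagnostic returns the 17 Sv cell maximum, since the peak lies below the 500 m cut-off.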
Model-Based Requirements Management in Gear Systems Design Based On Graph-Based Design Languages
Directory of Open Access Journals (Sweden)
Kevin Holder
2017-10-01
Full Text Available For several decades, a widespread consensus concerning the enormous importance of an in-depth clarification of the specifications of a product has been observed. A weak clarification of specifications is repeatedly listed as a main cause for the failure of product development projects. Requirements, which can be defined as the purpose, goals, constraints, and criteria associated with a product development project, play a central role in the clarification of specifications. The collection of activities which ensure that requirements are identified, documented, maintained, communicated, and traced throughout the life cycle of a system, product, or service can be referred to as “requirements engineering”. These activities can be supported by a collection and combination of strategies, methods, and tools which are appropriate for the clarification of specifications. Numerous publications describe the strategy and the components of requirements management. Furthermore, recent research investigates its industrial application. Simultaneously, promising developments of graph-based design languages for a holistic digital representation of the product life cycle have been presented. Current developments realize graph-based languages through the diagrams of the Unified Modelling Language (UML), and allow the automatic generation and evaluation of multiple product variants. The research presented in this paper seeks to combine the advantages of a conscious requirements management process and graph-based design languages. Consequently, the main objective of this paper is the investigation of a model-based integration of requirements in a product development process by means of graph-based design languages. The research method is based on an in-depth analysis of an exemplary industrial product development, a gear system for so-called “Electrical Multiple Units” (EMU). Important requirements were abstracted from a gear system
Gritti, Fabrice; Guiochon, Georges
2009-01-02
We measured overloaded band profiles for a series of nine compounds (phenol, caffeine, 3-phenyl-1-propanol, 2-phenylbutyric acid, amphetamine, aniline, benzylamine, p-toluidine, and procainamidium chloride) on columns packed with four different C18-bonded packing materials: XTerra-C18, Gemini-C18, Luna-C18(2), and Halo-C18, using buffered methanol-water mobile phases. The pH_s^w of the mobile phase was increased from 2.6 to 11.3. The buffer concentration (either phosphate, acetate, or carbonate buffers) was set constant at values below the maximum concentration of the sample in the band. The influence of the surface chemistry of the packing material on the retention and the shape of the peaks was investigated. Adsorbents having a hybrid inorganic/organic structure tend to give peaks exhibiting moderate or little tailing. The retention and the shape of the band profiles can easily be interpreted at pH_s^w values that are well above or well below the pK_a,s^w of the compound studied. In contrast, the peak shapes in the intermediate pH range (i.e., close to the compound's pK_a,s^w) have rarely been studied. These shapes reveal the complexity of the competitive adsorption behavior of pairs of acid-base conjugated compounds at pH_s^w values close to their pK_a,s^w. They also reveal the role of the buffer capacity on the resulting peak shape. With increasing pH_s^w, the overloaded profiles are first langmuirian (isotherm type I) at low pH_s^w, they become S-shaped (isotherm type II), then anti-langmuirian (isotherm type III), S-shaped again at intermediate pH_s^w, and finally return to a langmuirian shape at high pH_s^w. A new general adsorption isotherm model that takes into account the dissociation equilibrium of conjugated acidic and basic species in the bulk mobile phase accounts for these transient band shapes. An excellent agreement was achieved between experimental profiles and those calculated with a two-site adsorption isotherm model at all pH_s^w. The neutral
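The isotherm types named above map directly onto overloaded peak shapes: a convex-up (type I, Langmuir) isotherm produces tailing peaks, while a convex-down (type III) isotherm produces fronting, anti-langmuirian peaks. A sketch of the two single-component limits (not the paper's full two-site competitive model; parameters are arbitrary):

```python
def langmuir(c, qs, b):
    """Type I (convex-up) isotherm: q = qs*b*c/(1 + b*c) -> tailing peaks."""
    return qs * b * c / (1.0 + b * c)

def anti_langmuir(c, a, b):
    """Type III (convex-down) isotherm: q = a*c/(1 - b*c), valid for
    b*c < 1 -> fronting (anti-langmuirian) peaks."""
    return a * c / (1.0 - b * c)
```

The curvature sign is what matters: for Langmuir the chord lies below the curve (concave), for anti-Langmuir above it (convex), which reverses which side of the band self-sharpens.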
CROWDSOURCING BASED 3D MODELING
Directory of Open Access Journals (Sweden)
A. Somogyi
2016-06-01
Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
leaving students. It is a probabilistic model. In the next part of this article, two more models - 'input/output model' used for production systems or economic studies and a. 'discrete event simulation model' are introduced. Aircraft Performance Model.
Synchronization of weakly coupled canard oscillators
Köksal Ersöz, Elif; Desroches, Mathieu; Krupa, Martin
2017-01-01
Synchronization has been studied extensively in the context of weakly coupled oscillators using the so-called phase response curve (PRC), which measures how the phase of an oscillator is affected by a small perturbation. This approach was based upon the work of Malkin, and it has been extended to relaxation oscillators. Namely, synchronization conditions were established under the weak coupling assumption, leading to a criterion for the existence of synchron...
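The weak-coupling synchronization condition can be illustrated with the standard phase-reduced model of two coupled oscillators, theta_i' = w_i + eps*sin(theta_j - theta_i): the phase difference locks whenever the detuning satisfies |w2 - w1| < 2*eps. A minimal sketch (not the canard-oscillator system of the paper):

```python
import math

def simulate(w1, w2, eps, dt=0.001, steps=200_000):
    """Euler-integrate two weakly coupled phase oscillators and return the
    final phase difference, wrapped to (-pi, pi]."""
    th1, th2 = 0.0, 1.0
    for _ in range(steps):
        d = th2 - th1
        th1 += (w1 + eps * math.sin(d)) * dt
        th2 += (w2 - eps * math.sin(d)) * dt
    d = math.fmod(th2 - th1, 2.0 * math.pi)
    if d > math.pi:
        d -= 2.0 * math.pi
    if d <= -math.pi:
        d += 2.0 * math.pi
    return d

# Detuning 0.1 < 2*eps = 0.4, so the pair phase-locks at the fixed point
# of d' = (w2 - w1) - 2*eps*sin(d), i.e. sin(d*) = 0.25.
locked = simulate(1.0, 1.1, eps=0.2)
```

The locked phase difference converges to asin(0.25), exactly the fixed point predicted by the phase-difference equation.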
Kalantzi, Lida; Persson, Eva; Polentarutti, Britta; Abrahamsson, Bertil; Goumas, Konstantinos; Dressman, Jennifer B; Reppas, Christos
2006-06-01
This study was conducted to assess the relative usefulness of canine intestinal contents and simulated media in the prediction of solubility of two weak bases (dipyridamole and ketoconazole) in fasted and fed human intestinal aspirates that were collected under conditions simulating those in bioavailability/bioequivalence studies. After administration of 250 mL of water or 500 mL of Ensure plus [both containing 10 mg/mL polyethylene glycol (PEG) 4000 as nonabsorbable marker], intestinal aspirates were collected from the fourth part of the duodenum of 12 healthy adults and from the mid-jejunum of four Labradors. Pooled samples were analyzed for PEG, pH, buffer capacity, osmolality, surface tension, pepsin, total carbohydrates, total protein content, bile salts, phospholipids, and neutral lipids. The shake-flask method was used to measure the solubility of dipyridamole and ketoconazole in pooled human and canine intestinal contents and in fasted-state-simulating intestinal fluid (FaSSIF) and fed-state-simulating intestinal fluid (FeSSIF) containing various bile salts and pH-buffering agents. For both compounds, solubility in canine contents may be predictive of human intralumenal solubility in the fasting state but not in the fed state. The poor agreement of results in canine and human aspirates can be attributed to the higher bile salt content in canine bile. Solubility in FaSSIF containing a mixture of bile salts from crude bile predicted satisfactorily the intralumenal solubility of both drugs in the fasted state in humans. Solubility in FeSSIF, regardless of the identity of bile salts or of the buffering species, deviated from intralumenal values in the fed human aspirates by up to 40%. This was attributed to the lack of lipolytic products in FeSSIF, the higher bile salt content of FeSSIF, and the lower pH of FeSSIF. FaSSIF containing a mixture of bile salts from crude bile, and FeSSIF containing lipolytic products and, perhaps, having lower bile salt content but
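The pH dependence underlying these solubility comparisons follows the Henderson-Hasselbalch relation for a monoprotic weak base: the ionized fraction, and hence total solubility, grows as the pH drops below the pKa. The pKa and intrinsic solubility below are placeholder values for illustration, not measurements from the study.

```python
def weak_base_solubility(pH, pKa, s0):
    """Total solubility of a monoprotic weak base,
    S = S0 * (1 + 10**(pKa - pH)), where S0 is the intrinsic solubility
    of the neutral form."""
    return s0 * (1.0 + 10.0 ** (pKa - pH))

# Hypothetical ketoconazole-like numbers (pKa ~ 6.5, S0 ~ 4 ug/mL):
s_at_pka = weak_base_solubility(6.5, 6.5, 4.0)  # at pH = pKa: exactly 2*S0
s_low_ph = weak_base_solubility(5.0, 6.5, 4.0)  # lower pH -> higher solubility
```

This is one reason the lower pH of FeSSIF matters for weak bases, alongside bile salt content and lipolytic products.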
Contextuality under weak assumptions
International Nuclear Information System (INIS)
Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D
2017-01-01
The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove
Weak lensing and cosmological investigation
Acquaviva, V
2005-01-01
In the last few years the scientific community has been dealing with the challenging issue of identifying the dark energy component. We regard weak gravitational lensing as a brand new, and extremely important, tool for cosmological investigation in this field. In fact, the features imprinted on the cosmic microwave background radiation by the lensing from the intervening distribution of matter represent a pretty unbiased estimator, and can thus be used for putting constraints on different dark energy models. This is true in particular for the magnetic-type B-modes of CMB polarization, whose unlensed spectrum at large multipoles (l approximately=1000) is very small even in presence of an amount of gravitational waves as large as currently allowed by the experiments: therefore, on these scales the lensing phenomenon is the only responsible for the observed power, and this signal turns out to be a faithful tracer of the dark energy dynamics. We first recall the formal apparatus of the weak lensing in extended t...
Peng, Guo-Hsuan; Chi, Yu-Chieh; Lin, Gong-Ru
2008-08-18
A novel optical TDM pulsed carrier with tunable mode spacing matching the ITU-T defined DWDM channels is demonstrated, which is generated from an optically injection-mode-locked weak-resonant-cavity Fabry-Perot laser diode (FPLD) with 10% end-facet reflectivity. The FPLD exhibits relatively weak cavity modes and a gain spectral linewidth covering >33.5 nm. The least common multiple of the mode spacings determined by both the weak-resonant-cavity FPLD and the fiber-ring cavity can be tuned by adjusting the length of the fiber-ring cavity or the FPLD temperature to approach the desired 200 GHz DWDM channel spacing of 1.6 nm. At a specific fiber-ring cavity length, such a least-common-multiple selection rule results in 12 lasing modes between 1532 and 1545 nm naturally and a mode-locking pulsewidth of 19 ps broadened by group velocity dispersion among different modes. With an additional intracavity bandpass filter, the operating wavelength can further extend from 1520 to 1553.5 nm. After channel filtering, each selected longitudinal mode gives rise to a shortened pulsewidth of 12 ps due to the reduced group velocity dispersion. By linear dispersion compensating with a 55-m long dispersion compensation fiber (DCF), the pulsewidth can be further compressed to 8 ps with its corresponding peak-to-peak chirp reducing from 9.7 to 4.3 GHz.
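The least-common-multiple selection rule can be illustrated numerically: lasing survives only at frequency spacings that are near-integer multiples of both the FPLD longitudinal-mode spacing and the fiber-ring free spectral range. The cavity lengths, effective indices, and tolerance below are assumptions for illustration, not the paper's device parameters.

```python
C = 299_792_458.0  # vacuum speed of light, m/s

def fpld_mode_spacing_hz(chip_length_m, n_eff=3.6):
    """Longitudinal-mode spacing of a Fabry-Perot laser diode chip."""
    return C / (2.0 * n_eff * chip_length_m)

def ring_fsr_hz(ring_length_m, n_eff=1.468):
    """Free spectral range of a fiber-ring cavity."""
    return C / (n_eff * ring_length_m)

def common_spacing_hz(f1, f2, tol_hz=1e6):
    """Smallest multiple of f1 that is also (within tol_hz) a multiple of
    f2 -- the least-common-multiple rule selecting the surviving modes."""
    f = f1
    while abs(f / f2 - round(f / f2)) * f2 > tol_hz:
        f += f1
    return f
```

Tuning the ring length (and hence its FSR) shifts which common multiple lands on the 200 GHz DWDM grid, which is the tuning mechanism the abstract describes.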
Cai, Jianfeng; Cheng, Lingping; Zhao, Jianchao; Fu, Qing; Jin, Yu; Ke, Yanxiong; Liang, Xinmiao
2017-11-17
A hydrophilic interaction liquid chromatography (HILIC) stationary phase was prepared by a two-step synthesis method, immobilizing polyacrylamide on silica sphere particles. The stationary phase (named PA, 5μm dia) was evaluated using a mixture of carbohydrates in HILIC mode and the column efficiency reached 121,000Nm -1 . The retention behavior of carbohydrates on PA stationary phase was investigated with three different organic solvents (acetonitrile, ethanol and methanol) employed as the weak eluent. The strongest hydrophilicity of PA stationary phase was observed in both acetonitrile and methanol as the weak eluent, when compared with another two amide stationary phases. Attributing to its high hydrophilicity, three oligosaccharides (xylooligosaccharide, fructooligosaccharide and chitooligosaccharides) presented good retention on PA stationary phase using alcohols/water as mobile phase. Finally, PA stationary phase was successfully applied for the purification of galactooligosaccharides and saponins of Paris polyphylla. It is feasible to use safer and cheaper alcohols to replace acetonitrile as the weak eluent for green analysis and purification of polar compounds on PA stationary phase. Copyright © 2017. Published by Elsevier B.V.
Lautze, Nicole C.; Taddeucci, Jacopo; Andronico, Daniele; Cannata, Chiara; Tornetta, Lauretta; Scarlato, Piergiorgio; Houghton, Bruce; Lo Castro, Maria Deborah
2012-01-01
We present results from a semi-automated field-emission scanning electron microscope investigation of basaltic ash from a variety of eruptive processes that occurred at Mount Etna volcano in 2006 and at Stromboli volcano in 2007. From a methodological perspective, the proposed techniques provide relatively fast (about 4 h per sample) information on the size distribution, morphology, and surface chemistry of several hundred ash particles. Particle morphology is characterized by compactness and elongation parameters, and surface chemistry data are shown using ternary plots of the relative abundance of several key elements. The obtained size distributions match well those obtained by an independent technique. The surface chemistry data efficiently characterize the chemical composition, type and abundance of crystals, and dominant alteration phases in the ash samples. From a volcanological perspective, the analyzed samples cover a wide spectrum of relatively minor ash-forming eruptive activity, including weak Hawaiian fountaining at Etna, and lava-sea water interaction, weak Strombolian explosions, vent clearing activity, and a paroxysm during the 2007 eruptive crisis at Stromboli. This study outlines subtle chemical and morphological differences in the ash deposited at different locations during the Etna event, and variable alteration patterns in the surface chemistry of the Stromboli samples specific to each eruptive activity. Overall, we show this method to be effective in quantifying the main features of volcanic ash particles from the relatively weak - and yet frequent - explosive activity occurring at basaltic volcanoes.
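The compactness and elongation parameters used to characterize particle morphology can be computed from a binary particle mask; the definitions below (isoperimetric compactness 4πA/P² and the ratio of principal-axis standard deviations) are common choices and may differ in detail from the study's exact parameters.

```python
import numpy as np

def shape_params(mask):
    """Compactness (4*pi*A/P^2, ~1 for a circle) and elongation (ratio of
    principal-axis standard deviations) of a binary particle mask."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())  # 4-connected border pixels
    cov = np.cov(np.vstack([xs, ys]).astype(float))
    ev = np.sort(np.linalg.eigvalsh(cov))
    return 4.0 * np.pi * area / perimeter ** 2, float(np.sqrt(ev[1] / ev[0]))

# Synthetic particles: a disk and a 2:1 ellipse on a 201x201 grid.
yy, xx = np.mgrid[-100:101, -100:101]
disk = xx ** 2 + yy ** 2 <= 40 ** 2
ellipse = (xx / 80.0) ** 2 + (yy / 40.0) ** 2 <= 1.0
```

On the synthetic masks the elongation comes out near 1 for the disk and near 2 for the 2:1 ellipse, as expected from the second moments.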
Gao, Yangde; Karimi, Mohammad; Kudreyko, Aleksey A; Song, Wanqing
2017-12-30
In marine systems, engines are the most important part of a ship, and within engines the probability of bearing faults is the highest, so in bearing vibration analysis, early weak-fault detection is very important for long-term monitoring. In this paper, we propose a novel method for early weak-fault diagnosis of bearings. First, we improve the alternating direction method of multipliers (ADMM): the structure of the traditional ADMM is changed, and the improved ADMM is then applied to compressed sensing (CS) theory, which realizes sparse optimization of the bearing signal over a large amount of data. After the sparse signal is reconstructed, the calculated signal is restored with minimum entropy deconvolution (MED) to obtain clear fault information. Finally, we adopt the sample entropy, the morphological mean square amplitude, and the root mean square (RMS) to perform early fault diagnosis of the bearing, and we plot a boxplot comparison chart to find the best of the three indicators. The experimental results prove that the proposed method can effectively identify early weak faults. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
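The sparse-optimization step can be sketched with a textbook ADMM solver for the LASSO problem min_x 0.5*||Ax - b||² + lam*||x||₁ (the standard splitting, not the paper's improved variant; the measurement matrix and test signal are synthetic):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Standard ADMM for the LASSO: x-update solves a ridge system,
    z-update is soft-thresholding, u is the scaled dual variable."""
    m, n = A.shape
    z = np.zeros(n)
    u = np.zeros(n)
    L = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u += x - z
    return z

# Recover a 3-sparse signal from 80 random measurements of 200 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, -2.0, 1.5]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

With noiseless measurements and a well-conditioned Gaussian matrix, the recovered support matches the true one, which is the property the compressed-sensing stage relies on.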
Political corruption and weak state
Directory of Open Access Journals (Sweden)
Stojiljković Zoran
2013-01-01
Full Text Available The author starts from the hypothesis that it is essential for the countries of the region to critically assess the synergy established between systemic, political corruption and a selectively weak, “devious” nature of the state. Moreover, the key dilemma is whether the expanded practice of political rent seeking supports the conclusion that the root of all corruption is in the very existence of the state, particularly in excessive, selective and deforming state interventions and benefits that create a fertile ground for corruption. The author argues that the destructive combination of weak government and rampant political corruption rests on scattered state intervention, while a cartel of parties rules through the executive branch, subordinating parliament, the judiciary and the police. Corrupt exchange takes place in the absence of a strong institutional framework and precise rules of the political and electoral game, including control of public finances and effective political and anti-monopoly legislation and practice. A way out of the current situation can be seen in the realization of an effective anti-corruption strategy that integrates preventive and repressive measures and leads to the establishment of principles of good governance. [Project of the Ministry of Science of the Republic of Serbia, no. 179076: Political Identity of Serbia in the Regional and Global Context]
Gauge theories of the weak interactions
International Nuclear Information System (INIS)
Quinn, H.
1978-08-01
Two lectures are presented on the Weinberg-Salam-Glashow-Iliopoulos-Maiani gauge theory for weak interactions. An attempt is made to give some impressions of the generality of this model, how it was developed, variations found in the literature, and the status of the standard model. 21 references
Sensor-based interior modeling
International Nuclear Information System (INIS)
Herbert, M.; Hoffman, R.; Johnson, A.; Osborn, J.
1995-01-01
Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation, occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation.
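For a single planar patch, the surface extraction described above reduces to a least-squares plane fit. A minimal SVD-based sketch on synthetic "rangefinder" points (the plane coefficients and noise level are made up for illustration):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points (one point per row).
    Returns (centroid, unit normal); the normal is the right singular
    vector belonging to the smallest singular value."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

# Noisy samples of the plane z = 0.1x + 0.2y + 3.
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 3.0 + rng.normal(0.0, 0.01, 500)
pts = np.column_stack([xy, z])
c, n = fit_plane(pts)
```

The recovered normal agrees (up to sign) with the true plane normal, which is the kind of primitive a motion planner or safeguarding algorithm can then consume.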
Qubit state tomography in a superconducting circuit via weak measurements
Qin, Lupei; Xu, Luting; Feng, Wei; Li, Xin-Qi
2017-03-01
In this work we present a study on a new scheme for measuring the qubit state in a circuit quantum electrodynamics (QED) system, based on weak measurement and the concept of weak value. To be applicable under generic parameter conditions, our formulation and analysis are carried out for finite-strength weak measurement, and in particular beyond the bad-cavity and weak-response limits. The proposed study is accessible to present state-of-the-art circuit QED experiments.
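The weak value at the heart of such schemes is a two-line computation, A_w = ⟨post|A|pre⟩/⟨post|pre⟩. The qubit example below (observable σ_z with nearly orthogonal pre- and post-selected states) is generic, not the paper's circuit-QED setup; it shows the characteristic amplification of the weak value beyond the eigenvalue range.

```python
import numpy as np

def weak_value(A, pre, post):
    """Weak value A_w = <post|A|pre> / <post|pre>; it can be complex and
    lie outside the eigenvalue range of A."""
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
theta = 1.4  # nearly orthogonal selection as theta -> pi/2... amplifies A_w
pre = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
post = np.array([np.cos(theta / 2), -np.sin(theta / 2)], dtype=complex)
wv = weak_value(sz, pre, post)  # analytically 1/cos(theta) > 1
```

Here the eigenvalues of σ_z are ±1, yet the weak value is 1/cos(θ) ≈ 5.9: the overlap ⟨post|pre⟩ = cos θ in the denominator drives the amplification.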
Differential geometry based multiscale models.
Wei, Guo-Wei
2010-08-01
Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are
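As a concrete toy for the generalized Poisson-Boltzmann equations mentioned above, here is a finite-difference solve of the linearized 1-D problem φ'' = κ²φ with Dirichlet data, checked against the analytic screened decay φ0·exp(-κx). The geometry and parameters are illustrative only.

```python
import numpy as np

def linear_pb_1d(kappa, phi0, L=5.0, n=400):
    """Finite-difference solve of phi'' = kappa^2 * phi on [0, L] with
    phi(0) = phi0 and phi(L) = 0: a 1-D linearized Poisson-Boltzmann toy.
    Returns the interior grid x and the potential phi on it."""
    h = L / (n + 1)
    main = np.full(n, -2.0 - (kappa * h) ** 2)
    A = np.diag(main) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    rhs = np.zeros(n)
    rhs[0] = -phi0  # known boundary value phi(0) moved to the right-hand side
    x = np.linspace(h, L - h, n)
    return x, np.linalg.solve(A, rhs)

x, phi = linear_pb_1d(kappa=2.0, phi0=1.0)
```

With κL = 10 the far boundary is effectively at infinity, so the numerical solution tracks the semi-infinite analytic decay exp(-κx) to within the O(h²) truncation error.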
Differential Geometry Based Multiscale Models
Wei, Guo-Wei
2010-01-01
Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier–Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson–Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson–Nernst–Planck equations that
Design Of Computer Based Test Using The Unified Modeling Language
Tedyyana, Agus; Danuri; Lidyawati
2017-12-01
Admission selection at Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN), and the independent test (UM-Polbeng) was conducted using paper-based tests (PBT). The paper-based test model has some weaknesses: it wastes a great deal of paper, questions can leak to the public, and test-result data can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams, and sequence diagrams. During the design of the application, it is important to pay attention to how the test questions are password-protected before they are shown, through an encryption and decryption process; the RSA cryptography algorithm was used for this purpose. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used for the computer-based test application was a client-server model over a Local Area Network (LAN). The result of the design was a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
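The question-randomization step named in the abstract, the Fisher-Yates shuffle, can be sketched as follows; the `question_bank` contents are illustrative placeholders, not the actual item bank:

```python
import random

def fisher_yates_shuffle(items, rng=random):
    """Return a uniformly random permutation of `items` (Fisher-Yates)."""
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randint(0, i)      # pick a partner from the unshuffled prefix a[0..i]
        a[i], a[j] = a[j], a[i]    # swap it into the settled suffix
    return a

question_bank = [f"Q{n}" for n in range(1, 11)]
random.seed(42)  # fixed seed only for reproducibility of this sketch
exam_order = fisher_yates_shuffle(question_bank)
```

Iterating from the end and swapping with a uniformly chosen earlier index is what makes every permutation equally likely, which is the property wanted when each candidate must receive a differently ordered paper.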
Energy Technology Data Exchange (ETDEWEB)
Godel, G.; Gold, N.; Hasse, J.; Bock, J.; Halbritter, J. [Phys. Inst., Karlsruhe Univ. (Germany)
1994-10-01
The granular structure dominates the RF properties of the material. Below T{sub c} the surface resistance at 11.27 GHz of Bi{sub 2}Sr{sub 2}CaCu{sub 2}O{sub 8} drops initially more slowly than BCS theory predicts. Below T{sub c}/2 it shows a linear temperature dependence and a quadratic frequency and field dependence with an RF critical magnetic field of <130 A m{sup -1} at 4.2 K. This behaviour is attributed to the existence of weak superconducting regions between crystallites, which provide a strikingly good description. The weak links with a boundary resistance R{sub bn} have to be regarded as Josephson junctions with reduced superconducting properties and normal conducting leakage currents. We conclude that the weak-link model gives a consistent description of the DC and microwave properties not only in the magnitude of the penetration depth and surface resistance but also in their temperature, field and frequency dependence. Conversely, it is possible to obtain from it quantitative information about weak links in the superconductor Bi{sub 2}Sr{sub 2}CaCu{sub 2}O{sub 8}. (author)
Attending to weak signals: the leader's challenge.
Kerfoot, Karlene
2005-12-01
Halverson and Isham (2003) quote sources that report that the accidental death rate of simply being in a hospital is " ... four hundred times more likely than your risk of death from traveling by train, forty times higher than driving a car, and twenty times higher than flying in a commercial aircraft" (p. 13). High-reliability organizations such as nuclear power plants and aircraft carriers have been pioneers in the business of recognizing weak signals. Weick and Sutcliffe (2001) note that high-reliability organizations distinguish themselves from others because of their mindfulness, which enables them to see the significance of weak signals and to give strong interventions to weak signals. To act mindfully, these organizations have an underlying mental model of continually updating, anticipating, and focusing on the possibility of failure using the intelligence that weak signals provide. Much of what happens in health care is unexpected. However, with a culture that continually looks for weak signals, and intervenes and rescues when these signals are detected, the unexpected happens less often. This is the epitome of how leaders can build a culture of safety that focuses on recognizing weak signals to manage the unforeseen.
Observation-Based Modeling for Model-Based Testing
Kanstrén, T.; Piel, E.; Gross, H.G.
2009-01-01
One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through
Weak interactions of the b quark
International Nuclear Information System (INIS)
Branco, G.C.; Mohapatra, R.N.
1978-01-01
In weak-interaction models with two charged W bosons of comparable mass, there exists a novel possibility for the weak interactions of the b quark, in which the (ūb)_R current occurs with maximal strength. It is noted that multimuon production in e⁺e⁻ annihilation at Q² ≳ (12 GeV)² will distinguish this scheme from the conventional one. We also present a Higgs system that leads naturally to this type of coupling, in a class of gauge models
Image based 3D city modeling : Comparative study
Directory of Open Access Journals (Sweden)
S. P. Singh
2014-06-01
Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used for generating virtual 3D city models: the first is sketch-based modeling, the second is procedural-grammar-based modeling, the third is close-range-photogrammetry-based modeling, and the fourth is based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages take different approaches and methods to image-based 3D city modeling. The literature shows that, to date, no comprehensive comparative study of this type is available for creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. The study area for this research work is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques, with some comments on what can and cannot be done with each package. The study concludes that each package has some advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good
Pereira, Maria E. S.; Soares-Santos, Marcelle; Makler, Martin; Annis, James; Lin, Huan; Palmese, Antonella; Vitorelli, André Z.; Welch, Brian; Caminha, Gabriel B.; Erben, Thomas; Moraes, Bruno; Shan, Huanyuan
2018-02-01
We present the first weak lensing calibration of μ⋆, a new galaxy cluster mass proxy corresponding to the total stellar mass of red and blue members, in two cluster samples selected from the SDSS Stripe 82 data: 230 red-sequence Matched-filter Probabilistic Percolation (redMaPPer) clusters at redshift 0.1 ≤ z proxy for VT clusters. Catalogues including μ⋆ measurements will enable its use in studies of galaxy evolution in clusters and cluster cosmology.
International Nuclear Information System (INIS)
Thomas, M.; Blank, H.; Wong, K.C.; Nguyen, C.; Kroemer, H.; Hu, E.L.
1996-01-01
InAs-AlSb quantum wells contacted with periodic gratings of superconducting Nb electrodes show Josephson-junction characteristics at low temperatures. When a nonzero resistance is reestablished by a weak magnetic field, the resistance shows a strong component periodic in the magnetic field. At fields above ∼300μT, the oscillation period corresponds to one flux quantum per grating cell; but in wide arrays (≥40μm), a frequency doubling takes place at low fields, indicating the formation of a staggered vortex superlattice at twice the lithographic period. copyright 1996 The American Physical Society
Individual based and mean-field modeling of direct aggregation
Burger, Martin
2013-10-01
We introduce two models of biological aggregation, based on randomly moving particles with individual stochasticity depending on the perceived average population density in their neighborhood. In the first-order model the location of each individual is subject to a density-dependent random walk, while in the second-order model the density-dependent random walk acts on the velocity variable, together with a density-dependent damping term. The main novelty of our models is that we do not assume any explicit aggregative force acting on the individuals; instead, aggregation is obtained exclusively by reducing the individual stochasticity in response to higher perceived density. We formally derive the corresponding mean-field limits, leading to nonlocal degenerate diffusions. Then, we carry out the mathematical analysis of the first-order model, in particular, we prove the existence of weak solutions and show that it allows for measure-valued steady states. We also perform linear stability analysis and identify conditions for pattern formation. Moreover, we discuss the role of the nonlocality for well-posedness of the first-order model. Finally, we present results of numerical simulations for both the first- and second-order model on the individual-based and continuum levels of description. 2012 Elsevier B.V. All rights reserved.
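A minimal sketch of the first-order model described above: particles perform a random walk whose noise amplitude decreases with perceived local density, with no attractive force. The specific motility law `sigma0 * (1 - density)`, the neighborhood radius, and all parameter values are illustrative assumptions, not the authors' exact formulation:

```python
import math
import random

random.seed(1)

def local_density(xs, x, radius=0.5):
    """Perceived density: fraction of the population within `radius` of x."""
    return sum(abs(y - x) < radius for y in xs) / len(xs)

def step(xs, dt=0.05, sigma0=1.0):
    """One Euler step of the first-order model: a density-dependent random walk.
    The noise amplitude shrinks as perceived local density grows, so crowded
    regions become 'sticky' without any explicit aggregative force."""
    out = []
    for x in xs:
        sigma = sigma0 * (1.0 - local_density(xs, x))  # assumed motility law
        out.append(x + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    return out

xs = [random.uniform(-3.0, 3.0) for _ in range(100)]
for _ in range(200):
    xs = step(xs)
```

Because individuals in dense neighborhoods move less, random fluctuations that bring particles together tend to persist, which is precisely the aggregation-by-reduced-stochasticity mechanism the abstract describes.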
Robins, Robert E.; Delisi, Donald P.
2008-01-01
In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-Based models were scored using data from three airports. This report is a guide to the APA-based models.
Joint and weak measurements on qubit systems
International Nuclear Information System (INIS)
O'Brien, J.L.; Pryde, G.J.; Bartlett, S.D.; Ralph, T.C.; Wiseman, H.M.; White, A.G.
2005-01-01
Full text: Along with the well-known concept of projective measurements, quantum mechanics allows various kinds of generalized measurement operators. Two important examples are: joint measurements on two or more quantum systems that cannot be achieved by local operations and classical communication (LOCC); and weak measurements that obtain less information about a system than does a projective measurement, but with correspondingly less disturbance. Unlike the result of a strong measurement, the average value of a weak measurement of an observable (its weak value), when followed by projective postselection in a complementary basis, can lie outside the range of eigenvalues. This discrepancy is not observed in analogous classical measurements. Weak values aid the resolution of quantum paradoxes, and can simplify analysis of weakly coupled systems. We use a generalized measurement device to measure the weak value of a photon's polarization in the horizontal/vertical basis (the Stokes operator S1 = |H><H| - |V><V|), obtaining weak values up to 47, outside the usual range -1 ≤ S1 ≤ 1. Unlike previous observations of weak values, our measurement works by entangling two separate systems, and thus can only be described by quantum theory, not a classical wave theory. Also, we have used a two-qubit joint measurement based on a controlled-NOT gate by which certain two-qubit unentangled states can be more reliably distinguished than by using LOCC. We quantify this using a payoff function, for which the optimal LOCC measurement attains 2/3, and our experimental measurement attains 0.72 ± 0.02, close to the global optimum of 3/4. (author)
Inversion assuming weak scattering
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2013-01-01
due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...
Zilberberg, Oded; Romito, Alessandro; Gefen, Yuval
2013-01-01
Weak value (WV) is a quantum mechanical measurement protocol, proposed by Aharonov, Albert, and Vaidman. It consists of a weak measurement, which is weighed in, conditional on the outcome of a later, strong measurement. Here we define another two-step measurement protocol, the null weak value (NWV), and point out its advantages as compared to the WV. We present two alternative derivations of NWVs and compare them to the corresponding derivations of WVs.
Weak Measurement and Quantum Correlation
Indian Academy of Sciences (India)
Arun Kumar Pati
The concept of weak measurements was first introduced by Aharonov et al. [1]. The quantum state is preselected in |ψi〉 and allowed to interact weakly with the apparatus. The measurement strength can be tuned, and for "small g(t)" it is called a 'weak measurement'. With postselection in |ψf 〉, the apparatus state is shifted by an ...
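The shift of the apparatus state referred to above is, in the standard Aharonov-Albert-Vaidman formalism, governed by the weak value of the measured observable; as a reference sketch (with preselected state |ψi〉, postselected state |ψf〉, observable A, and coupling strength g, as in the text; for a real pointer variable the shift is proportional to the real part):

```latex
A_w \;=\; \frac{\langle \psi_f | \hat{A} | \psi_i \rangle}{\langle \psi_f | \psi_i \rangle},
\qquad
\text{pointer shift} \;\propto\; g \,\operatorname{Re} A_w .
```

Because the denominator ⟨ψf|ψi⟩ can be made arbitrarily small by choosing nearly orthogonal pre- and postselected states, A_w can lie far outside the eigenvalue range of A, which is the anomalous amplification exploited in the weak-value experiments cited in these records.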
Constraint-Based Model Weaving
White, Jules; Gray, Jeff; Schmidt, Douglas C.
Aspect-oriented modeling (AOM) is a promising technique for untangling the concerns of complex enterprise software systems. AOM decomposes the crosscutting concerns of a model into separate models that can be woven together to form a composite solution model. In many domains, such as multi-tiered e-commerce web applications, separating concerns is much easier than deducing the proper way to weave the concerns back together into a solution model. For example, modeling the types and sizes of caches that can be leveraged by a Web application is much easier than deducing the optimal way to weave the caches back into the solution architecture to achieve high system throughput.
Weak openness and almost openness
Directory of Open Access Journals (Sweden)
David A. Rose
1984-01-01
Full Text Available Weak openness and almost openness for arbitrary functions between topological spaces are defined as duals to the weak continuity of Levine and the almost continuity of Husain respectively. Independence of these two openness conditions is noted and comparison is made between these and the almost openness of Singal and Singal. Some results dual to those known for weak continuity and almost continuity are obtained. Nearly almost openness is defined and used to obtain an improved link from weak continuity to almost continuity.
Weak Weak Lensing : How Accurately Can Small Shears be Measured?
Kuijken, K.
2006-01-01
Abstract: Now that weak lensing signals on the order of a percent are actively being searched for (cosmic shear, galaxy-galaxy lensing, large radii in clusters...) it is important to investigate how accurately weak shears can be determined. Many systematic effects are present, and need to be
Gravitational Wave Detection via Weak Measurements Amplification
Hu, Meng-Jun; Zhang, Yong-Sheng
2017-01-01
A universal amplification scheme for ultra-small phases based on weak measurements is given, and a weak-measurements-amplification-based laser interferometer gravitational-wave observatory (WMA-LIGO) is suggested. The WMA-LIGO has the potential to amplify the ultra-small phase signal by at least three orders of magnitude ($10^{3}$), so that the sensitivity and bandwidth of the gravitational-wave detector can be further improved. Our results not only shed a new light on quantum measurement but also open a ne...
On agent-based modeling and computational social science.
Conte, Rosaria; Paolucci, Mario
2014-01-01
In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS.
GIS-based hydrological model upstream
African Journals Online (AJOL)
eobe
Owing to its effectiveness in terms of data representation and the quality of modeling results, hydrological models are usually embedded in a Geographical Information System (GIS) environment to simulate various parameters attributed to a selected catchment. GIS is a complex technology highly suitable for spatial-temporal data analyses and information extraction.
Dark-Matter Particles without Weak-Scale Masses or Weak Interactions
International Nuclear Information System (INIS)
Feng, Jonathan L.; Kumar, Jason
2008-01-01
We propose that dark matter is composed of particles that naturally have the correct thermal relic density, but have neither weak-scale masses nor weak interactions. These models emerge naturally from gauge-mediated supersymmetry breaking, where they elegantly solve the dark-matter problem. The framework accommodates single or multiple component dark matter, dark-matter masses from 10 MeV to 10 TeV, and interaction strengths from gravitational to strong. These candidates enhance many direct and indirect signals relative to weakly interacting massive particles and have qualitatively new implications for dark-matter searches and cosmological implications for colliders
Rule-based decision making model
International Nuclear Information System (INIS)
Sirola, Miki
1998-01-01
A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on e.g. value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
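The Bayes-theorem recalculation of component availabilities mentioned above can be illustrated with a toy update; the states, prior, and likelihood values below are hypothetical, chosen only to show the mechanics:

```python
def bayes_update(prior, likelihood):
    """Recompute P(state | evidence) from a prior over states and the
    likelihood of the observed evidence under each state (Bayes' theorem)."""
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnormalized.values())          # total probability of the evidence
    return {s: p / z for s, p in unnormalized.items()}

# Hypothetical component: 95% prior availability. A failed self-test is
# observed, which is far more likely when the component is actually down.
prior = {"available": 0.95, "failed": 0.05}
likelihood = {"available": 0.10, "failed": 0.90}   # P(test fails | state)
post = bayes_update(prior, likelihood)
# The posterior probability of failure rises from 5% to roughly 32%,
# and further failed tests would push it higher still.
```

Each new piece of evidence reuses the previous posterior as the next prior, which is exactly the "recalculate when new information is obtained" behavior the abstract describes.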
Energy Technology Data Exchange (ETDEWEB)
Olsson, Marcus; Nordman, Roger; Taherzadeh, Mohammad
2011-07-01
Plants for bioethanol production have been planned in several cities in Sweden, including Boraas. This report provides answers to general questions regarding how such a facility's energy demand is affected by the external integration with a heat and power plant and the internal energy integration between process units. Heat integration of a bioethanol plant means that energy is reused as much as is technically possible; this sets a practical minimum level for the energy demand of the plant. In the study, ethanol production from cellulose has been simulated using Aspen Plus. Weak acid hydrolysis and enzymatic hydrolysis have been simulated, each with 50,000 and 100,000 tonnes of ethanol per year, resulting in four simulation cases. In all cases, heat integration is evaluated using pinch analysis. The steam in the ethanol plant has been covered by steam from a heat and power plant similar to that found today in Boraas. It is important to note that the energy quotas reported here include energy use for upgrading the residual products. This leads to lower energy quotas than would be the case if the upgrading of residuals were allocated outside of the ethanol production. The conclusions from the project are: - The steam demand of the ethanol plant leads to a reduction in both the electricity and heat production of the heat and power plant. For the weak acid hydrolysis, the electricity loss is relatively high, 26-98%, which will affect the revenue significantly. The loss of electricity production is lower for the enzymatic process: 11-47%. - The difference in decreased electricity between the theoretical case of heating the raw material and the two alternative heating cases is about a factor of two, so the design of the heating of raw material is extremely important. - The reduced heat output of the power plant can, in most cases, be balanced by the surplus heat from the ethanol plant, but to completely balance the shortage, heat over 100 deg C must be used
DEFF Research Database (Denmark)
Luo, B.; Brandt, W. N.; Alexander, D. M.
2013-01-01
likely explanation. We also discuss the intrinsic X-ray weakness scenario based on a coronal-quenching model relevant to the shielding gas and disk wind of BAL quasars. Motivated by our NuSTAR results, we perform a Chandra stacking analysis with the Large Bright Quasar Survey BAL quasar sample and place...
(O') Lee, Dominic J.
2018-02-01
At present, there have been suggested two types of physical mechanism that may facilitate preferential pairing between DNA molecules, with identical or similar base pair texts, without separation of base pairs. One mechanism solely relies on base pair specific patterns of helix distortion being the same on the two molecules, discussed extensively in the past. The other mechanism proposes that there are preferential interactions between base pairs of the same composition. We introduce a model, built on this second mechanism, where both thermal stretching and twisting fluctuations are included, as well as the base pair specific helix distortions. Firstly, we consider an approximation for weak pairing interactions, or short molecules. This yields a dependence of the energy on the square root of the molecular length, which could explain recent experimental data. However, analysis suggests that this approximation is no longer valid at large DNA lengths. In a second approximation, for long molecules, we define two adaptation lengths for twisting and stretching, over which the pairing interaction can limit the accumulation of helix disorder. When the pairing interaction is sufficiently strong, both adaptation lengths are finite; however, as we reduce pairing strength, the stretching adaptation length remains finite but the torsional one becomes infinite. This second state persists to arbitrarily weak values of the pairing strength; suggesting that, if the molecules are long enough, the pairing energy scales as length. To probe differences between the two pairing mechanisms, we also construct a model of similar form. However, now, pairing between identical sequences solely relies on the intrinsic helix distortion patterns. Between the two models, we see interesting qualitative differences. We discuss our findings, and suggest new work to distinguish between the two mechanisms.
QCD anomalies in hadronic weak decays
International Nuclear Information System (INIS)
Gerard, J.-M.; Trine, S.
2004-01-01
We consider the flavor-changing operators associated with the strong axial and trace anomalies. Their short-distance generation through penguinlike diagrams is obtained within the QCD external field formalism. Standard-model operator evolution exhibits a suppression of anomalous effects in K and B hadronic weak decays. A genuine set of dimension-eight ΔS=1 operators is also displayed
Efficient bootstrap with weakly dependent processes
Bravo, Francesco; Crudu, Federico
2012-01-01
The efficient bootstrap methodology is developed for overidentified moment conditions models with weakly dependent observations. The resulting bootstrap procedure is shown to be asymptotically valid and can be used to approximate the distributions of t-statistics, the J-statistic for overidentifying
AN AUTOMATIC FEATURE BASED MODEL FOR CELL SEGMENTATION FROM CONFOCAL MICROSCOPY VOLUMES
Delibaltov, Diana; Ghosh, Pratim; Veeman, Michael; Smith, William; Manjunath, B.S.
2011-01-01
We present a model for the automated segmentation of cells from confocal microscopy volumes of biological samples. The segmentation task for these images is exceptionally challenging due to weak boundaries and varying intensity during the imaging process. To tackle this, a two step pruning process based on the Fast Marching Method is first applied to obtain an over-segmented image. This is followed by a merging step based on an effective feature representation. The algorithm is applied on two...
Weak form factors of beauty baryons
International Nuclear Information System (INIS)
Ivanov, M.A.; Lyubovitskij, V.E.
1992-01-01
A full analysis of semileptonic decays of beauty baryons with J^P = 1/2⁺ and J^P = 3/2⁺ into charmed ones within the Quark Confinement Model is reported. Weak form factors and decay rates are calculated. The heavy-quark limit m_Q → ∞ (Isgur-Wise symmetry) is also examined. The weak heavy-baryon form factors in the Isgur-Wise limit and the 1/m_Q corrections to them are computed. The Ademollo-Gatto theorem for the spin-flavour symmetry of heavy quarks is checked. 33 refs.; 1 fig.; 9 tabs
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Most systems involve parameters and variables, which are random variables due to uncertainties. Probabilistic methods are powerful in modelling such systems. In this second part, we describe probabilistic models and Monte Carlo simulation along with 'classical' matrix methods and differential equations as most real ...
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
A familiar example of a feedback loop is the business model in which part of the output or profit is fed back as input or additional capital - for instance, a company may choose to reinvest 10% of the profit for expansion of the business. Such simple models, like ..... would help scientists, engineers and managers towards better.
Model based design introduction: modeling game controllers to microprocessor architectures
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model-based design. Model-based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Its philosophy is to solve a problem one step at a time; the approach can be compared to a series of steps that converge to a solution. A block-diagram simulation tool allows a design to be simulated with real-world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded, the digital control algorithm can be simulated with the real-world sensor data, and the output from the simulated digital control system can then be compared to the old analog-based control system. Model-based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; progress in model-based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model-based design to iterate the design towards a working system. We also describe a model-based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.
The Culture Based Model: Constructing a Model of Culture
Young, Patricia A.
2008-01-01
Recent trends reveal that models of culture aid in mapping the design and analysis of information and communication technologies. Therefore, models of culture are powerful tools to guide the building of instructional products and services. This research examines the construction of the culture based model (CBM), a model of culture that evolved…
Weakly supervised classification in high energy physics
Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; Schwartzman, Ariel
2017-05-01
As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
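A toy illustration of learning from class proportions alone, in the spirit of the approach described above: batches carry only their class fractions, and a 1-D logistic model is fit by matching each batch's mean predicted probability to its known proportion. The Gaussian features, the proportion-matching squared-error objective, and all parameters are stand-ins for illustration, not the authors' actual quark/gluon setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_batch(n, frac_signal):
    """1-D toy features: 'signal' events centered at +1, 'background' at -1."""
    labels = rng.random(n) < frac_signal
    return np.where(labels, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

# Batches for which only the class proportion (not per-event labels) is known.
batches = [(make_batch(2000, f), f) for f in (0.2, 0.5, 0.8)]

# Gradient descent on a squared-error loss between each batch's mean
# predicted probability and its known class proportion.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    gw = gb = 0.0
    for x, f in batches:
        p = sigmoid(w * x + b)
        err = p.mean() - f                    # proportion mismatch for this batch
        gw += err * (p * (1 - p) * x).mean()  # chain rule through the sigmoid
        gb += err * (p * (1 - p)).mean()
    w -= lr * gw
    b -= lr * gb
```

Even though no individual event is ever labeled, the fitted weight points in the signal direction, and the model's mean outputs order the batches by their true signal fractions, which is the essential idea behind training on class proportions.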
Weak measurements in non-Hermitian systems
Matzkin, A.
2012-11-01
‘Weak measurements’—involving a weak unitary interaction between a quantum system and a meter followed by a projective measurement—are investigated when the system has a non-Hermitian Hamiltonian. We show in particular how the standard definition of the ‘weak value’ of an observable must be modified. These studies are undertaken in the context of bound-state scattering theory, a non-Hermitian formalism for which the involved Hilbert spaces are unambiguously defined and the metric operators can be explicitly computed. Numerical examples are given for a model system. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Quantum physics with non-Hermitian operators’.
Weak layer fracture: facets and depth hoar
Directory of Open Access Journals (Sweden)
I. Reiweger
2013-09-01
Understanding failure initiation within weak snow layers is essential for modeling and predicting dry-snow slab avalanches. We therefore performed laboratory experiments with snow samples containing a weak layer consisting of either faceted crystals or depth hoar. During these experiments the samples were loaded with different loading rates and at various tilt angles until fracture. The strength of the samples decreased with increasing loading rate and increasing tilt angle. Additionally, we took pictures of the side of four samples with a high-speed video camera and calculated the displacement using a particle image velocimetry (PIV) algorithm. The fracture process within the weak layer could thus be observed in detail. Catastrophic failure started due to a shear fracture just above the interface between the depth hoar layer and the underlying crust.

Ximénez, Carmen
2015-01-01
This article extends previous research on the recovery of weak factor loadings in confirmatory factor analysis (CFA) by exploring the effects of adding the mean structure. This issue has not been examined in previous research. This study is based on the framework of Yung and Bentler (1999) and aims to examine the conditions that affect the recovery of weak factor loadings when the model includes the mean structure, compared to analyzing the covariance structure alone. A simulation study was conducted in which several constraints were defined for one-, two-, and three-factor models. Results show that adding the mean structure improves the recovery of weak factor loadings and reduces the asymptotic variances for the factor loadings, particularly for the models with a smaller number of factors and a small sample size. Therefore, under certain circumstances, modeling the means should be seriously considered for covariance models containing weak factor loadings.
Magnified Weak Lensing Cross Correlation Tomography
Energy Technology Data Exchange (ETDEWEB)
Ulmer, Melville P., Clowe, Douglas I.
2010-11-30
This project carried out a weak lensing tomography (WLT) measurement around rich clusters of galaxies. It used ground-based photometric redshift data combined with HST archived cluster images that provide the WLT and cluster mass modeling. The technique has already produced interesting results (Guennou et al. 2010, Astronomy & Astrophysics, Vol. 523, p. 21, and Clowe et al. 2011, to be submitted). Guennou et al. have validated that the necessary accuracy can be achieved with photometric redshifts for our purposes. Clowe et al., in "The DAFT/FADA survey. II. Tomographic weak lensing signal from 10 high redshift clusters," have shown for the first time, via this purely geometrical technique, which does not assume a standard rod or candle, that a cosmological constant is required for flat cosmologies. The intent of this project is not to produce the best constraint on the value of the dark energy equation of state, w. Rather, it is to carry out a sustained effort of weak lensing tomography that will naturally feed into the near-term Dark Energy Survey (DES) and to provide invaluable mass calibration for that project. These results will greatly advance a key cosmological method which will be applied to the top-rated ground-based project in the Astro2010 decadal survey, LSST. Weak lensing tomography is one of the key science drivers behind LSST. Co-I Clowe is on the LSST weak lensing committee, and the senior scientist on this project at FNAL, James Annis, plays a leading role in the DES. This project has built on successful proposals to obtain ground-based imaging for the cluster sample. By 1 Jan, it is anticipated the project will have accumulated complete 5-color photometry on 30 (or about 1/3) of the targeted cluster sample (a public webpage for the survey is available at http://cencos.oamp.fr/DAFT/ and has a current summary of the observational status of various clusters). In all, the project has now been awarded the equivalent of over 60
[Strengths and weaknesses of the German digital health economy].
Leppert, Florian; Gerlach, Jan; Ostwald, Dennis A; Greiner, Wolfgang
2017-07-26
There are high expectations for the digitalization of health care, e-health and telemedicine. Nevertheless, the diffusion of these services falls short of expectations. This study analyses the strengths and weaknesses of the German digital health economy, with a special focus on small and medium-sized enterprises (SME). The study is based on a literature review, interviews with experts, and a workshop. The digital health economy is influenced by a heterogeneous environment with both promotive and obstructive factors. One of the largest weaknesses results from a lack of business models: there is a lack of possibilities for reimbursement by the Statutory Health Insurance (SHI), and private users have only a small willingness to pay for digital services. The large number of regulations makes implementation even harder, especially for SMEs. Thus, the current environment hampers fast diffusion of digital services in the German health care market. © Georg Thieme Verlag KG Stuttgart · New York.
Base Flow Model Validation Project
National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...
From Suitable Weak Solutions to Entropy Viscosity
Guermond, Jean-Luc
2010-12-16
This paper focuses on the notion of suitable weak solutions for the three-dimensional incompressible Navier-Stokes equations and discusses the relevance of this notion to Computational Fluid Dynamics. The purpose of the paper is twofold (i) to recall basic mathematical properties of the three-dimensional incompressible Navier-Stokes equations and to show how they might relate to LES (ii) to introduce an entropy viscosity technique based on the notion of suitable weak solution and to illustrate numerically this concept. © 2010 Springer Science+Business Media, LLC.
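For orientation, the entropy-viscosity construction discussed above can be summarized as follows (the notation and constants follow the general entropy-viscosity literature and are an assumption here, not a transcription of this paper's exact definitions): an entropy residual is computed from the numerical solution and used to set a local artificial viscosity, capped by a first-order upwind value.

```latex
% Entropy residual of the current numerical solution u_h,
% for a convex "entropy" E (e.g. E(u) = |u|^2/2):
R_h(x,t) = \partial_t E(u_h) + \nabla\cdot F(u_h)

% Local viscosity: residual-based value, capped by a first-order
% upwind viscosity (h is the local mesh size; c_E, c_max are O(1)):
\nu_E(x,t) = \min\!\left( c_{\max}\, h\, \lVert u_h \rVert,\;
  c_E\, h^2\, \frac{\lvert R_h(x,t)\rvert}
  {\lVert E(u_h) - \bar{E} \rVert_{L^\infty}} \right)
```

Where the solution is smooth the residual is small and the scheme keeps high-order accuracy; near singular structures the residual, and hence the viscosity, grows, mimicking the dissipation built into suitable weak solutions.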
Model-Based Enterprise Summit Report
2014-02-01
Models become much more efficient and effective when coupled with knowledge: design advisors, CAD, fit, machine motion, KanBan trigger models, tolerance-based enterprise geometry, kinematics, control, physics, and planning-system models.
Hartman effect and weak measurements that are not really weak
International Nuclear Information System (INIS)
Sokolovski, D.; Akhmatskaya, E.
2011-01-01
We show that in wave packet tunneling, localization of the transmitted particle amounts to a quantum measurement of the delay it experiences in the barrier. With no external degree of freedom involved, the envelope of the wave packet plays the role of the initial pointer state. Under tunneling conditions such "self-measurement" is necessarily weak, and the Hartman effect just reflects the general tendency of weak values to diverge as postselection in the final state becomes improbable. We also demonstrate that it is a good-precision, or "not really weak", quantum measurement: no matter how wide the barrier d, it is possible to transmit a wave packet with a width σ small compared to the observed advancement. As is the case with all weak measurements, the probability of transmission rapidly decreases with the ratio σ/d.
Experimental investigations of weak definite and weak indefinite noun phrases.
Klein, Natalie M; Gegg-Harrison, Whitney M; Carlson, Greg N; Tanenhaus, Michael K
2013-08-01
Definite noun phrases typically refer to entities that are uniquely identifiable in the speaker and addressee's common ground. Some definite noun phrases (e.g., the hospital in Mary had to go to the hospital and John did too) seem to violate this uniqueness constraint. We report six experiments that were motivated by the hypothesis that these "weak definite" interpretations arise in "incorporated" constructions. Experiments 1-3 compared nouns that seem to allow for a weak definite interpretation (e.g., hospital, bank, bus, radio) with those that do not (e.g., farm, concert, car, book). Experiments 1 and 2 used an instruction-following task and picture-judgment task, respectively, to demonstrate that a weak definite need not uniquely refer. In Experiment 3 participants imagined scenarios described by sentences such as The Federal Express driver had to go to the hospital/farm. Scenarios following weak definite noun phrases were more likely to include conventional activities associated with the object, whereas following regular nouns, participants were more likely to imagine scenarios that included typical activities associated with the subject; similar effects were observed with weak indefinites. Experiment 4 found that object-related activities were reduced when the same subject and object were used with a verb that does not license weak definite interpretations. In Experiment 5, a science fiction story introduced an artificial lexicon for novel concepts. Novel nouns that shared conceptual properties with English weak definite nouns were more likely to allow weak reference in a judgment task. Experiment 6 demonstrated that familiarity for definite articles and anti-familiarity for indefinite articles applies to the activity associated with the noun, consistent with predictions made by the incorporation analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
Robust LHC Higgs Search in Weak Boson Fusion
Eboli, A A O; Rainwater, D L
2004-01-01
We demonstrate that an LHC Higgs search in weak boson fusion production with subsequent decay to weak boson pairs is robust against extensions of the Standard Model or MSSM involving a large number of Higgs doublets. We also show that the transverse mass distribution provides unambiguous discrimination of a continuum Higgs signal from the Standard Model.
Wang, Ting; Xu, Zhi-yong; Zhu, Yi-chen; Wu, Li-guang; Yuan, Hao-xuan; Li, Chang-chun; Liu, Ya-yu; Cai, Jing
2017-11-01
Graphene oxide (GO) was first employed as a support in preparing TiO2 nanoparticles by adsorbed-layer nanoreactor synthesis (ALNS). Both TiO2 crystallization and GO reduction occurred simultaneously during solvothermal treatment with alcohol as a solvent. Transmission electron microscopy, high resolution transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy, and photoluminescence spectroscopy showed that TiO2 nanoparticles less than 10 nm in size were distributed very homogeneously on the GO surface. Tight interaction between TiO2 particles and the GO surface could effectively inhibit the aggregation of TiO2 particles during solvothermal treatment for anatase TiO2 formation. Alcohol could also reduce oxygenated functional groups on the GO surface after solvothermal treatment. The small size of the TiO2 particles and the decrease in oxygenated functional groups on the GO surface both caused high separation efficiency of photo-generated charge carriers, thus resulting in high photo-degradation performance of the catalysts. Strong phenol adsorption on the photocatalyst was key to enhancing photo-degradation efficiency for phenol in seawater. Moreover, the change in catalyst structure was minimal at different temperatures of solvothermal treatment. However, the degradation rate and efficiency for phenol in seawater were obviously enhanced because of the sensitive structure-activity relationship of the catalysts under weak-light irradiation.
Energy Technology Data Exchange (ETDEWEB)
Pereira, Maria E.S. [Rio de Janeiro, CBPF; Soares-Santos, Marcelle [Fermilab; Makler, Martin [Rio de Janeiro, CBPF; Annis, James [Fermilab; Lin, Huan [Fermilab; Palmese, Antonella [Fermilab; Vitorelli, André Z. [Sao Paulo, Inst. Astron. Geofis.; Welch, Brian [Fermilab; Caminha, Gabriel B. [Bologna Observ.; Erben, Thomas [Argelander Inst. Astron.; Moraes, Bruno [University Coll. London; Shan, Huanyuan [Argelander Inst. Astron.
2017-08-10
We present the first weak lensing calibration of $\\mu_{\\star}$, a new galaxy cluster mass proxy corresponding to the total stellar mass of red and blue members, in two cluster samples selected from the SDSS Stripe 82 data: 230 redMaPPer clusters at redshift $0.1\\leq z<0.33$ and 136 Voronoi Tessellation (VT) clusters at $0.1 \\leq z < 0.6$. We use the CS82 shear catalog and stack the clusters in $\\mu_{\\star}$ bins to measure a mass-observable power law relation. For redMaPPer clusters we obtain $M_0 = (1.77 \\pm 0.36) \\times 10^{14}h^{-1} M_{\\odot}$, $\\alpha = 1.74 \\pm 0.62$. For VT clusters, we find $M_0 = (4.31 \\pm 0.89) \\times 10^{14}h^{-1} M_{\\odot}$, $\\alpha = 0.59 \\pm 0.54$ and $M_0 = (3.67 \\pm 0.56) \\times 10^{14}h^{-1} M_{\\odot}$, $\\alpha = 0.68 \\pm 0.49$ for a low and a high redshift bin, respectively. Our results are consistent, internally and with the literature, indicating that our method can be applied to any cluster finding algorithm. In particular, we recommend that $\\mu_{\\star}$ be used as the mass proxy for VT clusters. Catalogs including $\\mu_{\\star}$ measurements will enable its use in studies of galaxy evolution in clusters and cluster cosmology.
Traceability in Model-Based Testing
Directory of Open Access Journals (Sweden)
Mathew George
2012-11-01
Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.
Firm Based Trade Models and Turkish Economy
Directory of Open Access Journals (Sweden)
Nilüfer ARGIN
2015-12-01
Among all international trade models, only firm-based trade models explain firms' actions and behavior in world trade. Firm-based trade models focus on the trade behavior of the individual firms that actually carry out intra-industry trade, and they can properly explain the globalization process. These approaches also cover multinational corporations, supply chains and outsourcing. Our paper aims to explain and analyze Turkish exports in the context of firm-based trade models. We use UNCTAD data on exports by SITC Rev. 3 categorization to explain total exports and 255 products, and calculate the intensive and extensive margins of Turkish firms.
DEFF Research Database (Denmark)
Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel
2008-01-01
In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space-time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.
Resisting Weakness of the Will.
Levy, Neil
2011-01-01
I develop an account of weakness of the will that is driven by experimental evidence from cognitive and social psychology. I will argue that this account demonstrates that there is no such thing as weakness of the will: no psychological kind corresponds to it. Instead, weakness of the will ought to be understood as depletion of System II resources. Neither the explanatory purposes of psychology nor our practical purposes as agents are well-served by retaining the concept. I therefore suggest that we ought to jettison it, in favour of the vocabulary and concepts of cognitive psychology.
Distributed Prognostics Based on Structural Model Decomposition
National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...
A Universe without Weak Interactions
Energy Technology Data Exchange (ETDEWEB)
Harnik, Roni; Kribs, Graham D.; Perez, Gilad
2006-04-07
A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical "Weakless Universe" is matched to our Universe by simultaneously adjusting Standard Model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the Weakless Universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multi-parameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Modelling Deterministic Systems. N K Srinivasan graduated from the Indian Institute of Science and obtained his Doctorate from Columbia University, New York. He has taught in several universities, and later did system analysis, wargaming and simulation for defence. His other areas of interest are reliability engineering...
Weakly infinite-dimensional spaces
International Nuclear Information System (INIS)
Fedorchuk, Vitalii V
2007-01-01
In this survey article two new classes of spaces are considered: m-C-spaces and w-m-C-spaces, m=2,3,...,∞. They are intermediate between the class of weakly infinite-dimensional spaces in the Alexandroff sense and the class of C-spaces. The classes of 2-C-spaces and w-2-C-spaces coincide with the class of weakly infinite-dimensional spaces, while the compact ∞-C-spaces are exactly the C-compact spaces of Haver. The main results of the theory of weakly infinite-dimensional spaces, including classification via transfinite Lebesgue dimensions and Luzin-Sierpinsky indices, extend to these new classes of spaces. Weak m-C-spaces are characterised by means of essential maps to Henderson's m-compacta. The existence of hereditarily m-strongly infinite-dimensional spaces is proved.
A weakly-compressible Cartesian grid approach for hydrodynamic flows
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article aims at proposing an original strategy to solve hydrodynamic flows. In introduction, the motivations for this strategy are developed. It aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR compatible treatment. The method proposed uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
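As background to the weakly-compressible formulation, a common way to close such a system (an assumption here; the article's exact closure may differ) is to evolve the compressible Navier-Stokes equations with a stiff barotropic equation of state such as the Tait law, which ties pressure directly to density and permits a fully explicit time integration.

```latex
% Mass and momentum conservation (weakly-compressible form)
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0, \qquad
\frac{\partial (\rho\,\mathbf{u})}{\partial t}
  + \nabla\cdot(\rho\,\mathbf{u}\otimes\mathbf{u})
  = -\nabla p + \mu\,\Delta\mathbf{u} + \rho\,\mathbf{g}

% Tait equation of state: c_0 is an artificial sound speed chosen large
% enough (e.g. ten times the maximum flow speed) to keep density
% variations, and thus compressibility errors, at the percent level.
p = \frac{\rho_0\, c_0^2}{\gamma}
    \left[\left(\frac{\rho}{\rho_0}\right)^{\gamma} - 1\right],
\qquad \gamma \approx 7 \ \text{for water}
```

The explicit time step is then limited by the acoustic CFL condition on c_0 rather than by a global pressure solve, which is what makes the fully-explicit scheme mentioned above attractive on massively parallel Cartesian grids.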
Detecting Weak Spectral Lines in Interferometric Data through Matched Filtering
Loomis, Ryan A.; Öberg, Karin I.; Andrews, Sean M.; Walsh, Catherine; Czekala, Ian; Huang, Jane; Rosenfeld, Katherine A.
2018-04-01
Modern radio interferometers enable observations of spectral lines with unprecedented spatial resolution and sensitivity. In spite of these technical advances, many lines of interest are still at best weakly detected and therefore necessitate detection and analysis techniques specialized for the low signal-to-noise ratio (S/N) regime. Matched filters can leverage knowledge of the source structure and kinematics to increase sensitivity of spectral line observations. Application of the filter in the native Fourier domain improves S/N while simultaneously avoiding the computational cost and ambiguities associated with imaging, making matched filtering a fast and robust method for weak spectral line detection. We demonstrate how an approximate matched filter can be constructed from a previously observed line or from a model of the source, and we show how this filter can be used to robustly infer a detection significance for weak spectral lines. When applied to ALMA Cycle 2 observations of CH3OH in the protoplanetary disk around TW Hya, the technique yields a ≈53% S/N boost over aperture-based spectral extraction methods, and we show that an even higher boost will be achieved for observations at higher spatial resolution. A Python-based open-source implementation of this technique is available under the MIT license at http://github.com/AstroChem/VISIBLE.
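A stripped-down, purely illustrative version of the matched-filter idea (a synthetic 1-D spectrum, not the VISIBLE implementation linked above): cross-correlate the noisy data with a unit-norm template of the expected line profile, and read the detection significance off the peak of the filter response.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spectrum: a broad, weak Gaussian line (peak per-channel
# S/N of about 1.5) buried in unit-variance noise.
n = 512
x = np.arange(n)
line = 1.5 * np.exp(-0.5 * ((x - n // 2) / 20.0) ** 2)
data = line + rng.normal(0.0, 1.0, n)

# Unit-norm template of the expected profile (assumed known here; in
# practice it comes from a previously observed line or a source model).
template = np.exp(-0.5 * ((x - n // 2) / 20.0) ** 2)
template /= np.sqrt(np.sum(template ** 2))

# Matched filter applied in the Fourier domain: the response is the
# (circular) cross-correlation of data and template at every lag.
response = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template)), n)

# Detection significance: peak response (at zero lag for a centered
# template) over the noise level estimated far from the peak.
noise = np.std(response[100:412])
snr = response[0] / noise
```

The boost over per-channel significance grows roughly as the square root of the number of channels the line spans, which is why the technique pays off most for spectrally and spatially resolved sources.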
Weak lensing galaxy cluster field reconstruction
Jullo, E.; Pires, S.; Jauzac, M.; Kneib, J.-P.
2014-02-01
In this paper, we compare three methods to reconstruct galaxy cluster density fields with weak lensing data. The first method called FLens integrates an inpainting concept to invert the shear field with possible gaps, and a multi-scale entropy denoising procedure to remove the noise contained in the final reconstruction, that arises mostly from the random intrinsic shape of the galaxies. The second and third methods are based on a model of the density field made of a multi-scale grid of radial basis functions. In one case, the model parameters are computed with a linear inversion involving a singular value decomposition (SVD). In the other case, the model parameters are estimated using a Bayesian Monte Carlo Markov Chain optimization implemented in the lensing software LENSTOOL. Methods are compared on simulated data with varying galaxy density fields. We pay particular attention to the errors estimated with resampling. We find the multi-scale grid model optimized with Monte Carlo Markov Chain to provide the best results, but at high computational cost, especially when considering resampling. The SVD method is much faster but yields noisy maps, although this can be mitigated with resampling. The FLens method is a good compromise with fast computation, high signal-to-noise ratio reconstruction, but lower resolution maps. All three methods are applied to the MACS J0717+3745 galaxy cluster field, and reveal the filamentary structure discovered in Jauzac et al. We conclude that sensitive priors can help to get high signal-to-noise ratio, and unbiased reconstructions.
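The SVD-based variant of the inversion can be sketched generically (a toy linear system standing in for the radial-basis-function shear model; the matrix, noise level and truncation threshold below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear forward model: observed "shear" data d are a noisy linear
# transform of the model parameters m (RBF amplitudes in the paper).
n_data, n_par = 200, 40
A = rng.normal(size=(n_data, n_par)) / np.sqrt(n_data)
m_true = np.zeros(n_par)
m_true[[5, 20]] = [3.0, -2.0]              # a sparse "density" field
d = A @ m_true + 0.05 * rng.normal(size=n_data)

# Truncated-SVD inversion: discard singular modes below a threshold so
# that noise in poorly constrained directions is not amplified.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 0.1 * s[0]
m_est = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])
```

Raising the truncation threshold trades resolution for noise suppression, which is the "fast but noisy maps" behaviour reported above; the Bayesian MCMC alternative replaces this hard truncation with priors, at a higher computational cost.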
International Nuclear Information System (INIS)
Franklin, G.B.
1986-01-01
Hypernuclei whose ground states are stable against strong decay are used to study two-baryon weak interactions. A review of the existing experimental data, including recent results from the AGS on $^{12}_{\Lambda}$C and $^{11}_{\Lambda}$B, shows that the lifetimes and branching ratios can be used to test the effective weak Hamiltonians used in the rate calculations. 10 refs., 4 figs
Weakly Supervised Deep Detection Networks
Bilen, Hakan; Vedaldi, Andrea
2015-01-01
Weakly supervised learning of object detection is an important problem in image understanding that still does not have a satisfactory solution. In this paper, we address this problem by exploiting the power of deep convolutional neural networks pre-trained on large-scale image-level classification tasks. We propose a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification...
Weakly compact operators and interpolation
Maligranda, Lech
1992-01-01
The class of weakly compact operators is, as well as the class of compact operators, a fundamental operator ideal. They were investigated strongly in the last twenty years. In this survey, we have collected and ordered some of this (partly very new) knowledge. We have also included some comments, remarks and examples.
Residual-based model diagnosis methods for mixture cure models.
Peng, Yingwei; Taylor, Jeremy M G
2017-06-01
Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.
Acute muscular weakness in children
Directory of Open Access Journals (Sweden)
Ricardo Pablo Javier Erazo Torricelli
Acute muscle weakness in children is a pediatric emergency. During the diagnostic approach, it is crucial to obtain a detailed case history, including: onset of weakness, history of associated febrile states, ingestion of toxic substances/toxins, immunizations, and family history. The neurological examination must be meticulous as well. In this review, we describe the most common diseases related to acute muscle weakness, grouped by site of origin (from the upper motor neuron to the motor unit). Early detection of hyperCKemia may lead to a myositis diagnosis, and hypokalemia points to the diagnosis of periodic paralysis. Ophthalmoparesis, ptosis and bulbar signs are suggestive of myasthenia gravis or botulism. Distal weakness and hyporeflexia are clinical features of Guillain-Barré syndrome, the most frequent cause of acute muscle weakness. If all studies are normal, a psychogenic cause should be considered. Finding the etiology of acute muscle weakness is essential for executing treatment in a timely manner, improving the prognosis of affected children.
Model Validation in Ontology Based Transformations
Directory of Open Access Journals (Sweden)
Jesús M. Almendros-Jiménez
2012-10-01
Model Driven Engineering (MDE) is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits giving a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.
An acoustical model based monitoring network
Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der
2010-01-01
In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the
Transport equations in weak topologies of dual Banach spaces
International Nuclear Information System (INIS)
Greenberg, W.; Polewczak, J.
1989-01-01
Nonlinear transport equations are studied, in which the nonlinearity, arising from the collision operator, is well behaved in the weak topology of a weakly compactly generated Banach space. The Cauchy problem is posed for general semilinear evolution equations, which can model a variety of diffusion and kinetic equations. Local existence theorems are obtained for such spaces. In particular, the results are applicable to transport equations in L^∞ with appropriate weak (i.e., L^1) continuity properties.
Model-based version management system framework
International Nuclear Information System (INIS)
Mehmood, W.
2016-01-01
In this paper we present a model-based version management system. A Version Management System (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires performing many activities, such as construction and creation of versions, identification of differences between versions, conflict detection, and merging. Traditional VMSs are file-based and consider software systems as a set of text files. File-based VMSs are not adequate for performing software configuration management activities such as version control on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when using models as the central artifact. The goal of this work is to present a generic framework for a model-based VMS which can be used to overcome the problems of traditional file-based VMSs and provide model versioning services. (author)
Gradient-based model calibration with proxy-model assistance
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
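The division of labour described above (the proxy fills the Jacobian, the original model tests the upgrades) can be sketched on a toy two-parameter problem; the "complex model" and its deliberately inexact analytic proxy below are invented for illustration:

```python
import numpy as np

def complex_model(p):
    # Stand-in for the expensive simulator (one call = one full run).
    return np.array([np.exp(0.5 * p[0]) + p[1], p[0] * p[1]])

def proxy_jacobian(p):
    # Cheap analytic surrogate of the derivatives; deliberately inexact
    # (the 1.2x and +0.1 perturbations mimic proxy-model error).
    return np.array([[0.6 * np.exp(0.5 * p[0]), 1.0],
                     [p[1], p[0] + 0.1]])

obs = complex_model(np.array([1.0, 2.0]))   # synthetic "observations"
p = np.array([0.2, 0.5])                    # initial parameter guess
for _ in range(60):
    r = obs - complex_model(p)              # one expensive run
    J = proxy_jacobian(p)                   # no finite-difference runs
    step = np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ r)
    # Parameter upgrades are tested against the *original* model,
    # halving the step while it fails to reduce the misfit.
    while (np.sum((obs - complex_model(p + step)) ** 2) > np.sum(r ** 2)
           and np.linalg.norm(step) > 1e-12):
        step *= 0.5
    p = p + step
misfit = np.sum((obs - complex_model(p)) ** 2)
```

Only one expensive run per accepted iteration is needed (plus any step-halving tests), versus one run per parameter for a finite-difference Jacobian; this is where the computational saving comes from, and the inexact proxy derivatives merely slow convergence rather than break it.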
PV panel model based on datasheet values
DEFF Research Database (Denmark)
Sera, Dezso; Teodorescu, Remus; Rodriguez, Pedro
2007-01-01
This work presents the construction of a model for a PV panel using the single-diode five-parameter model, based exclusively on datasheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell...
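The single-diode five-parameter model is an implicit equation in the panel current, I = I_ph − I_0·(exp((V + I·R_s)/(n·V_t)) − 1) − (V + I·R_s)/R_sh, so evaluating an I-V point requires an iterative solve. A minimal sketch follows; the five parameter values are illustrative placeholders, not taken from any real datasheet:

```python
import math

# Illustrative five parameters (not from a real datasheet):
I_ph = 5.0                 # photo-generated current [A]
I_0 = 1e-9                 # diode saturation current [A]
n_Vt = 1.3 * 0.0257 * 36   # ideality factor * thermal voltage * series cells
R_s = 0.2                  # series resistance [ohm]
R_sh = 300.0               # shunt (parallel) resistance [ohm]

def panel_current(V, tol=1e-10):
    """Solve the implicit single-diode equation for I by Newton iteration."""
    I = I_ph  # the photo current is a good starting guess
    for _ in range(100):
        x = (V + I * R_s) / n_Vt
        f = I_ph - I_0 * (math.exp(x) - 1.0) - (V + I * R_s) / R_sh - I
        df = -I_0 * math.exp(x) * R_s / n_Vt - R_s / R_sh - 1.0
        I_new = I - f / df
        if abs(I_new - I) < tol:
            return I_new
        I = I_new
    return I

I_sc = panel_current(0.0)  # short-circuit current, close to I_ph
```

At V = 0 the solution sits just below I_ph because of the small shunt leakage through R_sh, and the current falls off as V approaches the open-circuit voltage.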
Model-Based Design for Embedded Systems
Nicolescu, Gabriela
2009-01-01
Model-based design allows teams to start the design process from a high-level model that is gradually refined through abstraction levels to ultimately yield a prototype. This book describes the main facets of heterogeneous system design. It focuses on multi-core methodological issues, real-time analysis, and modeling and validation
Agent-based modeling of sustainable behaviors
Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan
2017-01-01
Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.
Diagnosis by integrating model-based reasoning with knowledge-based reasoning
Bylander, Tom
1988-01-01
Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.
Professional development model for science teachers based on scientific literacy
Rubini, B.; Ardianto, D.; Pursitasari, I. D.; Permana, I.
2017-01-01
Scientific literacy is considered a benchmark of the quality of science education in a country, and teachers, as a major component of learning, are at the forefront of building the scientific literacy of students in the classroom. The primary purpose of this study is to develop a science teacher coaching model based on scientific literacy. In this article we describe teachers' scientific literacy and profile a coaching model for science teachers based on scientific literacy, as part of a study conducted in the first year. The instruments used in this study consisted of tests, observation sheets, and interview guides. The findings showed that the problem of low scientific literacy affects not only the students: the science teachers, a major component of the learning process, were also unsatisfactory. A science teacher's understanding is strongly associated with his or her disciplinary background. The science teachers were weak when explaining scientific phenomena, mainly for material related to environmental concepts. The coaching model generated by this study consists of 8 stages and assumes the teacher is an independent learner, so coaching is done with on- and off-site methods, with more time allocated to off-site activities.
Don't Plan for the Unexpected: Planning Based on Plausibility Models
DEFF Research Database (Denmark)
Andersen, Mikkel Birkegaard; Bolander, Thomas; Jensen, Martin Holm
2015-01-01
We present a framework for automated planning based on plausibility models, as well as algorithms for computing plans in this framework. Our plausibility models include postconditions, as ontic effects are essential for most planning purposes. The framework presented extends a previously developed framework based on dynamic epistemic logic (DEL), without plausibilities/beliefs. In the pure epistemic framework, one can distinguish between strong and weak epistemic plans for achieving some, possibly epistemic, goal. By taking all possible outcomes of actions into account, a strong plan guarantees that the agent achieves this goal. Conversely, a weak plan promises only the possibility of leading to the goal. In real-life planning scenarios where the planning agent is faced with a high degree of uncertainty and an almost endless number of possible exogenous events, strong epistemic planning...
Model-based Abstraction of Data Provenance
DEFF Research Database (Denmark)
Probst, Christian W.; Hansen, René Rydhof
2014-01-01
Identifying provenance of data provides insights to the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This extension is based on provenance analysis performed on system models. System models have been introduced to model and analyse spatial and organisational aspects of organisations, to identify, e.g., potential insider threats. Both the models and analyses are naturally modular; models can be combined to bigger models, and the analyses adapt accordingly. Our approach extends provenance both with the origin of data, the actors and processes involved in the handling of data, and policies applied while doing so. The model and corresponding analyses are based on a formal model of spatial and organisational...
Weak interlayers in flexible and semi-flexible road pavements: Part 1
CSIR Research Space (South Africa)
Netterberg, F
2012-04-01
Full Text Available flexible or semi-flexible pavement is far more deleterious than is commonly appreciated. In Part 2 the effects of these weak layers are further modelled and discussed using various examples based on HVS testing and mechanistic pavement analyses...
Joint queue-perturbed and weakly-coupled power control for wireless backbone networks
CSIR Research Space (South Africa)
Olwal, TO
2012-09-01
Full Text Available perturbation and weakly-coupled power control approach for the WBNs. The ultimate objectives are to increase energy efficiency and the overall network capacity. In order to achieve these objectives, a Markov chain model is first presented to describe...
Cosmology with weak lensing surveys
International Nuclear Information System (INIS)
Munshi, Dipak; Valageas, Patrick; Waerbeke, Ludovic van; Heavens, Alan
2008-01-01
Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening matter. The distortions are due to fluctuations in the gravitational potential, and are directly related to the distribution of matter and to the geometry and dynamics of the Universe. As a consequence, weak gravitational lensing offers unique possibilities for probing the Dark Matter and Dark Energy in the Universe. In this review, we summarise the theoretical and observational state of the subject, focussing on the statistical aspects of weak lensing, and consider the prospects for weak lensing surveys in the future. Weak gravitational lensing surveys are complementary to both galaxy surveys and cosmic microwave background (CMB) observations as they probe the unbiased non-linear matter power spectrum at modest redshifts. Most of the cosmological parameters are accurately estimated from CMB and large-scale galaxy surveys, so the focus of attention is shifting to understanding the nature of Dark Matter and Dark Energy. On the theoretical side, recent advances in the use of 3D information of the sources from photometric redshifts promise greater statistical power, and these are further enhanced by the use of statistics beyond two-point quantities such as the power spectrum. The use of 3D information also alleviates difficulties arising from physical effects such as the intrinsic alignment of galaxies, which can mimic weak lensing to some extent. On the observational side, in the next few years weak lensing surveys such as CFHTLS, VST-KIDS and Pan-STARRS, and the planned Dark Energy Survey, will provide the first weak lensing surveys covering very large sky areas and depth. In the long run even more ambitious programmes such as DUNE, the Supernova Anisotropy Probe (SNAP) and Large-aperture Synoptic Survey Telescope (LSST) are planned. Weak lensing of diffuse components such as the CMB and 21 cm emission can also
Hay, E.; Thomas, E; Pal, B; Hajeer, A.; Chambers, H; Silman, A
1998-01-01
OBJECTIVES—To determine associations between symptoms of dry eyes and dry mouth and objective evidence of lacrimal and salivary gland dysfunction in a population based sample. To determine associations between these elements and the presence of autoantibodies. METHODS—A cross sectional population based survey. Subjects were interviewed and examined (Schirmer-1 test and unstimulated salivary flow) for the presence of dry eyes and mouth. Antibodies (anti-Ro [SS-A], anti-La [SS-B], rheumatoid fa...
Reducing Weak to Strong Bisimilarity in CCP
Directory of Open Access Journals (Sweden)
Andrés Aristizábal
2012-12-01
Full Text Available Concurrent constraint programming (ccp) is a well-established model for concurrency that singles out the fundamental aspects of asynchronous systems whose agents (or processes) evolve by posting and querying (partial) information in a global medium. Bisimilarity is a standard behavioural equivalence in concurrency theory. However, only recently have a well-behaved notion of bisimilarity for ccp, and a ccp partition refinement algorithm for deciding the strong version of this equivalence, been proposed. Weak bisimilarity is a central behavioural equivalence in process calculi and is obtained from the strong case by taking into account only the actions that are observable in the system. Typically, the standard partition refinement can also be used for deciding weak bisimilarity simply by using Milner's reduction from weak to strong bisimilarity, a technique referred to as saturation. In this paper we demonstrate that, because of its involved labeled transitions, the above-mentioned saturation technique does not work for ccp. We give an alternative reduction from weak ccp bisimilarity to the strong one that allows us to use the ccp partition refinement algorithm for deciding this equivalence.
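The saturation technique the abstract refers to can be illustrated on an ordinary labeled transition system, where (unlike in ccp) it does work: a weak transition s ⇒a⇒ t is a tau* a tau* path, and weak bisimilarity on the original LTS coincides with strong bisimilarity, computed by partition refinement, on the saturated LTS. The tiny example LTS below is invented for illustration:

```python
TAU = "tau"

def tau_closure(states, trans):
    """For each state, the set of states reachable by zero or more tau steps."""
    closure = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for (s, a, t) in trans:
            if a == TAU:
                for u in states:
                    if s in closure[u] and t not in closure[u]:
                        closure[u].add(t)
                        changed = True
    return closure

def saturate(states, trans):
    """Milner's saturation: every weak transition becomes a strong one."""
    clo = tau_closure(states, trans)
    sat = set()
    for s in states:
        for t in clo[s]:                  # s ==tau==> every tau-reachable t
            sat.add((s, TAU, t))
        for s1 in clo[s]:                 # tau* a tau* for visible labels a
            for (x, a, t) in trans:
                if x == s1 and a != TAU:
                    for t1 in clo[t]:
                        sat.add((s, a, t1))
    return sat

def bisim_classes(states, trans):
    """Strong-bisimilarity partition refinement (naive fixed point)."""
    part = [set(states)]
    while True:
        block_of = {s: i for i, blk in enumerate(part) for s in blk}
        def sig(s):
            return frozenset((a, block_of[t]) for (x, a, t) in trans if x == s)
        refined = {}
        for s in states:
            refined.setdefault((block_of[s], sig(s)), set()).add(s)
        new_part = list(refined.values())
        if len(new_part) == len(part):
            return new_part
        part = new_part

# p needs a silent step before its visible 'a'; q does 'a' directly.
states = ["p", "p1", "p2", "q", "q2"]
trans = {("p", TAU, "p1"), ("p1", "a", "p2"), ("q", "a", "q2")}
classes = bisim_classes(states, saturate(states, trans))

def same_class(x, y):
    return any(x in c and y in c for c in classes)
```

Here p and q end up strongly bisimilar on the saturated system, i.e. weakly bisimilar on the original one; the paper's point is that ccp's labeled transitions break exactly this reduction.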
Axion monodromy and the weak gravity conjecture
International Nuclear Information System (INIS)
Hebecker, Arthur; Rompineve, Fabrizio; Westphal, Alexander
2015-12-01
Axions with broken discrete shift symmetry (axion monodromy) have recently played a central role both in the discussion of inflation and the 'relaxion' approach to the hierarchy problem. We suggest a very minimalist way to constrain such models by the weak gravity conjecture for domain walls: While the electric side of the conjecture is always satisfied if the cosine-oscillations of the axion potential are sufficiently small, the magnetic side imposes a cutoff, Λ³ ∝ m f M_Pl, independent of the height of these 'wiggles'. We compare our approach with the recent related proposal by Ibanez, Montero, Uranga and Valenzuela. We also discuss the non-trivial question which version, if any, of the weak gravity conjecture for domain walls should hold. In particular, we show that string compactifications with branes of different dimensions wrapped on different cycles lead to a 'geometric weak gravity conjecture' relating volumes of cycles, norms of corresponding forms and the volume of the compact space. Imposing this 'geometric conjecture', e.g. on the basis of the more widely accepted weak gravity conjecture for particles, provides at least some support for the (electric and magnetic) conjecture for domain walls.
Bimetallic Schiff base complexes: models for conjugated shape-persistent metallopolymers.
Leung, Alfred C W; Hui, Joseph K-H; Chong, Jonathan H; MacLachlan, Mark J
2009-07-14
New Schiff base ligands with two metal binding sites have been prepared. Copper and zinc complexes of the ligands, which serve as models for rigid, conjugated metallopolymers, were synthesized and characterized. The copper complexes display only weak intramolecular antiferromagnetic interactions, suggesting that the polymer structure is not useful for developing magnetic materials. Preliminary investigations of the novel polymers, including the preparation of a conjugated zinc-containing polymer, are reported.
Culturicon model: A new model for cultural-based emoticon
Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham
2017-10-01
Emoticons are popular among distributed collective interaction user in expressing their emotion, gestures and actions. Emoticons have been proved to be able to avoid misunderstanding of the message, attention saving and improved the communications among different native speakers. However, beside the benefits that emoticons can provide, the study regarding emoticons in cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively research aspect in distributed collective interaction. Therefore, this study attempt to explore and develop model for cultural-based emoticon. Three cultural models that have been used in Human-Computer Interaction were studied which are the Hall Culture Model, Trompenaars and Hampden Culture Model and Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.
Agent-based modeling and network dynamics
Namatame, Akira
2016-01-01
The book integrates agent-based modeling and network science. It is divided into three parts: foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundations part is the network-based agent-based models, in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...
Bounds on the Capacity of Weakly constrained two-dimensional Codes
DEFF Research Database (Denmark)
Forchhammer, Søren
2002-01-01
Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds on the capacity for 2-D channel models based on occurrences of neighboring 1s are considered.
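For intuition, the hard-constraint one-dimensional analogue of such capacity calculations is the classic transfer-matrix computation: the capacity of the "no two adjacent 1s" constraint is the base-2 log of the largest eigenvalue of the transfer matrix. This is a textbook warm-up, not the paper's weakly constrained 2-D bound:

```python
import numpy as np

# Transfer matrix for binary sequences with no two adjacent 1s.
# Rows/columns index the previous symbol (0 or 1); an entry of 1 marks an
# allowed continuation. Capacity = log2 of the Perron (largest) eigenvalue.
A = np.array([[1.0, 1.0],   # after a 0: both 0 and 1 are allowed next
              [1.0, 0.0]])  # after a 1: only 0 is allowed next
capacity = float(np.log2(np.max(np.linalg.eigvals(A).real)))
# equals log2 of the golden ratio, about 0.694 bits/symbol
```

The 2-D version of this problem has no closed form, which is why the paper resorts to upper and lower bounds.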
CEAI: CCM based Email Authorship Identification Model
DEFF Research Database (Denmark)
Nizamani, Sarwat; Memon, Nasrullah
2013-01-01
In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature-set to include some... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25 authors, and 81% for 50 authors, respectively, on the Enron data set, while 89.5% accuracy has been achieved on the authors' constructed real email data set. The results on the Enron data set have been achieved on quite a large number of authors as compared to the models proposed by Iqbal et al. [1...
Integration of Simulink Models with Component-based Software Models
Directory of Open Access Journals (Sweden)
MARIAN, N.
2008-06-01
Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands to more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI, University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S
SANS observations on weakly flocculated dispersions
DEFF Research Database (Denmark)
Mischenko, N.; Ourieva, G.; Mortensen, K.
1997-01-01
Structural changes occurring in colloidal dispersions of poly-(methyl metacrylate) (PMMA) particles, sterically stabilized with poly-(12-hydroxystearic acid) (PHSA), while varying the solvent quality, temperature and shear rate, are investigated by small-angle neutron scattering (SANS). For a moderately concentrated dispersion in a marginal solvent, the transition on cooling from effective stability to weak attraction is monitored. The degree of attraction is determined in the framework of the sticky spheres model (SSM), and SANS and rheological results are correlated.
Summary of the hadronic weak interaction session
International Nuclear Information System (INIS)
Bock, G.; Bryman, D.A.; Numao, T.
1993-01-01
The authors summarize and discuss present and future experiments on decays of light mesons and muons that were presented in the Hadronic Weak Interaction working group session of the "Workshop on Future Directions in Particle and Nuclear Physics at Multi-GeV Hadron Facilities." Precise measurements and rare-decay searches, which sense mass scales in the 1-1000 TeV region, are discussed in the context of the standard model and beyond.
Weak values in a classical theory with an epistemic restriction
International Nuclear Information System (INIS)
Karanjai, Angela; Cavalcanti, Eric G; Bartlett, Stephen D; Rudolph, Terry
2015-01-01
Weak measurement of a quantum system followed by postselection based on a subsequent strong measurement gives rise to a quantity called the weak value: a complex number for which the interpretation has long been debated. We analyse the procedure of weak measurement and postselection, and the interpretation of the associated weak value, using a theory of classical mechanics supplemented by an epistemic restriction that is known to be operationally equivalent to a subtheory of quantum mechanics. Both the real and imaginary components of the weak value appear as phase space displacements in the postselected expectation values of the measurement device's position and momentum distributions, and we recover the same displacements as in the quantum case by studying the corresponding evolution in our theory of classical mechanics with an epistemic restriction. By using this epistemically restricted theory, we gain insight into the appearance of the weak value as a result of the statistical effects of post selection, and this provides us with an operational interpretation of the weak value, both its real and imaginary parts. We find that the imaginary part of the weak value is a measure of how much postselection biases the mean phase space distribution for a given amount of measurement disturbance. All such biases proportional to the imaginary part of the weak value vanish in the limit where disturbance due to measurement goes to zero. Our analysis also offers intuitive insight into how measurement disturbance can be minimized and the limits of weak measurement. (paper)
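The weak value discussed above is simply the postselected matrix element A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩, which is complex in general. A minimal numerical example for a qubit (the states and observable here are chosen purely for illustration) shows how a purely imaginary weak value, the part the abstract interprets as a postselection bias, can arise:

```python
import numpy as np

# Weak value A_w = <phi|A|psi> / <phi|psi> for a qubit observable.
psi = np.array([1.0, 0.0], dtype=complex)        # preselected state |0>
phi = np.array([1.0, 1.0j]) / np.sqrt(2)         # postselected (|0> + i|1>)/sqrt(2)
A = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X observable

A_w = (phi.conj() @ A @ psi) / (phi.conj() @ psi)
# here A_w = -1j: zero real part, nonzero imaginary part
```

Note that A_w lies outside the eigenvalue range [-1, 1] of the observable in neither real nor imaginary "direction" here, but with other pre/postselections it can; that anomaly is what drives the interpretational debate.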
Functional limb weakness and paralysis.
Stone, J; Aybek, S
2016-01-01
Functional (psychogenic) limb weakness describes genuinely experienced loss of limb power or paralysis in the absence of neurologic disease. The hallmark of functional limb weakness is the presence of internal inconsistency revealing a pattern of symptoms governed by abnormally focused attention. In this chapter we review the history and epidemiology of this clinical presentation as well as its subjective experience, highlighting the detailed descriptions of authors at the end of the 19th and in the early 20th century. We discuss the relevance that physiological triggers such as injury and migraine and psychophysiological events such as panic and dissociation have for the understanding of mechanism and treatment. We review many different positive diagnostic features, their basis in neurophysiological testing, and present data on sensitivity and specificity. The diagnostic bedside tests with the most evidence are Hoover's sign, the hip abductor sign, drift without pronation, dragging gait, give-way weakness and co-contraction. © 2016 Elsevier B.V. All rights reserved.
Business Process Modelling based on Petri nets
Directory of Open Access Journals (Sweden)
Qin Jianglong
2017-01-01
Full Text Available Business process modelling is the way business processes are expressed. Business process modelling is the foundation of business process analysis, reengineering, reorganization and optimization. It can not only help enterprises achieve internal information system integration and reuse, but also help them achieve external collaboration. Based on the prototype Petri net, this paper adds time and cost factors to form an extended generalized stochastic Petri net, which is a formal description of the business process. A semi-formalized business process modelling algorithm based on Petri nets is proposed. Finally, a case from a logistics company proves that the modelling algorithm is correct and effective.
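The token-game semantics underlying such Petri-net process models can be sketched in a few lines. The two-step order process below is an invented example, and the time and cost annotations of the extended generalized stochastic net are omitted:

```python
# Minimal Petri-net marking simulation: places hold token counts, and a
# transition fires by consuming tokens from its input places and producing
# tokens in its output places.
places = {"received": 1, "approved": 0, "shipped": 0}

# transition name -> (tokens consumed per place, tokens produced per place)
transitions = {
    "approve": ({"received": 1}, {"approved": 1}),
    "ship":    ({"approved": 1}, {"shipped": 1}),
}

def enabled(name, marking):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name, marking):
    assert enabled(name, marking), f"{name} is not enabled"
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

fire("approve", places)
fire("ship", places)
```

The enabling rule is what gives Petri nets their analytic power: reachable markings can be enumerated to check the process for deadlocks before reengineering it.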
Instance-Based Generative Biological Shape Modeling.
Peng, Tao; Wang, Wei; Rohde, Gustavo K; Murphy, Robert F
2009-01-01
Biological shape modeling is an essential task that is required for systems biology efforts to simulate complex cell behaviors. Statistical learning methods have been used to build generative shape models based on reconstructive shape parameters extracted from microscope image collections. However, such parametric modeling approaches are usually limited to simple shapes and easily-modeled parameter distributions. Moreover, to maximize the reconstruction accuracy, significant effort is required to design models for specific datasets or patterns. We have therefore developed an instance-based approach to model biological shapes within a shape space built upon diffeomorphic measurement. We also designed a recursive interpolation algorithm to probabilistically synthesize new shape instances using the shape space model and the original instances. The method is quite generalizable and therefore can be applied to most nuclear, cell and protein object shapes, in both 2D and 3D.
Introduction to unification of electromagnetic and weak interactions
International Nuclear Information System (INIS)
Martin, F.
1980-01-01
After reviewing the present status of weak interaction phenomenology we discuss the basic principles of gauge theories. Then we show how the Higgs mechanism can give massive quanta of interaction. The so-called 'Weinberg-Salam' model, which unifies electromagnetic and weak interactions, is described. We conclude with a few words on unification with strong interactions and gravity [fr]
Integration of Simulink Models with Component-based Software Models
DEFF Research Database (Denmark)
Marian, Nicolae; Top, Søren
2008-01-01
Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics... constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation... to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set...
Cosmology and the weak interaction
International Nuclear Information System (INIS)
Schramm, D.N.
1989-12-01
The weak interaction plays a critical role in modern Big Bang cosmology. This review will emphasize two of its most publicized cosmological connections: Big Bang nucleosynthesis and Dark Matter. The first of these is connected to the cosmological prediction of the number of neutrino flavours, N_ν ∼ 3, which is now being confirmed at SLC and LEP. The second is interrelated to the whole problem of galaxy and structure formation in the universe. This review will demonstrate the role of the weak interaction both for dark matter candidates and for the problem of generating seeds to form structure. 87 refs., 3 figs., 5 tabs
Nonlinear waves and weak turbulence
Zakharov, V E
1997-01-01
This book is a collection of papers on dynamical and statistical theory of nonlinear wave propagation in dispersive conservative media. Emphasis is on waves on the surface of an ideal fluid and on Rossby waves in the atmosphere. Although the book deals mainly with weakly nonlinear waves, it is more than simply a description of standard perturbation techniques. The goal is to show that the theory of weakly interacting waves is naturally related to such areas of mathematics as Diophantine equations, differential geometry of waves, Poincaré normal forms, and the inverse scattering method.
Weak interactions at high energies
International Nuclear Information System (INIS)
Ellis, J.
1978-08-01
Review lectures are presented on the phenomenological implications of the modern spontaneously broken gauge theories of the weak and electromagnetic interactions, and some observations are made about which high energy experiments probe what aspects of gauge theories. Basic quantum chromodynamics phenomenology is covered including momentum dependent effective quark distributions, the transverse momentum cutoff, search for gluons as sources of hadron jets, the status and prospects for the spectroscopy of fundamental fermions and how fermions may be used to probe aspects of the weak and electromagnetic gauge theory, studies of intermediate vector bosons, and miscellaneous possibilities suggested by gauge theories from the Higgs bosons to speculations about proton decay. 187 references
Singularity analysis of potential fields to enhance weak anomalies
Chen, G.; Cheng, Q.; Liu, T.
2013-12-01
Geoanomalies are generally nonlinear, non-stationary and weak, especially in covered areas; however, traditional methods of geoanomaly identification are usually based on linear theory. In the past two decades, many power-law function models have been developed based on fractal concepts in mineral exploration and mineral resource assessment, such as the concentration-area (C-A) model and the spectrum-area (S-A) model suggested by Qiuming Cheng, which have played important roles in extracting geophysical and geochemical anomalies. Several power-law relationships are evident in geophysical potential fields, such as field value-distance, power spectrum-wave number, and density-area models. The singularity index based on the density-area model involves the first derivative transformation of the measure. Hence, we introduce singularity analysis to develop a novel high-pass filter for extracting gravity and magnetic anomalies, with the advantage of scale invariance. Furthermore, we suggest that the statistics of singularity indices can provide a new edge-detection scheme for gravity or magnetic source bodies. Meanwhile, theoretical magnetic anomalies are established to verify these assertions. In case studies from the Nanling mineral district in south China and the Qikou Depression in east China, compared with traditional geophysical filtering methods including multiscale wavelet analysis and total horizontal gradient methods, the singularity method enhances and extracts the weak anomalies caused by buried magmatic rocks more effectively, and provides more distinct boundary information for the rocks. Moreover, the singularity mapping results correspond well with both the outcropping rocks and known mineral deposits, supporting future mineral resource exploration. The singularity method based on fractal analysis has the potential to become a useful new theory and technique for processing gravity and magnetic anomaly data.
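A window-based estimator of the local singularity index, the quantity that drives this kind of high-pass enhancement, can be sketched as follows. The relation μ(ε) ∝ ε^(α−2) for the mean field value over windows of half-width ε gives α as a log-log slope plus 2; the grid data and window sizes below are illustrative stand-ins for real gridded gravity or magnetic anomalies:

```python
import numpy as np

def singularity_index(field, sizes=(1, 2, 3, 4)):
    """Estimate the local singularity index alpha on a 2-D grid.
    For each cell, the mean field value mu(eps) over square windows of
    half-width eps follows mu(eps) ~ eps**(alpha - 2); alpha is the
    log-log slope of mu vs eps, plus 2."""
    n, m = field.shape
    alpha = np.zeros((n, m))
    log_eps = np.log(np.array(sizes, dtype=float))
    for i in range(n):
        for j in range(m):
            mus = []
            for e in sizes:
                window = field[max(0, i - e):i + e + 1, max(0, j - e):j + e + 1]
                mus.append(window.mean())
            log_mu = np.log(np.maximum(mus, 1e-12))  # guard against zeros
            slope = np.polyfit(log_eps, log_mu, 1)[0]
            alpha[i, j] = slope + 2.0
    return alpha

# A smooth background gives alpha ~ 2; an isolated strong source pulls
# alpha below 2, which is exactly the "weak anomaly" enhancement effect.
flat = np.ones((9, 9))
spike = np.full((9, 9), 0.01)
spike[4, 4] = 1.0
alpha_flat = singularity_index(flat)
alpha_spike = singularity_index(spike)
```

Cells with α noticeably below 2 mark concentration (e.g. buried magmatic rocks), while α ≈ 2 marks the non-singular background, so mapping α acts as the scale-invariant high-pass filter described above.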
Base Flow Model Validation, Phase II
National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...
2014-06-01
tracks. Several Slocum and Spray gliders sampled the Año Nuevo upwelling center during the transitions from upwelling to relaxation and back to...assimilated (b) Slocum and (c) Spray glider tracks during August 2003. The red dots represent the locations of the moored buoys M1 (inside the bay) and...altimetry (due to the limited area of the model domain), vertical profiles of temperature and salinity from Slocum and Spray gliders and two moorings
New global ICT-based business models
DEFF Research Database (Denmark)
Universities. The book continues by describing, analyzing and showing how NEWGIBM was implemented in SMEs in different industrial companies/networks. Based on this effort, the researchers try to describe and analyze the current context, experience of NEWGIBM and finally the emerging scenarios of NEWGIBM...... The NEWGIBM Cases Show? The Strategy Concept in Light of the Increased Importance of Innovative Business Models Successful Implementation of Global BM Innovation Globalisation Of ICT Based Business Models: Today And In 2020...
A probabilistic graphical model based stochastic input model construction
International Nuclear Information System (INIS)
Wan, Jiang; Zabaras, Nicholas
2014-01-01
Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
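The dependence-learning step described above can be sketched with pairwise independence tests; a minimal Python illustration (the Pearson test, the significance level, and the synthetic data are assumptions for this sketch, and the paper's full method also runs conditional tests, not only marginal ones):

```python
import numpy as np
from scipy import stats

def dependence_graph(samples, alpha=0.05):
    """Build an undirected dependence graph over reduced-order variables.

    samples: (n_obs, n_vars) array of random variables obtained after
    model reduction. An edge (i, j) is kept whenever a pairwise Pearson
    correlation test rejects independence at level alpha.
    """
    n_vars = samples.shape[1]
    edges = set()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            r, p = stats.pearsonr(samples[:, i], samples[:, j])
            if p < alpha:
                edges.add((i, j))
    return edges

rng = np.random.default_rng(0)
z = rng.standard_normal((2000, 3))
z[:, 1] = 0.9 * z[:, 0] + 0.1 * z[:, 1]   # make variables 0 and 1 dependent
graph = dependence_graph(z)
```

The resulting edge set is what factorizes the joint PDF into lower-dimensional conditional distributions.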
Model-based auralizations of violin sound trends accompanying plate-bridge tuning or holding.
Bissinger, George; Mores, Robert
2015-04-01
To expose systematic trends in violin sound accompanying "tuning" only the plates or only the bridge, the first structural acoustics-based model auralizations of violin sound were created by passing a bowed-string driving force measured at the bridge of a solid body violin through the dynamic filter (DF) model radiativity profile "filter" RDF(f) (frequency-dependent pressure per unit driving force, free-free suspension, anechoic chamber). DF model auralizations for the more realistic case of a violin held/played in a reverberant auditorium reveal that holding the violin greatly diminishes its low frequency response, an effect only weakly compensated for by auditorium reverberation.
Secure Authentication from a Weak Key, Without Leaking Information
N.J. Bouman (Niek); S. Fehr (Serge); K.G. Paterson (Kerry)
2011-01-01
We study the problem of authentication based on a weak key in the information-theoretic setting. A key is weak if its min-entropy is an arbitrarily small fraction of its bit length. This problem has recently received considerable attention, with different solutions optimizing different
Econophysics of agent-based models
Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim
2014-01-01
The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...
Springer handbook of model-based science
Bertolotti, Tommaso
2017-01-01
The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...
Exploring model-based target discrimination metrics
Witus, Gary; Weathersby, Marshall
2004-08-01
Visual target discrimination has occurred when the observer can say "I see a target THERE!" and can designate the target location. Target discrimination occurs when a perceived shape is sufficiently similar to one or more of the instances the observer has been trained on. Marr defined vision as "knowing what is where by seeing." Knowing "what" requires prior knowledge. Target discrimination requires model-based visual processing. Model-based signature metrics attempt to answer the question "to what extent does the target in the image resemble a training image?" They attempt to represent the effects of high-level top-down visual cognition, in addition to low-level bottom-up effects. Recent advances in realistic 3D target rendering and computer-vision object recognition have made model-based signature metrics more practical. The human visual system almost certainly does NOT use the same processing algorithms as computer-vision object recognition, but some processing elements and the overall effects are similar. It remains to be determined whether model-based metrics explain the variance in human performance. The purpose of this paper is to explain and illustrate the model-based approach to signature metrics.
Voltage Weak DC Distribution Grids
Hailu, T.G.; Mackay, L.J.; Ramirez Elizondo, L.M.; Ferreira, J.A.
2017-01-01
This paper describes the behavior of voltage weak DC distribution systems. These systems have relatively small system capacitance. The size of system capacitance, which stores energy, has a considerable effect on the value of fault currents, control complexity, and system reliability. A number of
Second threshold in weak interactions
Veltman, M.J.G.
1977-01-01
The point of view that weak interactions must have a second threshold below 300 – 600 GeV is developed. Above this threshold new physics must come in. This new physics may be the Higgs system, or some other nonperturbative system possibly having some similarities to the Higgs system. The limit of
Coverings, Networks and Weak Topologies
Czech Academy of Sciences Publication Activity Database
Dow, A.; Junnila, H.; Pelant, Jan
2006-01-01
Roč. 53, č. 2 (2006), s. 287-320 ISSN 0025-5793 R&D Projects: GA ČR GA201/97/0216 Institutional research plan: CEZ:AV0Z10190503 Keywords : Banach spaces * weak topologies * networks topologies Subject RIV: BA - General Mathematics
Submanifolds weakly associated with graphs
Indian Academy of Sciences (India)
theory by defining submanifolds weakly associated with graphs. We prove that, in a local sense, every submanifold satisfies such an association, and other general results. Finally, we study submanifolds associated with graphs either in low dimensions or belonging to some special families. Keywords. Almost Hermitian ...
Submanifolds weakly associated with graphs
Indian Academy of Sciences (India)
We establish an interesting link between differential geometry and graph theory by defining submanifolds weakly associated with graphs. We prove that, in a local sense, every submanifold satisfies such an association, and other general results. Finally, we study submanifolds associated with graphs either in low ...
Rule-based Modelling and Tunable Resolution
Directory of Open Access Journals (Sweden)
Russ Harmer
2009-11-01
Full Text Available We investigate the use of an extension of rule-based modelling for cellular signalling to create a structured space of model variants. This enables the incremental development of rule sets that start from simple mechanisms and which, by a gradual increase in agent and rule resolution, evolve into more detailed descriptions.
Model-based testing for software safety
Gurbuz, Havva Gulay; Tekinerdogan, Bedir
2017-01-01
Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a
Approximation Algorithms for Model-Based Diagnosis
Feldman, A.B.
2010-01-01
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation
Agent-based modelling of cholera diffusion
Augustijn-Beckers, Petronella; Doldersum, Tom; Useya, Juliana; Augustijn, Dionysius C.M.
2016-01-01
This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V.cholerae bacteria and hyperinfectious V. cholerae. Objective of the research is to test if runoff from open refuse
Pereira, Jorge F B; Barber, Patrick S; Kelley, Steven P; Berton, Paula; Rogers, Robin D
2017-10-11
The properties of double salt ionic liquids based on solutions of cholinium acetate ([Ch][OAc]), ethanolammonium acetate ([NH3(CH2)2OH][OAc]), hydroxylammonium acetate ([NH3OH][OAc]), ethylammonium acetate ([NH3CH2CH3][OAc]), and tetramethylammonium acetate ([N(CH3)4][OAc]) in 1-ethyl-3-methylimidazolium acetate ([C2mim][OAc]) were investigated by NMR spectroscopy and X-ray crystallography. Through mixture preparation, the solubility of [N(CH3)4][OAc] is the lowest, and [Ch][OAc] shows a 3-fold lower solubility than the other hydroxylated ammonium acetate-based salts in [C2mim][OAc] at room temperature. NMR and X-ray crystallographic studies of the pure salts suggest that the molecular-level mechanisms governing such miscibility differences are related to the weaker interactions between the -NH3 groups and [OAc]-, even though three of these salts possess the same strong 1:1 hydrogen bonds between the cation -OH group and the [OAc]- ion. The formation of polyionic clusters between the anion and those cations with unsatisfied hydrogen bond donors seems to be a new tool by which the solubility of these salts in [C2mim][OAc] can be controlled.
Gradient based filtering of digital elevation models
DEFF Research Database (Denmark)
Knudsen, Thomas; Andersen, Rune Carbuhn
We present a filtering method for digital terrain models (DTMs). The method is based on mathematical morphological filtering within gradient (slope) defined domains. The intention with the filtering procedure is to improve the cartographic quality of height contours generated from a DTM based on ...
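A rough illustration of morphological filtering restricted to gradient-defined domains, assuming a grey-scale opening as the morphological operator and an arbitrary slope threshold (both are illustrative choices, not the authors' exact procedure):

```python
import numpy as np
from scipy import ndimage

def gradient_filtered_dtm(dtm, slope_threshold, size=5):
    """Apply a grey-scale morphological opening only inside the low-slope
    domain of a DTM; steep cells, where real terrain detail lives, keep
    their original heights. Threshold and window size are illustrative."""
    gy, gx = np.gradient(dtm.astype(float))
    slope = np.hypot(gx, gy)
    opened = ndimage.grey_opening(dtm, size=(size, size))
    return np.where(slope < slope_threshold, opened, dtm)

# A flat plane with a single one-cell spike: the opening removes the
# spike (its centre cell has zero central-difference gradient, so it
# falls inside the low-slope domain and gets filtered).
dtm = np.full((21, 21), 10.0)
dtm[10, 10] = 50.0
filtered = gradient_filtered_dtm(dtm, slope_threshold=1.0)
```

Contours generated from the filtered surface no longer wiggle around such single-cell artifacts.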
Accept & Reject Statement-Based Uncertainty Models
E. Quaeghebeur (Erik); G. de Cooman; F. Hermans (Felienne)
2015-01-01
We develop a framework for modelling and reasoning with uncertainty based on accept and reject statements about gambles. It generalises the frameworks found in the literature based on statements of acceptability, desirability, or favourability and clarifies their relative position. Next
Orbits in weak and strong bars
Contopoulos, George
1980-01-01
The authors study the plane orbits in simple bar models embedded in an axisymmetric background when the bar density is about 1% (weak), 10% (intermediate) or 100% (strong bar) of the axisymmetric density. Most orbits follow the stable periodic orbits. The basic families of periodic orbits are described. In weak bars with two Inner Lindblad Resonances there is a family of stable orbits extending from the center up to the Outer Lindblad Resonance. This family contains the long period orbits near corotation. Other stable families appear between the Inner Lindblad Resonances, outside the Outer Lindblad Resonance, around corotation (short period orbits) and around the center (retrograde). Some families become unstable or disappear in strong bars. A comparison is made with cases having one or no Inner Lindblad Resonance. (12 refs).
Information modelling and knowledge bases XXV
Tokuda, T; Jaakkola, H; Yoshida, N
2014-01-01
Because of our ever increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, multimedia information modelin
DEFF Research Database (Denmark)
Halle, Lars Halvard; Nicaise, Johannes
Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented...... on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...
Theory of weakly nonlinear self-sustained detonations
Faria, Luiz
2015-11-03
We propose a theory of weakly nonlinear multidimensional self-sustained detonations based on asymptotic analysis of the reactive compressible Navier-Stokes equations. We show that these equations can be reduced to a model consisting of a forced unsteady small-disturbance transonic equation and a rate equation for the heat release. In one spatial dimension, the model simplifies to a forced Burgers equation. Through analysis, numerical calculations and comparison with the reactive Euler equations, the model is demonstrated to capture such essential dynamical characteristics of detonations as the steady-state structure, the linear stability spectrum, the period-doubling sequence of bifurcations and chaos in one-dimensional detonations and cellular structures in multidimensional detonations.
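In one spatial dimension the model reduces to a forced Burgers equation; a minimal explicit finite-difference sketch of such an equation (the scheme, grid, and parameter values are illustrative assumptions, not the authors' numerics):

```python
import numpy as np

def burgers_step(u, dx, dt, nu, force):
    """One explicit step of a forced viscous Burgers equation,
    u_t + u u_x = nu * u_xx + force, on a periodic grid
    (central differences; a sketch, not the paper's scheme)."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    return u + dt * (-u * ux + nu * uxx + force)

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x)
dx = x[1] - x[0]
for _ in range(200):
    u = burgers_step(u, dx, dt=1e-3, nu=0.1, force=0.0)
# With no forcing, viscosity damps the initial sine wave as it steepens.
```

With a heat-release forcing term this kind of model reproduces the pulsating-detonation dynamics described in the abstract.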
Reducing uncertainty based on model fitness: Application to a ...
African Journals Online (AJOL)
A weakness of global sensitivity and uncertainty analysis methodologies is the often subjective definition of prior parameter probability distributions, especially ... The reservoir representing the central part of the wetland, where flood waters separate into several independent distributaries, is a keystone area within the model.
Energy based prediction models for building acoustics
DEFF Research Database (Denmark)
Brunskog, Jonas
2012-01-01
In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....
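The simple energy flow models mentioned can be illustrated by the classic two-subsystem SEA power balance, solved here as a linear system (the loss-factor and power values are arbitrary illustrative inputs, not numbers from the paper or from EN 12354):

```python
import numpy as np

def sea_energies(omega, eta, eta_c, power_in):
    """Solve the two-subsystem statistical energy analysis (SEA) balance
    omega * [(eta_i + eta_ij) E_i - eta_ji E_j] = P_i.

    eta      : (eta1, eta2) internal damping loss factors
    eta_c    : (eta12, eta21) coupling loss factors
    power_in : (P1, P2) injected powers [W]
    Returns the subsystem energies (E1, E2)."""
    eta1, eta2 = eta
    eta12, eta21 = eta_c
    A = omega * np.array([[eta1 + eta12, -eta21],
                          [-eta12, eta2 + eta21]])
    return np.linalg.solve(A, np.array(power_in, dtype=float))

# Excite only subsystem 1 at 1 kHz: energy flows to, and is dissipated
# in, subsystem 2, so E1 > E2 > 0.
E1, E2 = sea_energies(omega=2 * np.pi * 1000,
                      eta=(0.01, 0.01), eta_c=(0.001, 0.001),
                      power_in=(1.0, 0.0))
```

The diffuse-field and high-modal-overlap assumptions criticized in the paper are exactly what justify writing the balance in this simple algebraic form.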
Multi-Domain Modeling Based on Modelica
Directory of Open Access Journals (Sweden)
Liu Jun
2016-01-01
Full Text Available With the application of simulation technology to large-scale, multi-field problems, multi-domain unified modeling has become an effective way to solve them. This paper introduces several basic methods and advantages of multidisciplinary modeling, and focuses on simulation based on the Modelica language. Modelica/MWorks is a newly developed simulation platform featuring an object-oriented, non-causal language for modeling large, multi-domain systems, which makes models easier to grasp, develop and maintain. The article demonstrates a single-degree-of-freedom mechanical vibration system built in MWorks using the special connection mechanism of the Modelica language. This multi-domain modeling method is simple and feasible, offers high reusability, stays closer to the physical system, and has many other advantages.
Directory of Open Access Journals (Sweden)
Dariush Akbarian
2017-09-01
Full Text Available The Production Possibility Set (PPS) is defined as the set of inputs and outputs of a system in which the inputs can produce the outputs. The PPS of the Data Envelopment Analysis (DEA) model contains two types of defining hyperplanes (facets): strong and weak efficient facets. In this paper, the problem of finding the weak defining hyperplanes of the PPS of the CCR model is dealt with; the equations of the strong defining hyperplanes of the PPS of the CCR model can also be found with this approach. We state and prove some properties related to our method. To illustrate the applicability of the proposed model, some numerical examples are finally provided. Our algorithm can easily be implemented using existing packages for operations research, such as GAMS.
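The CCR efficiency scores from which such defining hyperplanes are built come from standard linear programs; a sketch of the input-oriented envelopment form with scipy (this is the textbook CCR LP, not the paper's hyperplane-finding algorithm itself):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k (envelopment form):
        min theta  s.t.  X @ lam <= theta * X[:, k],
                         Y @ lam >= Y[:, k],  lam >= 0.
    X: (m, n) inputs, Y: (s, n) outputs for n DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                           # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[:, k]               # X lam - theta x_k <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                    # -Y lam <= -y_k
    b_ub = np.concatenate([np.zeros(m), -Y[:, k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Two DMUs with equal output; DMU 1 uses twice the input of DMU 0,
# so DMU 0 is efficient (theta = 1) and DMU 1 scores theta = 0.5.
X = np.array([[1.0, 2.0]])
Y = np.array([[1.0, 1.0]])
```

Binding constraints at optimality identify which facet of the PPS a DMU is projected onto.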
Towards weakly constrained double field theory
Directory of Open Access Journals (Sweden)
Kanghoon Lee
2016-08-01
Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.
The Weak Gravity Conjecture in three dimensions
Energy Technology Data Exchange (ETDEWEB)
Montero, Miguel [Departamento de Física Teórica, Facultad de Ciencias,Universidad Autónoma de Madrid,Calle Francisco Tomás y Valiente 7, 28049 Madrid (Spain); Instituto de Física Teórica IFT-UAM/CSIC, Campus de Cantoblanco,C/ Nicolás Cabrera 13-15, 28049 Madrid (Spain); Shiu, Gary; Soler, Pablo [Department of Physics, University of Wisconsin-Madison,1150 University Ave, Madison, WI 53706 (United States); Department of Physics & Institute for Advanced Study,Hong Kong University of Science and Technology,Lo Ka Chung Building, Lee Shau Kee Campus, Clear Water Bay (Hong Kong)
2016-10-28
We study weakly coupled U(1) theories in AdS_3, their associated charged BTZ solutions, and their charged spectra. We find that modular invariance of the holographic dual two-dimensional CFT and compactness of the gauge group together imply the existence of charged operators with conformal dimension significantly below the black hole threshold. We regard this as a form of the Weak Gravity Conjecture (WGC) in three dimensions. We also explore the constraints posed by modular invariance on a particular discrete ℤ_N symmetry which arises in our discussion. In this case, modular invariance does not guarantee the existence of light ℤ_N-charged states. We also highlight the differences between our discussion and the usual heuristic arguments for the WGC based on black hole remnants.
Identification of walking human model using agent-based modelling
Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir
2018-03-01
The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate interaction of stationary people with vibrating structures. However, the research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which, over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using 'reverse engineering' methodology. The analysis of the results suggested that the normal distribution with the average of μ = 2.85 Hz and standard deviation of σ = 0.34 Hz can describe human SDOF model natural frequency. Similarly, the normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to the previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces and different mechanisms of human-structure and human-environment interaction at the same time.
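The identified distributions can be used directly to sample SDOF human models for agent-based simulation; a small sketch (the 75 kg body mass is an assumed illustrative value, not one reported in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_walking_human(n, mass=75.0):
    """Draw SDOF mass-spring-damper parameters for n walking humans,
    using the identified distributions: natural frequency ~ N(2.85, 0.34) Hz
    and damping ratio ~ N(0.295, 0.047). The body mass is an assumption."""
    fn = rng.normal(2.85, 0.34, n)       # natural frequency [Hz]
    zeta = rng.normal(0.295, 0.047, n)   # damping ratio [-]
    wn = 2 * np.pi * fn                  # angular frequency [rad/s]
    k = mass * wn ** 2                   # stiffness [N/m]
    c = 2 * zeta * mass * wn             # damping coefficient [N s/m]
    return mass, k, c, fn, zeta

m, k, c, fn, zeta = sample_walking_human(5000)
```

Each sampled (m, k, c) triple becomes one pedestrian agent coupled to the structure model.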
A model evaluation checklist for process-based environmental models
Jackson-Blake, Leah
2015-04-01
the conceptual model on which it is based. In this study, a number of model structural shortcomings were identified, such as a lack of dissolved phosphorus transport via infiltration excess overland flow, potential discrepancies in the particulate phosphorus simulation and a lack of spatial granularity. (4) Conceptual challenges, as conceptual models on which predictive models are built are often outdated, having not kept up with new insights from monitoring and experiments. For example, soil solution dissolved phosphorus concentration in INCA-P is determined by the Freundlich adsorption isotherm, which could potentially be replaced using more recently-developed adsorption models that take additional soil properties into account. This checklist could be used to assist in identifying why model performance may be poor or unreliable. By providing a model evaluation framework, it could help prioritise which areas should be targeted to improve model performance or model credibility, whether that be through using alternative calibration techniques and statistics, improved data collection, improving or simplifying the model structure or updating the model to better represent current understanding of catchment processes.
Informal Institutions and the "Weaknesses" of Human Behavior
National Research Council Canada - National Science Library
Goebel, Markus; Thomas, Tobias
2005-01-01
... to interpersonal consistency and interpersonal conformity here. These sources of a systematic deviation from the standard model of the homo oeconomicus result in systematic weaknesses of perception and deviations of behavior...
Existence of Weak Solutions for a Nonlinear Elliptic System
Directory of Open Access Journals (Sweden)
Gilbert RobertP
2009-01-01
Full Text Available We investigate the existence of weak solutions to a Dirichlet boundary value problem which occurs when modeling an injection molding process with a partial slip condition on the boundary.
2013-01-01
Background The interRAI Acute Care instrument is a multidimensional geriatric assessment system intended to determine a hospitalized older persons’ medical, psychosocial and functional capacity and needs. Its objective is to develop an overall plan for treatment and long-term follow-up based on a common set of standardized items that can be used in various care settings. A Belgian web-based software system (BelRAI-software) was developed to enable clinicians to interpret the output and to communicate the patients’ data across wards and care organizations. The purpose of the study is to evaluate the (dis)advantages of the implementation of the interRAI Acute Care instrument as a comprehensive geriatric assessment instrument in an acute hospital context. Methods In a cross-sectional multicenter study on four geriatric wards in three acute hospitals, trained clinical staff (nurses, occupational therapists, social workers, and geriatricians) assessed 410 inpatients in routine clinical practice. The BelRAI-system was evaluated by focus groups, observations, and questionnaires. The Strengths, Weaknesses, Opportunities and Threats were mapped (SWOT-analysis) and validated by the participants. Results The primary strengths of the BelRAI-system were a structured overview of the patients’ condition early after admission and the promotion of multidisciplinary assessment. Our study was a first attempt to transfer standardized data between home care organizations, nursing homes and hospitals and a way to centralize medical, allied health professionals and nursing data. With the BelRAI-software, privacy of data is guaranteed. Weaknesses are the time-consuming character of the process and the overlap with other assessment instruments or (electronic) registration forms. There is room for improving the user-friendliness and the efficiency of the software, which needs hospital-specific adaptations. Opportunities are a timely and systematic problem detection and continuity of
Devriendt, Els; Wellens, Nathalie I H; Flamaing, Johan; Declercq, Anja; Moons, Philip; Boonen, Steven; Milisen, Koen
2013-09-05
The interRAI Acute Care instrument is a multidimensional geriatric assessment system intended to determine a hospitalized older persons' medical, psychosocial and functional capacity and needs. Its objective is to develop an overall plan for treatment and long-term follow-up based on a common set of standardized items that can be used in various care settings. A Belgian web-based software system (BelRAI-software) was developed to enable clinicians to interpret the output and to communicate the patients' data across wards and care organizations. The purpose of the study is to evaluate the (dis)advantages of the implementation of the interRAI Acute Care instrument as a comprehensive geriatric assessment instrument in an acute hospital context. In a cross-sectional multicenter study on four geriatric wards in three acute hospitals, trained clinical staff (nurses, occupational therapists, social workers, and geriatricians) assessed 410 inpatients in routine clinical practice. The BelRAI-system was evaluated by focus groups, observations, and questionnaires. The Strengths, Weaknesses, Opportunities and Threats were mapped (SWOT-analysis) and validated by the participants. The primary strengths of the BelRAI-system were a structured overview of the patients' condition early after admission and the promotion of multidisciplinary assessment. Our study was a first attempt to transfer standardized data between home care organizations, nursing homes and hospitals and a way to centralize medical, allied health professionals and nursing data. With the BelRAI-software, privacy of data is guaranteed. Weaknesses are the time-consuming character of the process and the overlap with other assessment instruments or (electronic) registration forms. There is room for improving the user-friendliness and the efficiency of the software, which needs hospital-specific adaptations. Opportunities are a timely and systematic problem detection and continuity of care. An actual shortage of
Cox, Nicholas; Ames, William; Epel, Boris; Kulik, Leonid V; Rapatskiy, Leonid; Neese, Frank; Messinger, Johannes; Wieghardt, Karl; Lubitz, Wolfgang
2011-09-05
An analysis of the electronic structure of the [Mn(II)Mn(III)(μ-OH)(μ-piv)2(Me3tacn)2](ClO4)2 (PivOH) complex is reported. It displays features that include: (i) a ground 1/2 spin state; (ii) a small exchange (J) coupling between the two Mn ions; (iii) a mono-μ-hydroxo bridge, bis-μ-carboxylato motif; and (iv) a strongly coupled, terminally bound N ligand to the Mn(III). All of these features are observed in structural models of the oxygen evolving complex (OEC). Multifrequency electron paramagnetic resonance (EPR) and electron nuclear double resonance (ENDOR) measurements were performed on this complex, and the resultant spectra simulated using the Spin Hamiltonian formalism. The strong field dependence of the 55Mn-ENDOR constrains the 55Mn hyperfine tensors such that a unique solution for the electronic structure can be deduced. Large hyperfine anisotropy is required to reproduce the EPR/ENDOR spectra for both the Mn(II) and Mn(III) ions. The large effective hyperfine tensor anisotropy of the Mn(II), a d5 ion which usually exhibits small anisotropy, is interpreted within a formalism in which the fine structure tensor of the Mn(III) ion strongly perturbs the zero-field energy levels of the Mn(II)Mn(III) complex. An estimate of the fine structure parameter (d) for the Mn(III) of -4 cm-1 was made, by assuming the intrinsic anisotropy of the Mn(II) ion is small. The magnitude of the fine structure and intrinsic (onsite) hyperfine tensor of the Mn(III) is consistent with the known coordination environment of the Mn(III) ion as seen from its crystal structure. Broken symmetry density functional theory (DFT) calculations were performed on the crystal structure geometry. DFT values for both the isotropic and the anisotropic components of the onsite (intrinsic) hyperfine tensors match those inferred from the EPR/ENDOR simulations described above, to within 5%. This study demonstrates that DFT calculations provide reliable estimates for spectroscopic
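In the strong-exchange limit, the effective hyperfine tensors of an exchange-coupled pair follow from standard spin-projection coefficients; a short sketch of that textbook formula applied to the S = 1/2 Mn(II)Mn(III) ground state (the paper's full treatment goes further, including the zero-field mixing discussed above):

```python
def spin_projection(s1, s2, st):
    """Spin projection coefficients mapping intrinsic (onsite) hyperfine
    tensors a_i onto coupled-state effective tensors, A_eff = c_i * a_i,
    in total-spin state St (standard strong-exchange formalism):

        c_i = [St(St+1) + Si(Si+1) - Sj(Sj+1)] / [2 St(St+1)]
    """
    def cc(si, sj):
        return (st * (st + 1) + si * (si + 1) - sj * (sj + 1)) / (2 * st * (st + 1))
    return cc(s1, s2), cc(s2, s1)

# Mn(II) (S = 5/2) coupled to Mn(III) (S = 2) in the St = 1/2 ground
# state gives the familiar +7/3 and -4/3 projections.
c_mn2, c_mn3 = spin_projection(2.5, 2.0, 0.5)
```

Deviations of measured effective tensors from these ideal projections are what signal the zero-field-splitting perturbation analyzed in the paper.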
History of the weak interactions
International Nuclear Information System (INIS)
Lee, T.D.
1987-01-01
At the 'Jackfest' marking the 65th birthday of Jack Steinberger (see July/August 1986 issue, page 29), T.D. Lee gave an account of the history of the weak interactions. This edited version omits some of Lee's tributes to Steinberger, but retains the impressive insight into the subtleties of a key area of modern physics by one who played a vital role in its development. (orig./HSI).
Development of Ensemble Model Based Water Demand Forecasting Model
Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop
2014-05-01
In recent years, the Smart Water Grid (SWG) concept has emerged globally and gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, and this has led to various studies regarding energy saving and improvement of water supply reliability. Existing water demand forecasting models are categorized into two groups in view of modeling and predicting their behavior in time series. One considers embedded patterns such as seasonality, periodicity and trends, and the other is an autoregressive model that uses short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of these models is that their predictability of water demand is limited at the sub-daily scale because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting which allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the bagging approach introduced by Breiman (1996), used to derive weighting factors corresponding to individual models. The individual forecasting models used in this study are linear regression, polynomial regression, multivariate adaptive regression splines (MARS), and support vector machines (SVM). The concepts are demonstrated through application to data observed at water plants at several locations in South Korea. Keywords: water demand, non-linear model, ensemble forecasting model, uncertainty. Acknowledgements: This subject is supported by Korea Ministry of Environment as "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)
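The weighting idea in the abstract above — combining independent forecasting models with weights derived from each model's validation error — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data are synthetic, only two toy member models are used, and the weights are taken as inverse training MSE rather than a full bagging cross-validation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly water demand with a daily cycle (illustrative data only)
t = np.arange(200)
demand = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

# Two toy "individual models": a linear trend fit and a cubic polynomial fit
pred_lin = np.polyval(np.polyfit(t[:150], demand[:150], 1), t)
pred_poly = np.polyval(np.polyfit(t[:150], demand[:150], 3), t)

# Weights proportional to inverse mean squared error on the fitting window
mse = np.array([np.mean((p[:150] - demand[:150]) ** 2)
                for p in (pred_lin, pred_poly)])
weights = (1 / mse) / np.sum(1 / mse)   # normalized so they sum to 1

# Ensemble forecast: a convex combination of the member forecasts
ensemble = weights[0] * pred_lin + weights[1] * pred_poly
```

Because the weights are nonnegative and sum to one, the ensemble forecast always lies between the member forecasts at every hour, which is one reason such combinations tend to be more stable than any single member.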
Macromolecular X-ray structure determination using weak, single-wavelength anomalous data
Energy Technology Data Exchange (ETDEWEB)
Bunkóczi, Gábor; McCoy, Airlie J.; Echols, Nathaniel; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Holton, James M.; Read, Randy J.; Terwilliger, Thomas C.
2014-12-22
We describe a likelihood-based method for determining the substructure of anomalously scattering atoms in macromolecular crystals that allows successful structure determination by single-wavelength anomalous diffraction (SAD) X-ray analysis with weak anomalous signal. With the use of partial models and electron density maps in searches for anomalously scattering atoms, testing of alternative values of parameters and parallelized automated model-building, this method has the potential to extend the applicability of the SAD method in challenging cases.
A Nursing Practice Model Based on Christ: The Agape Model.
Eckerd, Nancy
2017-06-07
Nine out of 10 American adults believe Jesus was a real person, and almost two-thirds have made a commitment to Jesus Christ. Research further supports that spiritual beliefs and religious practices influence overall health and well-being. Christian nurses need a practice model that helps them serve as kingdom nurses. This article introduces the Agape Model, based on the agape love and characteristics of Christ, upon which Christian nurses may align their practice to provide Christ-centered care.
SLS Navigation Model-Based Design Approach
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and
DEFF Research Database (Denmark)
Halle, Lars Halvard; Nicaise, Johannes
Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven's filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...
Ravi, O.
2015-01-01
A class of sets called weakly Iπg-open sets in ideal topological spaces is introduced, and the notion of weakly Iπg-closed sets in ideal topological spaces is studied. The relationships of weakly Iπg-closed sets and various properties of weakly Iπg-closed sets are investigated.
A Multiagent Based Model for Tactical Planning
2002-10-01
[10] Castillo, J.M. Aproximación mediante procedimientos de Inteligencia Artificial al planeamiento táctico [An Artificial Intelligence approach to tactical planning]. Doctoral Thesis. ... The agents have been developed under the same conceptual model and using similar Artificial Intelligence tools. We use four different stimulus/response agents. The conceptual model is built on the basis of agent theory. To implement the different agents we have used Artificial Intelligence techniques such as ...
Model-Based Motion Tracking of Infants
DEFF Research Database (Denmark)
Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo
2014-01-01
Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants, using computer vision. Most of these studies ... that resembles the body surface of an infant, where the model is based on simple geometric shapes and a hierarchical skeleton model.
Quality Model Based on Cots Quality Attributes
Jawad Alkhateeb; Khaled Musa
2013-01-01
The quality of software is essential to corporations in making their commercial software. Good or poor quality of software plays an important role in some systems, such as embedded systems, real-time systems, and control systems, that play an important part in human life. Software products or commercial off-the-shelf software are usually programmed based on a software quality model. In the software engineering field, each quality model contains a set of attributes or characteristics that drives i...
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach is proposed in the present paper to estimate model errors based on EM. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of statistics and dynamics to a certain extent.
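The setup described above — a Lorenz-63 prediction model whose parameter error is estimated by evolutionary search against "observational data" generated from a perturbed Lorenz system — can be sketched minimally as below. This is not the paper's method: the integrator, the simple elitist (1+1) evolutionary strategy, and the single scalar error term are all illustrative assumptions.

```python
import numpy as np

def lorenz_step(state, dt, rho):
    """One explicit-Euler step of the Lorenz-63 system (sigma=10, beta=8/3)."""
    x, y, z = state
    dx = 10.0 * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - (8.0 / 3.0) * z
    return state + dt * np.array([dx, dy, dz])

def trajectory(rho, n=200, dt=0.005, x0=(1.0, 1.0, 1.0)):
    s = np.array(x0)
    out = [s]
    for _ in range(n):
        s = lorenz_step(s, dt, rho)
        out.append(s)
    return np.array(out)

# "Reality" uses rho = 30; the imperfect prediction model assumes rho = 28.
obs = trajectory(rho=30.0)

def fitness(delta):
    """Forecast error of the model corrected by the candidate error term delta."""
    return np.mean((trajectory(rho=28.0 + delta) - obs) ** 2)

# Elitist (1+1) evolutionary search for the model-error term
rng = np.random.default_rng(42)
best_delta, best_fit = 0.0, fitness(0.0)
init_fit = best_fit
for _ in range(60):
    cand = best_delta + rng.normal(0.0, 0.5)   # Gaussian mutation
    f = fitness(cand)
    if f < best_fit:                           # keep only improvements
        best_delta, best_fit = cand, f
```

Because the search is elitist, the forecast error never increases; the accepted corrections drift toward the true parameter error (here +2), illustrating how dynamic information in historical data can be extracted automatically.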
Image-Based Multiresolution Implicit Object Modeling
Directory of Open Access Journals (Sweden)
Sarti Augusto
2002-01-01
Full Text Available We discuss two image-based 3D modeling methods based on a multiresolution evolution of a volumetric function's level set. In the former method, the role of the level set implosion is to fuse ("sew" and "stitch") together several partial reconstructions (depth maps) into a closed model. In the latter, the level set's implosion is steered directly by the texture mismatch between views. Both solutions share the characteristic of operating in an adaptive multiresolution fashion, in order to boost computational efficiency and robustness.
Model-based testing for embedded systems
Zander, Justyna; Mosterman, Pieter J
2011-01-01
What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used
Model Based Control of Reefer Container Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær
This thesis is concerned with the development of model based control for the Star Cool refrigerated container (reefer) with the objective of reducing energy consumption. This project has been carried out under the Danish Industrial PhD programme and has been financed by Lodam together with the Da...
Synchronization of weakly coupled canard oscillators
Köksal Ersöz, Elif; Desroches, Mathieu; Krupa, Martin
2017-06-01
Synchronization has been studied extensively in the context of weakly coupled oscillators using the so-called phase response curve (PRC) which measures how a change of the phase of an oscillator is affected by a small perturbation. This approach was based upon the work of Malkin, and it has been extended to relaxation oscillators. Namely, synchronization conditions were established under the weak coupling assumption, leading to a criterion for the existence of synchronous solutions of weakly coupled relaxation oscillators. Previous analysis relies on the fact that the slow nullcline does not intersect the fast nullcline near one of its fold points, where canard solutions can arise. In the present study we use numerical continuation techniques to solve the adjoint equations and we show that synchronization properties of canard cycles are different than those of classical relaxation cycles. In particular, we highlight a new special role of the maximal canard in separating two distinct synchronization regimes: the Hopf regime and the relaxation regime. Phase plane analysis of slow-fast oscillators undergoing a canard explosion provides an explanation for this change of synchronization properties across the maximal canard.
Nonstationary weak signal detection based on normalization ...
Indian Academy of Sciences (India)
... than the traditional stochastic resonance. The method develops the area of time-varying signal detection with stochastic resonance and presents a new strategy for detection and denoising of a time-varying signal. It can be expected to be widely used in the areas of aperiodic signal processing, radar communication, etc.
Modeling of photoluminescence in laser-based lighting systems
Chatzizyrli, Elisavet; Tinne, Nadine; Lachmayer, Roland; Neumann, Jörg; Kracht, Dietmar
2017-12-01
The development of laser-based lighting systems has been the latest step towards a revolution in illumination technology brought about by solid-state lighting. Laser-activated remote phosphor systems produce white light sources with significantly higher luminance than LEDs. The weak point of such systems is often considered to be the conversion element. The high-intensity exciting laser beam in combination with the limited thermal conductivity of ceramic phosphor materials leads to thermal quenching, the phenomenon in which the emission efficiency decreases as temperature rises. For this reason, the aim of the presented study is the modeling of remote phosphor systems in order to investigate their thermal limitations and to calculate the parameters for optimizing the efficiency of such systems. The common approach to simulate remote phosphor systems utilizes a combination of different tools such as ray tracing algorithms and wave optics tools for describing the incident and converted light, whereas the modeling of the conversion process itself, i.e. photoluminescence, in most cases is circumvented by using the absorption and emission spectra of the phosphor material. In this study, we describe the processes involved in luminescence quantum-mechanically using the single-configurational-coordinate diagram as well as the Franck-Condon principle and propose a simulation model that incorporates the temperature dependence of these processes. Following an increasing awareness of climate change and environmental issues, the development of ecologically friendly lighting systems featuring low power consumption and high luminous efficiency is imperative more than ever. The better understanding of laser-based lighting systems is an important step towards that aim as they may improve on LEDs in the near future.
Acute neuromuscular weakness associated with dengue infection
Directory of Open Access Journals (Sweden)
Harmanjit Singh Hira
2012-01-01
Full Text Available Background: Dengue infections may present with neurological complications. Whether these are due to neuromuscular disease or electrolyte imbalance is unclear. Materials and Methods: Eighty-eight patients with dengue fever required hospitalization during the epidemic in 2010. Twelve of them presented with acute neuromuscular weakness, and we enrolled them for the study. Diagnosis of dengue infection was based on the clinical profile of the patients, positive serum IgM ELISA, NS1 antigen, and serotyping. Complete hemogram, kidney and liver function tests, serum electrolytes, and creatine phosphokinase (CPK) were tested. In addition, two patients underwent nerve conduction velocity (NCV) testing and electromyography. Results: Twelve patients were included in the present study. Their ages were between 18 and 34 years. Fever, myalgia, and motor weakness of the limbs were the most common presenting symptoms. Motor weakness developed on the 2nd to 4th day of illness in 11 of the 12 patients; in one patient, it developed on the 10th day of illness. Ten of the 12 showed hypokalemia. One was a case of Guillain-Barré syndrome and another suffered from myositis; these two underwent NCV testing and electromyography. Serum CPK and SGOT were raised in 8 of the 12 patients; the CPK of the patient with myositis was 5098 IU. All 12 patients had thrombocytopenia. WBC counts were in the normal range. Dengue virus was isolated in three patients, and it was of serotype 1. CSF was normal in all. Within 24 hours, those with hypokalemia recovered with potassium correction. Conclusions: It was concluded that dengue virus infection led to acute neuromuscular weakness because of hypokalemia, myositis, and Guillain-Barré syndrome. It is suggested to look for the presence of hypokalemia in such patients.
Designing Network-based Business Model Ontology
DEFF Research Database (Denmark)
Hashemi Nekoo, Ali Reza; Ashourizadeh, Shayegheh; Zarei, Behrouz
2015-01-01
is going to propose an e-business model ontology from the network point of view and its application in the real world. The suggested ontology for network-based businesses is composed of individuals' characteristics and what kind of resources they own, as well as their connections and pre-conceptions of connections, such as shared mental models and trust. However, it mostly covers previous business model elements. To confirm the applicability of this ontology, it has been implemented in a business angel network, showing how it works....
Identification of Differentially Methylated Sites with Weak Methylation Effects
Directory of Open Access Journals (Sweden)
Hong Tran
2018-02-01
Full Text Available Deoxyribonucleic acid (DNA methylation is an epigenetic alteration crucial for regulating stress responses. Identifying large-scale DNA methylation at single nucleotide resolution is made possible by whole genome bisulfite sequencing. An essential task following the generation of bisulfite sequencing data is to detect differentially methylated cytosines (DMCs among treatments. Most statistical methods for DMC detection do not consider the dependency of methylation patterns across the genome, thus possibly inflating type I error. Furthermore, small sample sizes and weak methylation effects among different phenotype categories make it difficult for these statistical methods to accurately detect DMCs. To address these issues, the wavelet-based functional mixed model (WFMM was introduced to detect DMCs. To further examine the performance of WFMM in detecting weak differential methylation events, we used both simulated and empirical data and compare WFMM performance to a popular DMC detection tool methylKit. Analyses of simulated data that replicated the effects of the herbicide glyphosate on DNA methylation in Arabidopsis thaliana show that WFMM results in higher sensitivity and specificity in detecting DMCs compared to methylKit, especially when the methylation differences among phenotype groups are small. Moreover, the performance of WFMM is robust with respect to small sample sizes, making it particularly attractive considering the current high costs of bisulfite sequencing. Analysis of empirical Arabidopsis thaliana data under varying glyphosate dosages, and the analysis of monozygotic (MZ twins who have different pain sensitivities—both datasets have weak methylation effects of <1%—show that WFMM can identify more relevant DMCs related to the phenotype of interest than methylKit. Differentially methylated regions (DMRs are genomic regions with different DNA methylation status across biological samples. DMRs and DMCs are essentially the same
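The difficulty the abstract above highlights — weak methylation effects and small samples defeating per-site tests — can be made concrete with a simple two-proportion z-test on read counts at one cytosine. This is neither WFMM nor methylKit; it is a hypothetical per-site test with illustrative counts, included only to show why weak effects vanish into the noise at modest coverage.

```python
import math

def two_prop_z(m1, n1, m2, n2):
    """z statistic for the difference of two methylation proportions:
    m methylated reads out of n total reads in each of two groups."""
    p1, p2 = m1 / n1, m2 / n2
    p = (m1 + m2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# A strong 30-percentage-point difference is detectable at 30x coverage...
z_strong = two_prop_z(m1=24, n1=30, m2=15, n2=30)
# ...but a weak difference of a single extra methylated read is not
z_weak = two_prop_z(m1=16, n1=30, m2=15, n2=30)
```

At the conventional 1.96 threshold the strong effect is called significant while the weak one is indistinguishable from noise, which is why methods that borrow strength across neighboring sites (as WFMM does) gain sensitivity for weak effects.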
Model Predictive Control based on Finite Impulse Response Models
DEFF Research Database (Denmark)
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
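A minimal unconstrained sketch of the regularized l2 FIR predictive-control idea is given below. The FIR coefficients, horizon, and regularization weight are illustrative assumptions, and the paper's input and input-rate constraints and constant output disturbance filter are omitted for brevity.

```python
import numpy as np

# FIR model of a stable first-order process: h[i] is the response to an
# input applied i steps earlier (a direct term at i = 0 is included for
# simplicity); the truncated impulse response has near-unit steady gain.
n, N = 10, 30                        # FIR length, prediction horizon
h = 0.5 * 0.5 ** np.arange(n)

# Prediction matrix: y = Gamma @ u, lower-triangular Toeplitz in h
Gamma = np.zeros((N, N))
for k in range(N):
    for i in range(min(k + 1, n)):
        Gamma[k, k - i] = h[i]

r = np.ones(N)                       # setpoint trajectory
lam = 1e-3                           # l2 regularization weight

# Regularized least-squares input sequence:
#   u* = argmin ||Gamma u - r||^2 + lam ||u||^2
u = np.linalg.solve(Gamma.T @ Gamma + lam * np.eye(N), Gamma.T @ r)
y = Gamma @ u                        # predicted outputs over the horizon
```

With a small regularization weight the predicted output tracks the setpoint closely over the horizon; increasing `lam` trades tracking accuracy for smaller input effort, which is the basic tuning knob of this controller class.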
Incident Duration Modeling Using Flexible Parametric Hazard-Based Models
Directory of Open Access Journals (Sweden)
Ruimin Li
2014-01-01
Full Text Available Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, for which the best-fitting distributions also differ. Given the best hazard-based model for each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration.
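The Weibull model mentioned above is the simplest of the hazard-based families considered; its survival, hazard, and median-duration formulas are sketched below. The parameter values are illustrative assumptions, not estimates fitted to the Beijing incident data.

```python
import math

def weibull_survival(t, lam, k):
    """S(t): probability an incident lasts longer than t minutes."""
    return math.exp(-((t / lam) ** k))

def weibull_hazard(t, lam, k):
    """h(t): instantaneous clearance rate at duration t."""
    return (k / lam) * (t / lam) ** (k - 1)

def weibull_median(lam, k):
    """Median duration, obtained by solving S(t) = 0.5."""
    return lam * math.log(2) ** (1 / k)

# Illustrative parameters: scale of 40 minutes, shape k > 1 so the
# clearance rate rises as an incident drags on (positive duration dependence)
lam, k = 40.0, 1.3
median = weibull_median(lam, k)
```

The sign of duration dependence is read directly off the shape parameter: k > 1 means incidents become increasingly likely to clear as time passes, while k < 1 would mean long incidents tend to last even longer — one of the substantive questions such models answer.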
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
Energy Technology Data Exchange (ETDEWEB)
Huang, Shengzhi; Ming, Bo; Huang, Qiang; Leng, Guoyong; Hou, Beibei
2017-05-05
It is critically important to accurately predict the NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
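The entropy weight method used above has a standard form that can be sketched compactly: columns of a performance matrix that vary more carry more information (lower normalized entropy) and therefore receive larger weights. The matrix below is illustrative, not the study's MLR/ANN/SVM skill scores, and all entries are assumed positive.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for an (n samples x m models) positive matrix X.
    Columns with more variation (lower entropy) get larger weights."""
    n, m = X.shape
    P = X / X.sum(axis=0)                        # normalize each column
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)  # entropy scaled to [0, 1]
    d = 1.0 - e                                  # degree of diversification
    return d / d.sum()                           # weights sum to 1

# Illustrative skill scores of three models over five validation periods;
# the second model's performance varies most across periods.
X = np.array([[0.70, 0.50, 0.55],
              [0.72, 0.90, 0.56],
              [0.71, 0.20, 0.54],
              [0.69, 0.80, 0.55],
              [0.70, 0.10, 0.56]])
w = entropy_weights(X)
```

A combination forecast is then the weight vector applied to the individual model predictions, e.g. `cfm = preds @ w` for a matrix of member predictions.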
A Weak Solution of a Stochastic Nonlinear Problem
Directory of Open Access Journals (Sweden)
M. L. Hadji
2015-01-01
Full Text Available We consider a problem modeling a porous medium with a random perturbation. This model occurs in many applications such as biology, medical sciences, oil exploitation, and chemical engineering. Many authors have focused their study mostly on the deterministic case. The most classical treatment was due to Biot in the 1950s, who suggested ignoring everything that happens at the microscopic level in order to apply the principles of continuum mechanics at the macroscopic level. Here we consider a stochastic problem, that is, a problem with a random perturbation. First we prove a result on the existence and uniqueness of the solution by making use of the weak formulation. Furthermore, we use a numerical scheme based on finite differences to present numerical results.
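A toy version of the finite-difference approach mentioned above can be sketched for a 1-D porous-medium equation with a small additive random perturbation, u_t = (u^m)_xx + noise. The equation form, boundary conditions, discretization, and noise model are all illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Explicit finite differences for u_t = (u^m)_xx + sigma * dW/dt on [0, 1]
m, sigma = 2, 0.01
nx, nt = 51, 400
dx, dt = 1.0 / (nx - 1), 2e-5        # dt well below dx^2 / (2 * max diffusivity)

x = np.linspace(0.0, 1.0, nx)
u = np.maximum(0.0, 1.0 - 16 * (x - 0.5) ** 2)   # compactly supported bump

for _ in range(nt):
    v = u ** m                                    # nonlinear flux variable
    lap = (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx ** 2
    noise = sigma * np.sqrt(dt) * rng.normal(size=nx)  # Euler-Maruyama noise
    u = u + dt * lap + noise
    u[0] = u[-1] = 0.0               # homogeneous Dirichlet boundaries
    u = np.maximum(u, 0.0)           # keep the saturation nonnegative
```

The clipping step reflects the physical constraint that a saturation cannot be negative; more careful stochastic schemes handle positivity intrinsically rather than by projection.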
The Weak Lensing Masses of Filaments between Luminous Red Galaxies
Epps, Seth D.; Hudson, Michael J.
2017-07-01
In the standard model of non-linear structure formation, a cosmic web of dark-matter-dominated filaments connects dark matter haloes. In this paper, we stack the weak lensing signal of an ensemble of filaments between groups and clusters of galaxies. Specifically, we detect the weak lensing signal, using CFHTLenS galaxy ellipticities, from stacked filaments between Sloan Digital Sky Survey (SDSS)-III/Baryon Oscillation Spectroscopic Survey luminous red galaxies (LRGs). As a control, we compare the physical LRG pairs with projected LRG pairs that are more widely separated in redshift space. We detect the excess filament mass density in the physical pairs at the 5σ level, finding a mass of (1.6 ± 0.3) × 10^13 M⊙ for a stacked filament region 7.1 h^-1 Mpc long and 2.5 h^-1 Mpc wide. This filament signal is compared with a model based on the three-point galaxy-galaxy-convergence correlation function, as developed in Clampitt et al., yielding reasonable agreement.
Knowledge-Based Environmental Context Modeling
Pukite, P. R.; Challou, D. J.
2017-12-01
As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient
McCullough, Sheila M; Constable, Peter D
2003-08-01
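The strong ion framework in the abstract above lends itself to a short worked example: plasma pH follows from electroneutrality once SID, PCO2, Atot, and Ka are known. The sketch below uses the feline Atot (24.3 mmol/L) and pKa (7.17) reported above; the CO2 solubility coefficient and apparent pK1 are assumed standard plasma values (S ≈ 0.0307 mmol/L/mm Hg, pK1 ≈ 6.12) and the bisection solver is purely illustrative.

```python
def plasma_ph(sid, pco2, atot=24.3, pka=7.17, s=0.0307, pk1=6.12):
    """Solve the electroneutrality condition  SID = [HCO3-] + [A-]  for pH,
    with [HCO3-] = s * PCO2 * 10**(pH - pK1) and
    [A-] = Atot / (1 + 10**(pKa - pH))  (mmol/L; PCO2 in mm Hg).
    Atot and pKa default to the feline values reported above; s and pK1
    are assumed standard plasma values."""
    lo, hi = 6.0, 8.5
    for _ in range(60):                       # bisection on a monotone function
        ph = 0.5 * (lo + hi)
        hco3 = s * pco2 * 10 ** (ph - pk1)
        a_minus = atot / (1 + 10 ** (pka - ph))
        if hco3 + a_minus < sid:
            lo = ph                           # anion total too low: raise pH
        else:
            hi = ph
    return ph

ph_ref = plasma_ph(sid=30.0, pco2=30.0)          # near the reported pH 7.35
dph_per_sid = ph_ref - plasma_ph(sid=29.0, pco2=30.0)
```

With these inputs the solver reproduces the sensitivity quoted in the abstract: lowering SID by 1 mEq/L lowers pH by roughly 0.02.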
To determine values for the total concentration of nonvolatile weak acids (Atot) and effective dissociation constant of nonvolatile weak acids (Ka) in plasma of cats. Convenience plasma samples of 5 male and 5 female healthy adult cats. Cats were sedated, and 20 mL of blood was obtained from the jugular vein. Plasma was tonometered at 37 degrees C to systematically vary PCO2 from 8 to 156 mm Hg, thereby altering plasma pH from 6.90 to 7.97. Plasma pH, PCO2, and concentrations of quantitatively important strong cations (Na+, K+, and Ca2+), strong anions (Cl-, lactate), and buffer ions (total protein, albumin, and phosphate) were determined. Strong ion difference was estimated from the measured strong ion concentrations and nonlinear regression used to calculate Atot and Ka from the measured pH and PCO2 and estimated strong ion difference. Mean (+/- SD) values were as follows: Atot = 24.3 +/- 4.6 mmol/L (equivalent to 0.35 mmol/g of protein or 0.76 mmol/g of albumin); Ka = 0.67 +/- 0.40 x 10(-7); and the negative logarithm (base 10) of Ka (pKa) = 7.17. At 37 degrees C, pH of 7.35, and a partial pressure of CO2 (PCO2) of 30 mm Hg, the calculated venous strong ion difference was 30 mEq/L. These results indicate that at a plasma pH of 7.35, a 1 mEq/L decrease in strong ion difference will decrease pH by 0.020, a 1 mm Hg decrease in PCO2 will increase plasma pH by 0.011, and a 1 g/dL decrease in albumin concentration will increase plasma pH by 0.093.