International Nuclear Information System (INIS)
Braendas, E.
1986-01-01
The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). By contrast, the Berlekamp-Massey algorithm needs O(N^2), where N (≅2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; the linear complexity is therefore generally given only as an estimate. The linearization method, on the other hand, calculates from the algorithm of the PRNG, so it can determine the lower bound of the linear complexity.
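The Berlekamp-Massey algorithm that the abstract uses as its baseline can be stated compactly. Below is a minimal Python sketch over GF(2), independent of the paper's linearization method; it returns the length of the shortest LFSR generating a given bit sequence:

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): length L of the shortest LFSR
    that generates the given bit sequence."""
    n = len(bits)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy: does the current LFSR predict bit i?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            # c(x) <- c(x) + x^(i-m) * b(x)
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

print(linear_complexity([1, 1, 0] * 4))  # periodic 110110... has linear complexity 2
```

The cost of this scan is quadratic in the sequence length N, which is the O(N^2) figure quoted above.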
Information geometric methods for complexity
Felice, Domenico; Cafaro, Carlo; Mancini, Stefano
2018-03-01
Research on the use of information geometry (IG) in modern physics has witnessed significant advances recently. In this review article, we report on the utilization of IG methods to define measures of complexity in both classical and, whenever available, quantum physical settings. A paradigmatic example of a dramatic change in complexity is given by phase transitions (PTs). Hence, we review both global and local aspects of PTs described in terms of the scalar curvature of the parameter manifold and the components of the metric tensor, respectively. We also report on the behavior of geodesic paths on the parameter manifold used to gain insight into the dynamics of PTs. Going further, we survey measures of complexity arising in the geometric framework. In particular, we quantify complexity of networks in terms of the Riemannian volume of the parameter space of a statistical manifold associated with a given network. We are also concerned with complexity measures that account for the interactions of a given number of parts of a system that cannot be described in terms of a smaller number of parts of the system. Finally, we investigate complexity measures of entropic motion on curved statistical manifolds that arise from a probabilistic description of physical systems in the presence of limited information. The Kullback-Leibler divergence, the distance to an exponential family and volumes of curved parameter manifolds, are examples of essential IG notions exploited in our discussion of complexity. We conclude by discussing strengths, limits, and possible future applications of IG methods to the physics of complexity.
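As a small concrete instance of one of the IG notions listed above, the Kullback-Leibler divergence between two discrete distributions can be computed directly; this is the generic textbook formula, not code from the review:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions
    (natural log; terms with p_i = 0 contribute zero)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl_divergence([0.5, 0.5], [0.25, 0.75]))  # 0.5 * ln(4/3), about 0.144
```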
Scattering methods in complex fluids
Chen, Sow-Hsin
2015-01-01
Summarising recent research on the physics of complex liquids, this in-depth analysis examines the topic of complex liquids from a modern perspective, addressing experimental, computational and theoretical aspects of the field. Selecting only the most interesting contemporary developments in this rich field of research, the authors present multiple examples including aggregation, gel formation and glass transition, in systems undergoing percolation, at criticality, or in supercooled states. Connecting experiments and simulation with key theoretical principles, and covering numerous systems including micelles, micro-emulsions, biological systems, and cement pastes, this unique text is an invaluable resource for graduate students and researchers looking to explore and understand the expanding field of complex fluids.
International Nuclear Information System (INIS)
Pilipenko, A.T.; Karetnikova, E.A.
1982-01-01
Complexing of indium with 4-methyl-2-(2'-oxynaphthylazo-1')-thiazole is studied. The optimal pH region for In complexing is 3.5-5.0. The component ratio in the complex is 1:1. The optimal conditions for extracting the formed complexes with chloroform, the spectrophotometric characteristics of the complexes and the stability constants are determined. The determination of In with the reagent should be conducted in aqueous-alcohol medium at a 5-fold excess of the reagent. At a 1 cm thickness of the absorbing layer the sensitivity of determination is 0.024. Phosphate, EDTA, citrate, oxalate and tartrate interfere with the determination of In. A technique for the determination of indium impurities in alkali-halide crystals is developed.
Immune Algorithm Complex Method for Transducer Calibration
Directory of Open Access Journals (Sweden)
YU Jiangming
2014-08-01
Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune algorithm complex modeling approach is proposed, and simulated studies on the calibration of three multiple-output transducers are made using the developed complex modeling. The simulated and experimental results show that the immune algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.
Continuum Level Density in Complex Scaling Method
International Nuclear Information System (INIS)
Suzuki, R.; Myo, T.; Kato, K.
2005-01-01
A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.
Modeling complex work systems - method meets reality
van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert
1996-01-01
Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the…
Method Points: towards a metric for method complexity
Directory of Open Access Journals (Sweden)
Graham McLeod
1998-11-01
Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as in validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.
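The abstract does not give the metric's actual formula. Purely as an illustration of a Function-Point-style weighted count, a sketch might look like the following, where every element kind, weight and complexity rating is a hypothetical choice of ours, not taken from the paper:

```python
# Hypothetical element kinds and weights, in the spirit of Function Points:
# each method element contributes a base weight scaled by a complexity rating.
WEIGHTS = {"deliverable": 4, "technique": 3, "notation_element": 1}
RATING = {"simple": 0.75, "average": 1.0, "complex": 1.25}

def method_points(elements):
    """elements: iterable of (kind, rating) pairs describing a method."""
    return sum(WEIGHTS[kind] * RATING[rating] for kind, rating in elements)

# Toy comparison of a small method fragment:
uml_subset = [("deliverable", "average"), ("notation_element", "complex"),
              ("notation_element", "simple"), ("technique", "average")]
print(method_points(uml_subset))  # 4.0 + 1.25 + 0.75 + 3.0 = 9.0
```

Two competing methods could then be compared by scoring their respective element inventories with the same weight table.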
Methods for determination of extractable complex composition
International Nuclear Information System (INIS)
Sergievskij, V.V.
1984-01-01
Specific features and restrictions of the main methods for determining extractable complex composition from distribution data (methods of equilibrium shift, saturation, and mathematical models) are considered. Special attention is given to the solution of inverse problems with account taken of the hydration effect on the activity of organic-phase components. Using the example of the systems lithium halides - isoamyl alcohol, thorium nitrate - n-hexyl alcohol, mineral acids - tri-n-butyl phosphate (TBP), and metal nitrates (uranium, lanthanides) - TBP, the results on determining the stoichiometry of extraction equilibria obtained by the various methods are compared.
An improved sampling method of complex network
Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing
2014-12-01
Subnet sampling is an important topic in complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can keep the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.
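The abstract does not specify the RMSC procedure in detail. As a sketch of the snowball component alone (the random-sampling ingredient of RMSC is omitted, and all parameter names are ours):

```python
import random

def snowball_sample(adj, seeds, rounds, k):
    """One snowball process: from each frontier node, follow up to k
    randomly chosen neighbours; repeat for `rounds` waves.
    `adj` is an adjacency dict mapping node -> list of neighbours."""
    sampled = set(seeds)
    frontier = set(seeds)
    for _ in range(rounds):
        nxt = set()
        for u in frontier:
            for v in random.sample(adj[u], min(k, len(adj[u]))):
                if v not in sampled:
                    nxt.add(v)
        sampled |= nxt
        frontier = nxt          # next wave grows from the newly added nodes
    return sampled

chain_adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(snowball_sample(chain_adj, [0], 4, 2))  # reaches every node of the chain
```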
Level density in the complex scaling method
International Nuclear Information System (INIS)
Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki
2005-01-01
It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
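The exact CLD against which the paper compares follows from the phase-shift relation ρ(E) = (1/π) dδ/dE. A simple finite-difference evaluation on a Breit-Wigner phase shift (a toy resonance of our own choosing, not the paper's system) is:

```python
import math

def continuum_level_density(energies, deltas):
    """rho(E) = (1/pi) * d(delta)/dE via central differences
    (returned only at the interior grid points)."""
    rho = []
    for i in range(1, len(energies) - 1):
        slope = (deltas[i + 1] - deltas[i - 1]) / (energies[i + 1] - energies[i - 1])
        rho.append(slope / math.pi)
    return rho

# Breit-Wigner resonance at E_r = 1.0 with width Gamma = 0.2:
# delta(E) = pi/2 + arctan((E - E_r) / (Gamma/2)),
# so rho(E_r) should equal 2 / (pi * Gamma).
E = [0.99, 1.00, 1.01]
d = [math.pi / 2 + math.atan((e - 1.0) / 0.1) for e in E]
rho = continuum_level_density(E, d)
```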
Cut Based Method for Comparing Complex Networks.
Liu, Qun; Dong, Zhishan; Wang, En
2018-03-23
Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
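For intuition about the cut distance itself, it can be computed by brute force for two graphs on the same small vertex set: the maximum over vertex subsets S, T of the normalized difference in edge mass between S and T. This enumeration is exponential in n, which is exactly why real network comparisons need the machine-learning machinery described above; the sketch below is didactic only:

```python
import itertools

def cut_distance(A, B):
    """Brute-force cut distance between two graphs on the same vertex
    set, given as adjacency matrices A and B:
    max over S, T of |e_A(S,T) - e_B(S,T)| / n^2."""
    n = len(A)
    verts = range(n)
    best = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(verts, r):
            for q in range(n + 1):
                for T in itertools.combinations(verts, q):
                    e = sum(A[i][j] - B[i][j] for i in S for j in T)
                    best = max(best, abs(e) / n ** 2)
    return best

K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]        # triangle
empty = [[0] * 3 for _ in range(3)]
print(cut_distance(K3, empty))  # 6/9: all six directed edge slots differ
```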
Complex networks principles, methods and applications
Latora, Vito; Russo, Giovanni
2017-01-01
Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among the others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...
A Low Complexity Discrete Radiosity Method
Chatelier , Pierre Yves; Malgouyres , Rémy
2006-01-01
International audience; Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to use a discretization of a scene into voxels and perform some discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method may be as precise as a patch-based radiosity using hemicube computation for form-factors, but it lowers the overall theoretical complexity to an O(N log N) + O(N), where the O(N) is largel...
Measurement methods on the complexity of network
Institute of Scientific and Technical Information of China (English)
LIN Lin; DING Gang; CHEN Guo-song
2010-01-01
Based on the size of a network and the number of paths in the network, we proposed a model of topology complexity to measure the topology complexity of the network. Based on analyses of the effects of the amount of equipment, the types of equipment and the processing time of the node on the complexity of an equipment-constrained network, a complexity model of the equipment-constrained network was constructed to measure its integrated complexity. The algorithms for the two models were also developed. An automatic generator of random single-label networks was developed to test the models. The results show that the models can correctly evaluate the topology complexity and the integrated complexity of the networks.
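The abstract does not give the model's functional form. Purely as an illustration of combining network size with a path count, one might write (the combination `n * log(1 + paths)` is our own stand-in, not the paper's model):

```python
import math

def simple_paths(adj, s, t, seen=None):
    """Count simple s-t paths by depth-first enumeration
    (exponential in the worst case; fine for small networks)."""
    if s == t:
        return 1
    seen = (seen or set()) | {s}
    return sum(simple_paths(adj, v, t, seen) for v in adj[s] if v not in seen)

def topology_complexity(adj, s, t):
    """Illustrative combination of network size and path count."""
    return len(adj) * math.log(1 + simple_paths(adj, s, t))

diamond = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(simple_paths(diamond, 0, 3))  # two distinct routes through the diamond
```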
Research on image complexity evaluation method based on color information
Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo
2017-11-01
In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity into three subjective levels: low, medium and high complexity. Image features are then extracted, and finally a function between the complexity value and the color characteristic model is established. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from the image features, and that the results agree well with human visual perception of complexity. Color image complexity thus has a certain reference value.
Complex operator method of the hydrogen atom
International Nuclear Information System (INIS)
Jiang, X.
1989-01-01
Frequently the hydrogen atom eigenvalue problem is analytically solved by solving a radial wave equation for a particle in a Coulomb field. In this article, complex coordinates are introduced, and an expression for the energy levels of the hydrogen atom is obtained by means of the algebraic solution of operators. The form of this solution is in accord with that of the analytical solution.
Rising Trend: Complex and sophisticated attack methods
Indian Academy of Sciences (India)
Stux, DuQu, Nitro, Luckycat, Exploit Kits, FLAME. ADSL/SoHo Router Compromise. Botnets of compromised ADSL/SoHo Routers; User Redirection via malicious DNS entry. Web Application attacks. SQL Injection, RFI etc. More and more Webshells. More utility to hackers; Increasing complexity and evading mechanisms.
Hybrid recommendation methods in complex networks.
Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N
2015-07-01
We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement of performances of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
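A toy version of such a convex-combination hybrid score, with naive cosine similarities and plain sums standing in for the paper's normalizations, could be:

```python
def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
    return num / den if den else 0.0

def hybrid_scores(R, target, lam):
    """R: binary user-object matrix. Score object o for user `target` as
    lam * (user-similarity part) + (1-lam) * (object-similarity part).
    Already-collected objects score 0."""
    n_users, n_obj = len(R), len(R[0])
    cols = [[R[u][o] for u in range(n_users)] for o in range(n_obj)]
    scores = []
    for o in range(n_obj):
        if R[target][o]:
            scores.append(0.0)
            continue
        user_part = sum(cosine(R[target], R[u]) * R[u][o]
                        for u in range(n_users) if u != target)
        item_part = sum(cosine(cols[o], cols[p]) * R[target][p]
                        for p in range(n_obj))
        scores.append(lam * user_part + (1.0 - lam) * item_part)
    return scores

R = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
s = hybrid_scores(R, target=0, lam=0.5)
```

Sweeping `lam` from 0 to 1 interpolates between a purely object-based and a purely user-based recommender, which is the tunable trade-off the abstract refers to.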
Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems
Directory of Open Access Journals (Sweden)
Hassan Saberi Nik
2014-01-01
Full Text Available We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel-type relaxation ideas to systems of nonlinear differential equations and using Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta-based ode45 solver to show that the MSRM gives accurate results.
Hagbani, Turki Al; Nazzal, Sami
2017-03-30
One approach to enhance curcumin (CUR) aqueous solubility is to use cyclodextrins (CDs) to form inclusion complexes in which CUR is encapsulated as a guest molecule within the internal cavity of the water-soluble CD. Several methods have been reported for the complexation of CUR with CDs. Limited information, however, is available on the use of the autoclave process (AU) in complex formation. The aims of this work were therefore to (1) investigate and evaluate the AU cycle as a complex formation method to enhance CUR solubility; (2) compare the efficacy of the AU process with the freeze-drying (FD) and evaporation (EV) processes in complex formation; and (3) confirm CUR stability by characterizing CUR:CD complexes by NMR, Raman spectroscopy, DSC, and XRD. Significant differences were found in the saturation solubility of CUR from its complexes with CD when prepared by the three complexation methods. The AU yielded a complex with the expected chemical and physical fingerprints for a CUR:CD inclusion complex that maintained the chemical integrity and stability of CUR and provided the highest solubility of CUR in water. Physical and chemical characterization of the AU complexes confirmed the encapsulation of CUR inside the CD cavity and the transformation of the crystalline CUR:CD inclusion complex to an amorphous form. It was concluded that the autoclave process, with its short processing time, could be used as an alternative and efficient method for drug:CD complexation.
Early Language Learning: Complexity and Mixed Methods
Enever, Janet, Ed.; Lindgren, Eva, Ed.
2017-01-01
This is the first collection of research studies to explore the potential for mixed methods to shed light on foreign or second language learning by young learners in instructed contexts. It brings together recent studies undertaken in Cameroon, China, Croatia, Ethiopia, France, Germany, Italy, Kenya, Mexico, Slovenia, Spain, Sweden, Tanzania and…
Complexity, Methodology and Method: Crafting a Critical Process of Research
Alhadeff-Jones, Michel
2013-01-01
This paper defines a theoretical framework aiming to support the actions and reflections of researchers looking for a "method" in order to critically conceive the complexity of a scientific process of research. First, it starts with a brief overview of the core assumptions framing Morin's "paradigm of complexity" and Le…
Energy Technology Data Exchange (ETDEWEB)
Helal, A A; Iman, D M; Khalifa, S M; Aly, H F [Hot Laboratories Center, Atomic Energy Authority, Cairo, (Egypt)
1996-03-01
Strontium-90 represents one of the main radionuclides produced in fission products. Migration of strontium in the environment in case of accidental release is governed by many factors, besides its interaction with the materials present in the environment. Both humic acid and fertilizers are present in agricultural lands and aqueous streams. Other ligands such as EDTA, citrates, and phosphates are present in the environment. The binding and exchange of cations by the soil organic fractions is of importance in soil fertility, because the supply of Na{sup +}, K{sup +}, Ca{sup 2+}, Mg{sup 2+} and certain micronutrients (Cu{sup 2+}, Mn{sup 2+}, Zn{sup 2+} and Fe{sup 3+}) to plants is strongly dependent on the cation exchange capacity of the soil, which may be induced by organic matter. The effect of the presence of certain fertilizers and some environmental ligands on the Sr-humate complex was studied. In general, the fertilizers and the complexing ligands investigated are compared with humic acid in its complexation with strontium. 6 figs.
An efficient Korringa-Kohn-Rostoker method for "complex" lattices
International Nuclear Information System (INIS)
Yussouff, M.; Zeller, R.
1980-10-01
We present a modification of the exact KKR-band structure method which uses (a) a new energy expansion for structure constants and (b) only the reciprocal lattice summation. It is quite efficient and particularly useful for 'complex' lattices. The band structure of hexagonal-close-packed Beryllium at symmetry points is presented as an example of this method. (author)
A direction of developing a mining method and mining complexes
Energy Technology Data Exchange (ETDEWEB)
Gabov, V.V.; Efimov, I.A. [St. Petersburg State Mining Institute, St. Petersburg (Russian Federation). Vorkuta Branch
1996-12-31
An analysis of the mining method as the main factor determining the development stages of mining units is presented. The paper suggests a promising mining method which differs from known ones in the following peculiarities: the directional selectivity of cuts with regard to the coal seam structure, and the cutting speed, thickness and succession of cuts. This method may be implemented by modular complexes (a shield carrying a cutting head for coal mining), their mining devices being supplied with hydraulic drive. An experimental model of the modular complex has been developed. 2 refs.
High-resolution method for evolving complex interface networks
Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-04-01
In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set method.
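The transport step underlying any level-set method can be sketched in one dimension. The paper uses high-order schemes on whole interface networks; the fragment below keeps only the basic first-order upwind transport idea, with all parameters our own:

```python
def advect_level_set(phi, vel, dx, dt, steps):
    """First-order upwind advection of a 1D level-set function phi
    at constant velocity `vel` (boundary nodes are left frozen)."""
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi) - 1):
            if vel > 0:
                new[i] = phi[i] - vel * dt / dx * (phi[i] - phi[i - 1])
            else:
                new[i] = phi[i] - vel * dt / dx * (phi[i + 1] - phi[i])
        phi = new
    return phi

dx, dt = 0.01, 0.005                       # CFL number 0.5
phi0 = [i * dx - 0.3 for i in range(101)]  # zero level (the interface) at x = 0.3
phi = advect_level_set(phi0, 1.0, dx, dt, 40)  # transport for t = 0.2
```

After transport the zero crossing of `phi` sits at x = 0.5, i.e. the interface has moved by vel * t as expected.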
An Extended Newmark-FDTD Method for Complex Dispersive Media
Directory of Open Access Journals (Sweden)
Yu-Qiang Zhang
2018-01-01
Full Text Available Based on polarizability in the form of a complex quadratic rational function, a novel finite-difference time-domain (FDTD) approach combined with the Newmark algorithm is presented for dealing with a complex dispersive medium. In this paper, the time-stepping equation of the polarization vector is derived by applying the Newmark algorithm simultaneously to the two sides of a second-order time-domain differential equation, obtained from the relation between the polarization vector and the electric field intensity in the frequency domain by the inverse Fourier transform. Then, its accuracy and stability are discussed from the two aspects of theoretical analysis and numerical computation. It is observed that this method possesses the advantages of high accuracy, high stability, and a wide application scope, and can thus be applied to the treatment of many complex dispersion models, including the complex conjugate pole residue model, critical point model, modified Lorentz model, and complex quadratic rational function.
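The Newmark time-stepping idea for a generic linear second-order ODE m x'' + c x' + k x = f(t) (which the paper applies to the polarization vector instead) can be sketched as follows; the average-acceleration parameters beta = 1/4, gamma = 1/2 give the unconditionally stable variant:

```python
def newmark(m, c, k, f, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark time stepping for m*x'' + c*x' + k*x = f(t)."""
    x, v = x0, v0
    a = (f(0.0) - c * v - k * x) / m       # consistent initial acceleration
    out = [x]
    for n in range(1, steps + 1):
        t = n * dt
        # predictors from the Newmark expansions
        xp = x + dt * v + dt * dt * (0.5 - beta) * a
        vp = v + dt * (1.0 - gamma) * a
        # solve the (linear) update equation for the new acceleration
        a_new = (f(t) - c * vp - k * xp) / (m + gamma * dt * c + beta * dt * dt * k)
        x = xp + beta * dt * dt * a_new
        v = vp + gamma * dt * a_new
        a = a_new
        out.append(x)
    return out

# Undamped oscillator x'' + x = 0, x(0) = 1: exact solution x(t) = cos(t).
xs = newmark(1.0, 0.0, 1.0, lambda t: 0.0, 1.0, 0.0, 0.01, 628)
```

With gamma = 1/2 the scheme introduces no numerical damping, so after one full period (t ≈ 2π) the solution returns close to its initial value.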
A Qualitative Method to Estimate HSI Display Complexity
International Nuclear Information System (INIS)
Hugo, Jacques; Gertman, David
2013-01-01
There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increase. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation
New complex variable meshless method for advection—diffusion problems
International Nuclear Information System (INIS)
Wang Jian-Fei; Cheng Yu-Min
2013-01-01
In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. Then the corresponding formulas of the ICVMM for advection-diffusion problems are presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems, and has good convergence, accuracy, and computational efficiency.
Methods for forming complex oxidation reaction products including superconducting articles
International Nuclear Information System (INIS)
Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.
1992-01-01
This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product
Complexity analysis of accelerated MCMC methods for Bayesian inversion
International Nuclear Information System (INIS)
Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M
2013-01-01
The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
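A "plain" random-walk Metropolis sampler of the kind whose cost is bounded first in the paper can be sketched on a toy 1D inverse problem; the forward map G(theta) = theta^2, the datum, the prior width and the step size below are all our own choices, and in the PDE setting each call to `log_post` would hide a full forward solve:

```python
import math, random

def metropolis(log_post, theta0, step, n):
    """Random-walk Metropolis: one log-posterior (i.e. forward-solve)
    evaluation per proposed sample."""
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Toy inverse problem: forward map G(theta) = theta**2, one noisy datum,
# Gaussian noise (sigma) and a wide Gaussian prior.
data, sigma = 4.0, 0.5
log_post = lambda th: -0.5 * ((th ** 2 - data) / sigma) ** 2 - 0.5 * th ** 2 / 100.0
random.seed(0)
chain = metropolis(log_post, 1.0, 0.5, 20000)
```

The posterior concentrates near |theta| = 2, and the multilevel strategies described above aim to reach a given sampling accuracy with far fewer fine-level forward solves than this plain chain needs.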
Molecular photoionization using the complex Kohn variational method
International Nuclear Information System (INIS)
Lynch, D.L.; Schneider, B.I.
1992-01-01
We have applied the complex Kohn variational method to the study of molecular-photoionization processes. This requires electron-ion scattering calculations enforcing incoming boundary conditions. The sensitivity of these results to the choice of the cutoff function in the Kohn method has been studied and we have demonstrated that a simple matching of the irregular function to a linear combination of regular functions produces accurate scattering phase shifts
Measurement of complex permittivity of composite materials using waveguide method
Tereshchenko, O.V.; Buesink, Frederik Johannes Karel; Leferink, Frank Bernardus Johannes
2011-01-01
Complex dielectric permittivity of 4 different composite materials has been measured using the transmission-line method. A waveguide fixture in the L, S, C and X bands was used for the measurements. Measurement accuracy is influenced by air gaps between test fixtures and the materials tested. One of the
Analytical Method to Estimate the Complex Permittivity of Oil Samples
Directory of Open Access Journals (Sweden)
Lijuan Su
2018-03-01
In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by such LUT, and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated from the measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.
Ethnographic methods for process evaluations of complex health behaviour interventions.
Morgan-Trimmer, Sarah; Wood, Fiona
2016-05-04
This article outlines the contribution that ethnography could make to process evaluations for trials of complex health-behaviour interventions. Process evaluations are increasingly used to examine how health-behaviour interventions operate to produce outcomes and often employ qualitative methods to do this. Ethnography shares commonalities with the qualitative methods currently used in health-behaviour evaluations but has a distinctive approach over and above these methods. It is an overlooked methodology in trials of complex health-behaviour interventions that has much to contribute to the understanding of how interventions work. These benefits are discussed here with respect to three strengths of ethnographic methodology: (1) producing valid data, (2) understanding data within social contexts, and (3) building theory productively. The limitations of ethnography within the context of process evaluations are also discussed.
Unplanned Complex Suicide-A Consideration of Multiple Methods.
Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish
2018-05-01
Detailed death investigations are mandatory to find out the exact cause and manner of death in non-natural deaths. In this context, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods to cause death is unplanned. There is an increased likelihood that doubts of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported where the victim resorted to multiple methods to end his life, in what appeared to be an unplanned variant based on the death scene investigations. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.
Comparison of association mapping methods in a complex pedigreed population
DEFF Research Database (Denmark)
Sahana, Goutam; Guldbrandtsen, Bernt; Janss, Luc
2010-01-01
to collect SNP signals in intervals, to avoid the scattering of a QTL signal over multiple neighboring SNPs. Methods not accounting for genetic background (full pedigree information) performed worse, and methods using haplotypes were considerably worse with a high false-positive rate, probably due to the presence of low-frequency haplotypes. It was necessary to account for full relationships among individuals to avoid excess false discovery. Although the methods were tested on a cattle pedigree, the results are applicable to any population with a complex pedigree structure...
Complex finite element sensitivity method for creep analysis
International Nuclear Information System (INIS)
Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry
2015-01-01
The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties. - Highlights: • A novel finite element sensitivity method (ZFEM) for creep was introduced. • ZFEM has the capability to calculate accurate partial derivatives. • ZFEM can be used for identification of the skeletal point of creep structures. • ZFEM can be easily implemented in a commercial software, e.g. Abaqus. • ZFEM results were shown to be in excellent agreement with analytical solutions
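The complex-variable trick underlying ZFEM can be illustrated on a scalar function: perturb the input along the imaginary axis and read the derivative off the imaginary part. This is only a minimal sketch of the complex-step derivative idea, not the quadrilateral UEL implementation; the power-law constants below are hypothetical, not values from the paper:

```python
def complex_step_derivative(f, x, h=1e-30):
    # Unlike finite differences there is no subtractive cancellation,
    # so the step h can be taken extremely small.
    return f(complex(x, h)).imag / h

# Norton-Bailey power-law creep rate as a toy response
# (A and n are hypothetical material constants)
A, n = 1e-12, 4.0
creep_rate = lambda stress: A * stress**n

s = 200.0
d_exact = A * n * s**(n - 1)                 # analytic sensitivity
d_zfem = complex_step_derivative(creep_rate, s)
```

The same principle, carried through an entire finite element assembly with complex arithmetic, is what yields the shape, material, and loading derivatives in a single ZFEM run.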
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power...
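The generate-many-datasets approach described above can be sketched for the simplest single-mediator model, M = aX + e1 and Y = bM + e2, with power estimated as the fraction of replications in which the indirect effect ab is significant. The effect sizes, sample size and use of the Sobel test here are illustrative choices, not the paper's framework:

```python
import numpy as np

def ols_slope_se(x, y):
    # slope and its standard error for simple regression y ~ x
    n = len(x)
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    resid = y - b * x - (y.mean() - b * x.mean())
    se = np.sqrt(resid.var(ddof=2) / (n * np.var(x)))
    return b, se

def mediation_power(a=0.4, b=0.4, n=200, reps=500, z_crit=1.96, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)       # mediator equation
        y = b * m + rng.standard_normal(n)       # outcome equation
        a_hat, se_a = ols_slope_se(x, m)
        b_hat, se_b = ols_slope_se(m, y)
        # Sobel test of the indirect effect a*b
        z = (a_hat * b_hat) / np.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)
        hits += abs(z) > z_crit
    return hits / reps

power = mediation_power()
```

For latent variable or growth curve mediation models the simulation step is replaced by draws from the fitted model, but the logic is identical: simulate, test, count rejections.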
Learning with Generalization Capability by Kernel Methods of Bounded Complexity
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra; Sanguineti, M.
2005-01-01
Roč. 21, č. 3 (2005), s. 350-367 ISSN 0885-064X R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : supervised learning * generalization * model complexity * kernel methods * minimization of regularized empirical errors * upper bounds on rates of approximate optimization Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2005
Comparison of topotactic fluorination methods for complex oxide films
Moon, E. J.; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; Barbash, D.; May, S. J.
2015-06-01
We have investigated the synthesis of SrFeO3-αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.
Comparison of topotactic fluorination methods for complex oxide films
Energy Technology Data Exchange (ETDEWEB)
Moon, E. J., E-mail: em582@drexel.edu; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; May, S. J., E-mail: smay@coe.drexel.edu [Department of Materials Science and Engineering, Drexel University, Philadelphia, Pennsylvania 19104 (United States); Barbash, D. [Centralized Research Facilities, Drexel University, Philadelphia, Pennsylvania 19104 (United States)
2015-06-01
We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
Recent years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians
Comparison of topotactic fluorination methods for complex oxide films
Directory of Open Access Journals (Sweden)
E. J. Moon
2015-06-01
We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.
Method for analysis the complex grounding cables system
International Nuclear Information System (INIS)
Ackovski, R.; Acevski, N.
2002-01-01
A new iterative method for the analysis of the performance of complex grounding systems (GS) in underground cable power networks with coated and/or uncoated metal-sheathed cables is proposed in this paper. The analyzed grounding system consists of the grounding grid of a high voltage (HV) supplying transformer station (TS), medium voltage/low voltage (MV/LV) consumer TSs and an arbitrary number of power cables connecting them. The derived method takes into consideration the voltage drops in the cable sheaths and the mutual influence among all earthing electrodes, due to the resistive coupling through the soil. By means of the presented method it is possible to calculate the main grounding system performances, such as earth electrode potentials under short circuit fault to ground conditions, earth fault current distribution in the whole complex grounding system, step and touch voltages in the vicinity of the earthing electrodes dissipating the fault current in the earth, impedances (resistances) to ground of all possible fault locations, apparent shield impedances to ground of all power cables, etc. The proposed method is based on the admittance summation method [1] and is appropriately extended, so that it takes into account resistive coupling between the elements that constitute the GS. (Author)
A dissipative particle dynamics method for arbitrarily complex geometries
Li, Zhen; Bian, Xin; Tang, Yu-Hang; Karniadakis, George Em
2018-02-01
Dissipative particle dynamics (DPD) is an effective Lagrangian method for modeling complex fluids in the mesoscale regime but so far it has been limited to relatively simple geometries. Here, we formulate a local detection method for DPD involving arbitrarily shaped geometric three-dimensional domains. By introducing an indicator variable of boundary volume fraction (BVF) for each fluid particle, the boundary of arbitrarily shaped objects is detected on-the-fly for the moving fluid particles using only the local particle configuration. Therefore, this approach eliminates the need for an analytical description of the boundary and geometry of objects in DPD simulations and makes it possible to load the geometry of a system directly from experimental images or computer-aided designs/drawings. More specifically, the BVF of a fluid particle is defined by the weighted summation over its neighboring particles within a cutoff distance. Wall penetration is inferred from the value of the BVF and prevented by a predictor-corrector algorithm. The no-slip boundary condition is achieved by employing effective dissipative coefficients for liquid-solid interactions. Quantitative evaluations of the new method are performed for the plane Poiseuille flow, the plane Couette flow and the Wannier flow in a cylindrical domain and compared with their corresponding analytical solutions and (high-order) spectral element solution of the Navier-Stokes equations. We verify that the proposed method yields correct no-slip boundary conditions for velocity and generates negligible fluctuations of density and temperature in the vicinity of the wall surface. Moreover, we construct a very complex 3D geometry - the "Brown Pacman" microfluidic device - to explicitly demonstrate how to construct a DPD system with complex geometry directly from loading a graphical image. Subsequently, we simulate the flow of a surfactant solution through this complex microfluidic device using the new method.
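The BVF indicator described above can be sketched as a weighted sum over wall particles within a cutoff. The quadratic weight, cutoff and flat-wall geometry below are illustrative assumptions; the paper defines its own weighting and couples the indicator to a predictor-corrector step:

```python
import numpy as np

RC = 1.0  # cutoff distance (illustrative)

def weight(r, rc=RC):
    # simple DPD-style weight, nonzero only inside the cutoff
    return (1.0 - r / rc) ** 2 if r < rc else 0.0

def bvf(fluid_pos, solid_pos):
    """Boundary volume fraction: weighted sum over nearby solid particles."""
    d = np.linalg.norm(solid_pos - fluid_pos, axis=1)
    return sum(weight(r) for r in d)

# wall particles tile the plane z = 0 (a stand-in for an arbitrary
# geometry that could be loaded from an image)
xs, ys = np.meshgrid(np.linspace(-2, 2, 17), np.linspace(-2, 2, 17))
wall = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

near_wall = np.array([0.0, 0.0, 0.2])   # fluid particle close to the wall
in_bulk   = np.array([0.0, 0.0, 1.5])   # fluid particle beyond the cutoff

bvf_near, bvf_bulk = bvf(near_wall, wall), bvf(in_bulk, wall)
```

A particle whose BVF rises above a threshold is flagged as approaching the wall, which is what lets the method detect arbitrary boundaries without any analytical surface description.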
Directed forgetting of complex pictures in an item method paradigm.
Hauswald, Anne; Kissler, Johanna
2008-11-01
An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task where all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect--that is, better recognition of "to-be-remembered" than of "to-be-forgotten" pictures--was observed, although its size was smaller than previously reported for words or line drawings. The magnitude of the directed forgetting effect correlated negatively with participants' depression and dissociation scores. The results indicate that, at least in an item method, directed forgetting occurs for complex pictures as well as words and simple line drawings. Furthermore, people with higher levels of dissociative or depressive symptoms exhibit altered memory encoding patterns.
Equivalence of the generalized and complex Kohn variational methods
Energy Technology Data Exchange (ETDEWEB)
Cooper, J N; Armour, E A G [School of Mathematical Sciences, University Park, Nottingham NG7 2RD (United Kingdom); Plummer, M, E-mail: pmxjnc@googlemail.co [STFC Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)
2010-04-30
For Kohn variational calculations on low energy (e⁺-H₂) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.
Equivalence of the generalized and complex Kohn variational methods
International Nuclear Information System (INIS)
Cooper, J N; Armour, E A G; Plummer, M
2010-01-01
For Kohn variational calculations on low energy (e⁺-H₂) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.
Hexographic Method of Complex Town-Planning Terrain Estimate
Khudyakov, A. Ju
2017-11-01
The article deals with the vital problem of a complex town-planning analysis based on the “hexographic” graphic-analytic method, makes a comparison with conventional terrain estimate methods and contains examples of the method's application. It describes the author's procedure for estimating restrictions and building a mathematical model that reflects not only conventional town-planning restrictions, but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory's potential. It is possible to use an unlimited number of estimated factors. The method can be used for the integrated assessment of urban areas. In addition, it can be used for preliminary evaluation of a territory's commercial attractiveness when preparing investment projects. The technique produces simple, informative graphics. The graphics are straightforward for experts to interpret, and a definite advantage is that non-professionals can also readily grasp the results. Thus, it is possible to build a dialogue between professionals and the public on a new level, making it possible to take into account the interests of various parties. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also
Analysis and application of classification methods of complex carbonate reservoirs
Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei
2018-06-01
There are abundant carbonate reservoirs from the Cenozoic to Mesozoic era in the Middle East. Due to variation in the sedimentary environment and diagenetic processes of carbonate reservoirs, several porosity types coexist in carbonate reservoirs. Because of the complex lithologies and pore types, as well as the impact of microfractures, the pore structure is very complicated, and it is therefore difficult to accurately calculate the reservoir parameters. In order to accurately evaluate carbonate reservoirs, based on the pore structure evaluation of carbonate reservoirs, the classification methods of carbonate reservoirs are analyzed based on capillary pressure curves and flow units. Although the carbonate reservoirs can be classified from capillary pressure curves, the resulting relationship between porosity and permeability is not ideal. On the basis of flow units, a high-precision functional relationship between porosity and permeability can be established after classification. Therefore, the carbonate reservoirs can be quantitatively evaluated based on the classification of flow units. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD. Similarly, the average absolute error of calculated permeability of limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately. Therefore, characterizing pore structures and classifying reservoir types are very important to accurate evaluation of complex carbonate reservoirs in the Middle East.
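One widely used flow-unit scheme (the abstract does not specify which one the authors adopt) groups core samples by the Flow Zone Indicator (FZI), computed from porosity φ (fraction) and permeability k (mD), after which a tight porosity-permeability relation can be fitted within each unit. The measurements below are hypothetical:

```python
import numpy as np

def flow_zone_indicator(k_md, phi):
    # RQI = 0.0314 * sqrt(k/phi), phi_z = phi/(1-phi), FZI = RQI/phi_z
    rqi = 0.0314 * np.sqrt(k_md / phi)      # reservoir quality index (um)
    phi_z = phi / (1.0 - phi)               # normalized porosity
    return rqi / phi_z

# hypothetical core-plug measurements
k = np.array([0.5, 5.0, 50.0, 500.0])       # permeability, mD
phi = np.array([0.08, 0.12, 0.18, 0.24])    # porosity, fraction

fzi = flow_zone_indicator(k, phi)
# samples with similar log(FZI) belong to the same flow unit
units = np.digitize(np.log10(fzi), bins=[-0.5, 0.0, 0.5])
```

Fitting permeability from porosity separately within each unit, instead of over the whole dataset, is what drives down the average absolute error figures quoted above.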
Approaching complexity by stochastic methods: From biological systems to turbulence
Energy Technology Data Exchange (ETDEWEB)
Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)
2011-09-15
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
Approaching complexity by stochastic methods: From biological systems to turbulence
International Nuclear Information System (INIS)
Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.
2011-01-01
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
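The reconstruction task described in point (i) can be sketched on synthetic data: simulate an Ornstein-Uhlenbeck process dx = -θx dt + σ dW and recover the drift from the first conditional moment D1(x) = ⟨Δx | x⟩/Δt. The parameters and the linear fit are illustrative choices, not the review's general binning procedure:

```python
import numpy as np

theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000
rng = np.random.default_rng(42)

# Euler-Maruyama integration of the Langevin equation
x = np.empty(n)
x[0] = 0.0
noise = rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * noise[i]

# first Kramers-Moyal coefficient via least squares:
# fit <dx>/dt ~ slope * x, where slope should recover -theta
dx_dt = np.diff(x) / dt
slope = np.polyfit(x[:-1], dx_dt, 1)[0]
```

In practice one first checks that the data are Markovian above the Markov-Einstein scale before trusting such conditional-moment estimates.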
Complexity methods applied to turbulence in plasma astrophysics
Vlahos, L.; Isliker, H.
2016-09-01
In this review, many of the well-known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere where the magnetic energy is dissipated explosively. Several well documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organized characteristics of the explosive magnetic energy release and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Nonlinear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and unstable current sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the redistribution of the magnetic field locally, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a Cellular Automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
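The SOC cellular automaton idea invoked in point (c) can be illustrated with the canonical Bak-Tang-Wiesenfeld sandpile: drive the system one grain at a time and let sites topple when they exceed a critical threshold, recording avalanche sizes. This is only the textbook toy model; the solar-flare automata in the review use magnetic-field-based redistribution rules:

```python
import numpy as np

def drive_and_relax(grid, site, threshold=4):
    """Add one grain at `site`, topple until stable, return avalanche size."""
    grid[site] += 1
    size = 0
    stack = [site] if grid[site] >= threshold else []
    while stack:
        i, j = stack.pop()
        if grid[i, j] < threshold:
            continue
        grid[i, j] -= 4                      # topple: redistribute 4 grains
        size += 1
        if grid[i, j] >= threshold:          # may need to topple again
            stack.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                grid[ni, nj] += 1            # grains at the edge are lost
                if grid[ni, nj] >= threshold:
                    stack.append((ni, nj))
    return size

rng = np.random.default_rng(7)
grid = np.zeros((20, 20), dtype=int)
sizes = [drive_and_relax(grid, tuple(rng.integers(0, 20, size=2)))
         for _ in range(5000)]
```

Once driven into the critical state, the avalanche-size distribution develops the power-law tail that makes SOC models attractive for flare statistics.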
Microscopic methods for the interactions between complex nuclei
International Nuclear Information System (INIS)
Ikeda, Kiyomi; Tamagaki, Ryozo; Saito, Sakae; Horiuchi, Hisashi; Tohsaki-Suzuki, Akihiro.
1978-01-01
Microscopic studies of composite-particle interactions performed in Japan are described in this paper. In chapter 1, a brief historical description of the work is presented. In chapter 2, the theory of the resonating group method (RGM) for describing microscopically the interaction between nuclei (clusters) is reviewed, and its formulation is presented. It is shown that the generator coordinate method (GCM) is useful for describing the interaction between shell model clusters, and that the kernels in the RGM are easily obtained from those of the GCM. The inter-cluster interaction can be well described by the orthogonality condition model (OCM). In chapter 3, the calculational procedures for the kernels of GCM, RGM and OCM and some properties related to their calculation are discussed. The GCM kernels for various types of systems are treated. The RGM kernels are evaluated by the integral transformation of GCM kernels. The problems related to the RGM norm kernel (RGM-NK) are discussed. The projection operator onto the Pauli-allowed state in OCM is obtained directly from the solution of the eigenvalue problem of RGM-NK. In chapter 4, the exchange kernels due to the antisymmetrization are derived analytically with the symbolical use of computer memory, taking the α + O 16 system as a typical example. New algorithms for analytically deriving the generator coordinate kernel (GCM kernel) are presented. In chapter 5, a precise generalization of the Kohn-Hulthen-Kato variational method for the scattering matrix is made for the purpose of microscopic study of reactions between complex nuclei with many coupled channels. (Kato, T.)
Formal methods applied to industrial complex systems implementation of the B method
Boulanger, Jean-Louis
2014-01-01
This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from
Complex of radioanalytical methods for radioecological study of STS
International Nuclear Information System (INIS)
Artemev, O.I.; Larin, V.N.; Ptitskaya, L.D.; Smagulova, G.S.
1998-01-01
Today the main task of the Institute of Radiation Safety and Ecology is the assessment of parameters of the radioecological situation in areas of nuclear testing on the territory of the former Semipalatinsk Test Site (STS). According to the diagram below, the radioecological study begins with field radiometry and environmental sampling followed by coordinate fixation. This work is performed by the staff of the Radioecology Laboratory equipped with state-of-the-art dosimetry and radiometry devices. All the devices annually undergo the State Check by the RK Gosstandard Centre in Almaty. Air samples are also collected for determination of radon content. Environmental samples are measured for total gamma activity in order to dispatch and discard samples with an insufficient level of homogenization. Samples are measured with a gamma radiometry installation containing a NaI(Tl) scintillation detector. The installation background is measured many times daily. The duration of a measurement depends on sample activity. Further, samples are measured with alpha and beta radiometers for total alpha and beta activity, which characterizes the radioactive contamination of the sampling locations. Apart from the Radiometry Laboratory, the analytical complex includes the Radiochemistry and Gamma Spectrometry Laboratories. The direct gamma spectral (instrumental) methods in most cases allow sufficiently rapid information to be obtained about the radionuclides present in a sample. The state-of-the-art equipment together with computer technology provides high quantitative and qualitative precision as well as high productivity. One of the advantages of the method is that samples after measurement maintain their state and can be used for repeated measurements or radiochemical re-analyses. The Gamma Spectrometry Laboratory has three state-of-the-art gamma spectral installations consisting of high resolution semiconductor detectors and equipped with
Petascale Many Body Methods for Complex Correlated Systems
Pruschke, Thomas
2012-02-01
Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.
Number theoretic methods in cryptography complexity lower bounds
Shparlinski, Igor
1999-01-01
The book introduces new techniques which imply rigorous lower bounds on the complexity of some number theoretic and cryptographic problems. These methods and techniques are based on bounds of character sums and numbers of solutions of some polynomial equations over finite fields and residue rings. It also contains a number of open problems and proposals for further research. We obtain several lower bounds, exponential in terms of log p, on the degrees and orders of polynomials, algebraic functions, Boolean functions, and linear recurring sequences coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as p^(1/2+ε)). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case of d = 2 is of special interest since it corresponds to the representation of the rightmost bit of the discrete logarithm and defines whether the argument is a quadratic...
Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity
Zhang, Jihui; Xu, Junqin
A supply chain is a special kind of complex network. Its complexity and uncertainty make it very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain's complexity and its efficiency, supply chain complexity management has become a major challenge of today's business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a 'Supply Chain Network Analysis' (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and subsequently investigated by network analysis. The results of this study show that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.
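The flow-based complexity measure can be illustrated with a plain Shannon entropy over normalized flow volumes; this is a simplification in the spirit of the paper's fuzzy entropy, and the network and its volumes below are hypothetical:

```python
import math

def flow_entropy(flows):
    """Shannon entropy (bits) of a flow network's edge volumes.

    `flows` maps (source, target) pairs to flow volumes. Higher entropy
    means flows are spread evenly over many links, i.e. a more complex,
    less ordered network; a single dominant link gives entropy near 0.
    """
    total = sum(flows.values())
    probs = [v / total for v in flows.values() if v > 0]
    return -sum(p * math.log2(p) for p in probs)

# Toy 3-sector supply network (volumes are made up for illustration).
chain = {("supplier", "factory"): 40.0,
         ("factory", "warehouse"): 40.0,
         ("warehouse", "retail"): 20.0}
print(round(flow_entropy(chain), 3))  # 1.522
```

A perfectly even two-link chain gives exactly 1 bit, while a network dominated by one link approaches 0, so the measure orders networks by how evenly activity is distributed.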
a Range Based Method for Complex Facade Modeling
Adami, A.; Fregonese, L.; Taffurelli, L.
2011-09-01
the complex architecture. From the point cloud we can extract a false colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facade by a grayscale height map. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics: the Displacement modifier makes it possible to simulate, on a planar surface, the original roughness of the object according to a grayscale map. The gray value is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike the bump map, which only simulates the effect, the displacement modifier really deforms the planar surface. In this way the 3D model can be used not only in a static representation, but also in dynamic animations or interactive applications. Setting up the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the gray value (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as building facades in city models or large-scale representations. It can also be used to represent particular effects, such as the deformation of walls, in a fully 3D way.
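The displacement step can be sketched numerically. The snippet below is a toy stand-in for the Displacement modifier described above, with made-up map values: each pixel of a grayscale map becomes a grid vertex whose height is the gray value scaled to a chosen relief depth:

```python
import numpy as np

def displace_plane(heightmap, scale=1.0):
    """Turn a grayscale height map into displaced 3D grid vertices.

    Each pixel becomes a vertex whose z offset is the gray value
    (0..255) mapped linearly to [0, scale]. Returns an (H, W, 3) array
    of (x, y, z) coordinates in pixel units.
    """
    h, w = heightmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    zs = heightmap.astype(float) / 255.0 * scale
    return np.stack([xs, ys, zs], axis=-1)

# 2x3 toy "facade" map: mid-gray plane with one protruding pixel.
gray = np.array([[128, 128, 255],
                 [128, 128, 128]], dtype=np.uint8)
verts = displace_plane(gray, scale=0.10)   # 10 cm maximum relief
print(verts[0, 2])  # bright-pixel vertex: x=2, y=0, z=0.1
```

Real pipelines would additionally triangulate the grid and carry over the facade's metric dimensions, as the paper describes for 3ds Max.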
A RANGE BASED METHOD FOR COMPLEX FACADE MODELING
Directory of Open Access Journals (Sweden)
A. Adami
2012-09-01
International Nuclear Information System (INIS)
Horner, D.A.; Colgan, J.; Martin, F.; McCurdy, C.W.; Pindzola, M.S.; Rescigno, T.N.
2004-01-01
Symmetrized complex amplitudes for the double photoionization of helium are computed by the time-dependent close-coupling and exterior complex scaling methods, and it is demonstrated that both methods are capable of direct calculation of these amplitudes. The results are found to be in excellent agreement with each other and in very good agreement with the results of other ab initio methods and with experiment
Application of Lattice Boltzmann Methods in Complex Mass Transfer Systems
Sun, Ning
The Lattice Boltzmann Method (LBM) is a novel computational fluid dynamics method that can easily handle complex and dynamic boundaries, couple local or interfacial interactions/reactions, and be easily parallelized, allowing for simulation of large systems. While most current LBM studies focus on fluid dynamics, the inherent power of this method makes it an ideal candidate for the study of mass transfer systems involving complex/dynamic microstructures and local reactions. In this thesis, LBM is introduced as an alternative computational method for the mesoscopic-scale study of electrochemical energy storage systems (Li-ion batteries (LIBs) and electric double layer capacitors (EDLCs)) and transdermal drug design. Based on traditional LBM, the following in-depth studies have been carried out: (1) For EDLCs, diffuse charge dynamics are simulated for both the charge and the discharge processes on 2D systems of complex random electrode geometries (purely random, random spheres and random fibers). The steric effect of concentrated solutions is considered by using modified Poisson-Nernst-Planck (MPNP) equations and compared with regular Poisson-Nernst-Planck (PNP) systems. The effects of electrode microstructure (electrode density, electrode filler morphology, filler size, etc.) on the net charge distribution and charge/discharge time are studied in detail. The influence of the applied potential during the discharge process is also discussed. (2) For the study of dendrite formation on the anode of LIBs, it is shown that the Lattice Boltzmann model can capture all the experimentally observed features of microstructure evolution at the anode, from smooth to mossy to dendritic. The mechanism of the dendrite formation process on the mesoscopic scale is discussed in detail and compared with the traditional Sand's time theories. It is shown that dendrite formation is closely related to the inhomogeneous reactivity at the electrode-electrolyte interface
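The basic lattice Boltzmann machinery (streaming followed by BGK collision) can be shown in a few lines. The sketch below solves pure diffusion on a 1D two-velocity (D1Q2) lattice; it is a generic textbook illustration, not code from the thesis:

```python
import numpy as np

def lbm_diffuse(rho0, tau=1.0, steps=100):
    """Minimal 1D lattice Boltzmann (D1Q2) solver for pure diffusion.

    Two populations stream left/right on a periodic lattice and relax
    toward the equilibrium feq = rho/2 (BGK collision). The effective
    diffusivity is D = (tau - 0.5)/2 in lattice units.
    """
    f_r = 0.5 * rho0.copy()          # right-moving population
    f_l = 0.5 * rho0.copy()          # left-moving population
    for _ in range(steps):
        f_r, f_l = np.roll(f_r, 1), np.roll(f_l, -1)   # streaming
        rho = f_r + f_l
        feq = 0.5 * rho
        f_r += (feq - f_r) / tau                       # BGK collision
        f_l += (feq - f_l) / tau
    return f_r + f_l

rho = np.zeros(64); rho[32] = 1.0   # initial point "concentration"
out = lbm_diffuse(rho, steps=200)
print(round(out.sum(), 6))          # total mass is conserved -> 1.0
```

The same stream-and-collide structure generalizes to 2D/3D velocity sets and to the coupled charge-transport problems studied in the thesis; locality of the update is what makes parallelization straightforward.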
Knowledge based method for solving complexity in design problems
Vermeulen, B.
2007-01-01
The process of designing aircraft systems is becoming more and more complex, due to an increasing number of requirements. Moreover, the knowledge on how to solve these complex design problems is becoming less readily available, because of a decrease in the availability of intellectual resources and reduced
Complex molecular orbital method: open-shell theory
International Nuclear Information System (INIS)
Hendekovic, J.
1976-01-01
A single-determinant open-shell formalism for complex molecular orbitals is developed. An iterative algorithm for solving the resulting secular equations is constructed. It is based on a sequence of similarity transformations and matrix triangularizations
Uranium complex recycling method of purifying uranium liquors
International Nuclear Information System (INIS)
Elikan, L.; Lyon, W.L.; Sundar, P.S.
1976-01-01
Uranium is separated from contaminating cations in an aqueous liquor containing uranyl ions. The liquor is mixed with sufficient recycled uranium complex to raise the weight ratio of uranium to said cations, preferably to at least about three. The liquor is then extracted with at least enough non-interfering, water-immiscible organic solvent to theoretically extract about all of the uranium in the liquor. The organic solvent contains a reagent which reacts with the uranyl ions to form a complex soluble in the solvent. If the aqueous liquor is acidic, the organic solvent is then scrubbed with water. The organic solvent is stripped with a solution containing at least enough ammonium carbonate to precipitate the uranium complex. A portion of the uranium complex is recycled and the remainder can be collected and calcined to produce U3O8 or UO2
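The recycle requirement described above is a simple mass balance. The helper below is a hypothetical illustration (names and numbers are invented, not from the patent) of how much complex must be recycled to reach the target ratio:

```python
def recycle_needed(u_feed, cations, ratio=3.0):
    """Mass of uranium complex to recycle (same units as the inputs).

    The recycled complex R must raise the uranium-to-contaminant weight
    ratio to at least `ratio`, i.e. (u_feed + R) / cations >= ratio.
    """
    return max(0.0, ratio * cations - u_feed)

# Liquor with 10 units of uranium and 5 units of contaminating cations:
print(recycle_needed(u_feed=10.0, cations=5.0))  # 5.0 units must be recycled
```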
A new entropy based method for computing software structural complexity
Roca, J L
2002-01-01
In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy of the random uniform response function associated with the so-called software characteristic function (SCF). The behavior of the SCF with different software structures and its relationship with the number of inherent errors is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters and algorithms that allow one to carry out this evaluation are also introduced. After this analytic phase follows the experimental phase, verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is in direct relation...
Efficacy of Two Different Instructional Methods Involving Complex Ecological Content
Randler, Christoph; Bogner, Franz X.
2009-01-01
Teaching and learning approaches in ecology very often follow linear conceptions of ecosystems. Empirical studies with an ecological focus consistent with existing syllabi and focusing on cognitive achievement are scarce. Consequently, we concentrated on a classroom unit that offers learning materials and highlights the existing complexity rather…
Markov Renewal Methods in Restart Problems in Complex Systems
DEFF Research Database (Denmark)
Asmussen, Søren; Lipsky, Lester; Thompson, Stephen
A task with ideal execution time L, such as the execution of a computer program or the transmission of a file on a data link, may fail, and the task then needs to be restarted. The task is handled by a complex system with features similar to the ones in classical reliability: failures may...
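The restart model can be probed by direct simulation. The sketch below is a generic Monte Carlo illustration, not the paper's Markov renewal analysis: with exponential failures at rate λ and complete restarts, the mean total completion time is known to be (e^(λL) − 1)/λ:

```python
import random

def restart_time(task_len, fail_rate, rng):
    """Total time to finish a task of length `task_len` when failures
    arrive at exponential rate `fail_rate` and each failure forces a
    complete restart (the classical RESTART model)."""
    total = 0.0
    while True:
        t_fail = rng.expovariate(fail_rate)
        if t_fail >= task_len:          # task completes before failing
            return total + task_len
        total += t_fail                 # wasted work, start over

rng = random.Random(1)
runs = [restart_time(1.0, 1.0, rng) for _ in range(20000)]
mean = sum(runs) / len(runs)
print(mean)   # close to (e - 1) ≈ 1.718 for L = 1, rate = 1
```

Heavier-than-exponential task-time tails under restart are exactly the phenomenon the Markov renewal machinery of the paper is designed to analyze exactly rather than by simulation.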
Studies of lanthanide complexes by a combination of spectroscopic methods
Czech Academy of Sciences Publication Activity Database
Krupová, Monika; Bouř, Petr; Andrushchenko, Valery
2015-01-01
Roč. 22, č. 1 (2015), s. 44 ISSN 1211-5894. [Discussions in Structural Molecular Biology. Annual Meeting of the Czech Society for Structural Biology /13./. 19.03.2015-21.03.2015, Nové Hrady] Institutional support: RVO:61388963 Keywords : lanthanide complexes * chirality sensing * chirality amplification * spectroscopy Subject RIV: CF - Physical ; Theoretical Chemistry
International Nuclear Information System (INIS)
Zhong, Z.
1985-01-01
A new approach to the solution of certain differential equations, the double complex function method, is developed, combining ordinary complex numbers and hyperbolic complex numbers. This method is applied to the theory of stationary axisymmetric Einstein equations in general relativity. A family of exact double solutions, double transformation groups, and n-soliton double solutions are obtained
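Hyperbolic ("double") complex numbers differ from ordinary complex numbers only in the sign rule j·j = +1; the minimal sketch below (an illustration of the number system, unrelated to the paper's notation) shows the multiplication rule and the invariant a² − b²:

```python
class Double:
    """Hyperbolic (split-complex) number a + b*j with j*j = +1."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __mul__(self, o):
        # (a + bj)(c + dj) = (ac + bd) + (ad + bc)j, since j*j = +1
        return Double(self.a * o.a + self.b * o.b,
                      self.a * o.b + self.b * o.a)
    def modulus2(self):
        return self.a ** 2 - self.b ** 2   # multiplicative invariant
    def __repr__(self):
        return f"{self.a} + {self.b}j"

j = Double(0, 1)
print(j * j)                    # 1 + 0j: j squares to +1, unlike i
x, y = Double(3, 2), Double(1, 4)
print((x * y).modulus2(), x.modulus2() * y.modulus2())  # moduli multiply
```

The indefinite invariant a² − b² (rather than a² + b²) is what makes these numbers natural companions to the hyperbolic structure of stationary axisymmetric field equations.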
Method for synthesizing metal bis(borano) hypophosphite complexes
Cordaro, Joseph G.
2013-06-18
The present invention describes the synthesis of a family of metal bis(borano) hypophosphite complexes. One procedure described in detail is the synthesis of complexes beginning from phosphorus trichloride and sodium borohydride. Temperature, solvent, concentration, and atmosphere are all critical to ensure product formation. In the case of sodium bis(borano) hypophosphite, hydrogen gas was evolved upon heating at temperatures above 150 °C. Included in this family of materials are the salts of the alkali metals Li, Na and K, and those of the alkaline earth metals Mg and Ca. Hydrogen storage materials are possible. In particular the lithium salt, Li[PH2(BH3)2], theoretically would contain nearly 12 wt% hydrogen. Analytical data for product characterization and thermal properties are given.
Determinantal method for complex angular momenta in potential scattering
Energy Technology Data Exchange (ETDEWEB)
Lee, B. W. [University of Pennsylvania, Philadelphia, PA (United States)
1963-01-15
In this paper I would like to describe a formulation of complex angular momenta in potential scattering based on the Lippmann-Schwinger integral equation rather than on the Schrödinger differential equation. This is intended as a preliminary to the paper by SAWYER on Regge poles and high energy limits in field theory (Bethe-Salpeter amplitudes), where the integral formulation is definitely more advantageous than the differential formulation.
Directed forgetting of complex pictures in an item method paradigm
Hauswald, Anne; Kissler, Johanna
2008-01-01
An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants, who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task in which all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect, that is, better recogni...
A new entropy based method for computing software structural complexity
International Nuclear Information System (INIS)
Roca, Jose L.
2002-01-01
In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy of the random uniform response function associated with the so-called software characteristic function (SCF). The behavior of the SCF with different software structures and its relationship with the number of inherent errors is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters and algorithms that allow one to carry out this evaluation are also introduced. After this analytic phase follows the experimental phase, verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is directly related to the number of inherent software errors and implies a basic hazard failure rate, so that a minimal structure assures a certain stability and maturity of the program. This metric can be used either to evaluate the product or the process of software development, as a development tool or for monitoring the stability and quality of the final product. (author)
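The idea that structural complexity grows with the entropy of a program's control-flow behaviour can be made concrete with a toy proxy. The path-count entropy below is an illustration only, not the paper's exact SCF construction:

```python
import math

def path_entropy(branch_counts):
    """Toy structural-complexity proxy: entropy (bits) of the uniform
    distribution over execution paths of a control-flow graph.

    `branch_counts` lists the number of alternatives at each decision
    point. A straight-line program ([]) has zero entropy; entropy grows
    with both the number and the width of the branches.
    """
    n_paths = math.prod(branch_counts) if branch_counts else 1
    return math.log2(n_paths)

print(path_entropy([]))         # straight-line code  -> 0.0
print(path_entropy([2, 2, 2]))  # three if/else forks -> 3.0
print(path_entropy([4]))        # one 4-way switch    -> 2.0
```

As in the paper's metric, the quantity is zero for a minimal structure and increases monotonically as the control-flow graph branches more richly.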
Energy Technology Data Exchange (ETDEWEB)
Avlyanov, Zh K; Kabanov, N M; Zezin, A B
1985-01-01
A polarographic investigation of the cadmium complex with polyacrylate anion in aqueous KCl solution is carried out. It is shown that the polarographic method allows one to determine equilibrium constants of polymer-metal complex (PMC) formation even in the case when current magnitudes are determined by the kinetic characteristics of the PMC dissociation reaction. The obtained stepwise complexation equilibrium constants yield a mean coordination number of the PAAxCd complex of approximately 1.5, which coincides with the value obtained by the potentiometric method.
International Nuclear Information System (INIS)
Avlyanov, Zh.K.; Kabanov, N.M.; Zezin, A.B.
1985-01-01
A polarographic investigation of the cadmium complex with polyacrylate anion in aqueous KCl solution is carried out. It is shown that the polarographic method allows one to determine equilibrium constants of polymer-metal complex (PMC) formation even in the case when current magnitudes are determined by the kinetic characteristics of the PMC dissociation reaction. The obtained stepwise complexation equilibrium constants yield a mean coordination number of the PAAxCd complex of approximately 1.5, which coincides with the value obtained by the potentiometric method
Level III Reliability methods feasible for complex structures
Waarts, P.H.; Boer, A. de
2001-01-01
The paper describes a comparison between three types of reliability methods: code-type level I as used by a designer, full level I, and a level III method. Two cases that are typical for civil engineering practice are considered: a cable-stayed bridge subjected to traffic load and the installation of a soil retaining sheet
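The distinction between the levels can be made concrete: a level III method evaluates the failure probability itself, for instance by crude Monte Carlo sampling of resistance and load. The distributions below are hypothetical, not the paper's cases:

```python
import random

def failure_probability(n=200_000, seed=7):
    """Level III reliability check by crude Monte Carlo: estimate
    P(R < S) for normally distributed resistance R and load S."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        r = rng.gauss(10.0, 1.0)   # resistance: mean 10, sd 1
        s = rng.gauss(6.0, 1.0)    # load: mean 6, sd 1
        fails += r < s
    return fails / n

pf = failure_probability()
print(pf)   # theory: P(Z > 4/sqrt(2)) ≈ 2.3e-3 for these parameters
```

A code-type level I check would instead compare factored characteristic values (e.g. R_k/γ_R ≥ γ_S·S_k) and never produce a probability, which is precisely the gap the paper's comparison quantifies.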
Adaptive calibration method with on-line growing complexity
Directory of Open Access Journals (Sweden)
Šika Z.
2011-12-01
This paper describes a modified variant of a kinematical calibration algorithm. In the beginning, a brief review of the calibration algorithm and a simple modification of it are given. As the described calibration modification uses some ideas from the Lolimot algorithm, that algorithm is also described and explained. The main topic of this paper is the synthesis of a Lolimot-based calibration that leads to an adaptive algorithm with on-line growing complexity. The paper contains a comparison of results on simple examples and a discussion. A note about future research topics is also included.
Method and program for complex calculation of heterogeneous reactor
International Nuclear Information System (INIS)
Kalashnikov, A.G.; Glebov, A.P.; Elovskaya, L.F.; Kuznetsova, L.I.
1988-01-01
An algorithm and the GITA program for complex one-dimensional calculation of a heterogeneous reactor, which permit calculations for the reactor and its cell to be conducted simultaneously using the same algorithm, are described. Multigroup macro cross sections for reactor zones in the thermal energy range are determined according to the technique for calculating a cell with a complicated structure, and then the continuous multigroup calculation of the reactor in the thermal energy range and in the range of neutron thermalization is made. The kinetic equation is solved using the Pi- and DSn-approximations
Distributed Cooperation Solution Method of Complex System Based on MAS
Weijin, Jiang; Yuhui, Xu
To adapt reconfigurable fault-diagnosis models to dynamic environments and to fully meet the needs of solving complex-system tasks, this paper introduces multi-agent technology into complicated fault diagnosis and studies an integrated intelligent control system. Based on a hierarchical structure of diagnostic decisions in modeling and a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented; the functions of the management, diagnosis and decision agents are analyzed; the organization and evolution of agents in the system are proposed; and the corresponding conflict-resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in the distributed domain.
Kernel methods and flexible inference for complex stochastic dynamics
Capobianco, Enrico
2008-07-01
Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.
Comparing methods of determining Legionella spp. in complex water matrices.
Díaz-Flores, Álvaro; Montero, Juan Carlos; Castro, Francisco Javier; Alejandres, Eva María; Bayón, Carmen; Solís, Inmaculada; Fernández-Lafuente, Roberto; Rodríguez, Guillermo
2015-04-29
Legionella testing conducted at environmental laboratories plays an essential role in assessing the risk of disease transmission associated with water systems. However, drawbacks of the culture-based methodology used for Legionella enumeration can have great impact on the results and their interpretation, which together can lead to underestimation of the actual risk. Up to 20% of the samples analysed by these laboratories produce inconclusive results, making effective risk management impossible. Overgrowth of competing microbiota was reported as an important factor for culture failure. For quantitative polymerase chain reaction (qPCR), the interpretation of results from environmental samples still remains a challenge. Inhibitors may cause up to 10% of inconclusive results. This study compared a quantitative method based on immunomagnetic separation (IMS method) with culture and qPCR, as a new approach to routine monitoring of Legionella. First, pilot studies evaluated the recovery and detectability of Legionella spp. using the IMS method in the presence of microbiota and biocides. The IMS method results were not affected by microbiota, while culture counts were significantly reduced (1.4 log) or negative in the same samples. Damage to viable Legionella by biocides was detected by the IMS method. Secondly, a total of 65 water samples were assayed by all three techniques (culture, qPCR and the IMS method). Of these, 27 (41.5%) were recorded as positive by at least one test. Legionella spp. was detected by culture in 7 (25.9%) of the 27 samples. Eighteen (66.7%) of the 27 samples were positive by the IMS method, thirteen of them with counts below 10^3 colony-forming units per liter (CFU l^-1); six presented interfering microbiota and three presented PCR inhibition. Of the 65 water samples, 24 presented interfering microbiota by culture and 8 presented partial or complete inhibition of the PCR reaction. So the rate of inconclusive results of culture and PCR was 36
Computational RNA secondary structure design: empirical complexity and improved methods
Directory of Open Access Journals (Sweden)
Condon Anne
2007-01-01
Background: We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to better understand the factors that make RNA structures hard to design for existing high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results: To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and location of primary structure constraints and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion: Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which their performance can be further improved.
Dong, Yadong; Sun, Yongqi; Qin, Chao
2018-01-01
The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
Methods of Complex Data Processing from Technical Means of Monitoring
Directory of Open Access Journals (Sweden)
Serhii Tymchuk
2017-03-01
The problem of processing information from different types of monitoring equipment is examined. As a possible solution, the use of generalized information-processing methods is proposed, based on clustering of territorially combined monitoring information sources and on a frame model of the knowledge base for identification of monitored objects. The clustering methods were formed on the basis of the Lance-Williams hierarchical agglomerative procedure using Ward's metric. The frame model of the knowledge base was built using object-oriented modeling tools.
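The Lance-Williams recurrence with Ward's coefficients can be sketched in a few lines of plain Python; the points below are toy data, not the monitoring sources from the paper:

```python
def ward_merge_order(points):
    """Agglomerative clustering via the Lance-Williams recurrence with
    Ward's coefficients, on squared Euclidean distances. Returns the
    sequence of merged clusters as frozensets of point indices."""
    n = len(points)
    clusters = {i: frozenset([i]) for i in range(n)}
    size = {i: 1 for i in range(n)}
    d = {(i, k): sum((a - b) ** 2 for a, b in zip(points[i], points[k]))
         for i in range(n) for k in range(i + 1, n)}
    def dist(a, b):
        return d[(min(a, b), max(a, b))]
    merges, nxt = [], n
    while len(clusters) > 1:
        i, j = min(((a, b) for a in clusters for b in clusters if a < b),
                   key=lambda p: dist(*p))          # closest pair
        ni, nj = size[i], size[j]
        for k in clusters:
            if k in (i, j):
                continue
            nk = size[k]
            # Lance-Williams update with Ward's coefficients
            d[(min(nxt, k), max(nxt, k))] = (
                (ni + nk) * dist(i, k) + (nj + nk) * dist(j, k)
                - nk * dist(i, j)) / (ni + nj + nk)
        clusters[nxt] = clusters.pop(i) | clusters.pop(j)
        size[nxt] = ni + nj
        merges.append(clusters[nxt])
        nxt += 1
    return merges

pts = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
print(ward_merge_order(pts)[0])   # one of the two nearby pairs merges first
```

Each merge minimizes the increase in within-cluster variance, which is what distinguishes Ward's metric from single- or complete-linkage coefficients in the same recurrence.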
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
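A minimal version of such a Monte Carlo power analysis for a simple X → M → Y mediation model can be sketched as follows; the joint-significance test is used here for brevity, and the effect sizes and sample size are illustrative only, not values from the article:

```python
import numpy as np

def slope_z(design, y):
    """z statistic of the first predictor's OLS coefficient in y ~ design."""
    X = np.column_stack([design, np.ones(len(y))])   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0])

def mediation_power(a=0.3, b=0.3, n=100, reps=500, seed=0):
    """Monte Carlo power of the joint-significance test for the
    indirect effect a*b in X -> M -> Y (no direct effect, unit
    error variances; all settings are hypothetical)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        za = slope_z(x[:, None], m)                   # a path
        zb = slope_z(np.column_stack([m, x]), y)      # b path, X partialled out
        hits += (abs(za) > 1.96) and (abs(zb) > 1.96)
    return hits / reps

print(mediation_power())   # roughly 0.7 for these illustrative settings
```

The same simulate-fit-test loop extends directly to the latent variable and growth curve models discussed in the article, with the model-fitting step swapped out.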
Unsteady panel method for complex configurations including wake modeling
CSIR Research Space (South Africa)
Van Zyl, Lourens H
2008-01-01
Full Text Available Implementations of the DLM are, however, not very versatile in terms of the geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...
Method for VAWT Placement on a Complex Building Structure
2013-06-01
A simulation of Building 216, which is the planned site of the cooling system and of the wind turbines used to power it, was performed. A wind-flow analysis found that optimum placement of the wind turbines is at the front of the south end of the building. The method for placing the wind turbines is
Laser absorption spectroscopy - Method for monitoring complex trace gas mixtures
Green, B. D.; Steinfeld, J. I.
1976-01-01
A frequency stabilized CO2 laser was used for accurate determinations of the absorption coefficients of various gases in the wavelength region from 9 to 11 microns. The gases investigated were representative of the types of contaminants expected to build up in recycled atmospheres. These absorption coefficients were then used in determining the presence and amount of the gases in prepared mixtures. The effect of interferences on the minimum detectable concentration of the gases was measured. The accuracies of various methods of solution were also evaluated.
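Determining mixture composition from per-line absorption coefficients is, at heart, a linear least-squares problem via the Beer-Lambert law. The coefficients, path length, and concentrations below are invented for illustration and do not reproduce the article's measured values:

```python
import numpy as np

# Rows: CO2-laser lines; columns: hypothetical contaminant gases.
# Entries are made-up absorption coefficients (cm^-1 atm^-1).
A = np.array([[12.0, 0.5, 3.0],
              [ 0.8, 9.0, 1.5],
              [ 2.0, 1.0, 7.0],
              [ 4.0, 0.2, 0.1]])

c_true = np.array([0.02, 0.05, 0.01])   # partial pressures (atm), invented
path_cm = 100.0
absorbance = A @ c_true * path_cm       # Beer-Lambert: A_i = sum_j alpha_ij * c_j * L

# Recover concentrations from the (overdetermined) measurements.
c_est, *_ = np.linalg.lstsq(A * path_cm, absorbance, rcond=None)
print(np.allclose(c_est, c_true))  # True
```

With real data the system is noisy, and the choice of solution method (weighted least squares, selected-line subsets) governs the minimum detectable concentration in the presence of interferences, which is what the abstract's last two sentences evaluate.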
International Nuclear Information System (INIS)
Zhang Huiqun
2009-01-01
By using some exact solutions of an auxiliary ordinary differential equation, a direct algebraic method is described to construct the exact complex solutions for nonlinear partial differential equations. The method is implemented for the NLS equation, a new Hamiltonian amplitude equation, the coupled Schrodinger-KdV equations and the Hirota-Maccari equations. New exact complex solutions are obtained.
Modelling of complex heat transfer systems by the coupling method
Energy Technology Data Exchange (ETDEWEB)
Bacot, P.; Bonfils, R.; Neveu, A.; Ribuot, J. (Centre d' Energetique de l' Ecole des Mines de Paris, 75 (France))
1985-04-01
The coupling method proposed here is designed to reduce the size of the matrices which appear in the modelling of heat transfer systems. It consists in isolating the elements that can be modelled separately and, among the input variables of a component, identifying those which couple it to another component. By grouping these variables, one can identify a so-called coupling matrix of reduced size and relate it to the overall system. This matrix allows the calculation of the coupling temperatures as a function of the external stresses and of the state of the overall system at the previous instant. The internal temperatures of the components are then determined from the previous ones. Two examples of applications are presented, one concerning a dwelling unit and the second a solar water heater.
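Algebraically, isolating the coupling variables and forming a reduced coupling matrix can be read as a Schur-complement elimination of the internal unknowns. The sketch below uses a random, well-conditioned block system rather than a real heat-transfer model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Block system: internal unknowns u_i of the components, coupling unknowns u_c.
# [A_ii  A_ic] [u_i]   [f_i]
# [A_ci  A_cc] [u_c] = [f_c]
n_i, n_c = 8, 2
A = rng.normal(size=(n_i + n_c, n_i + n_c)) + (n_i + n_c) * np.eye(n_i + n_c)
f = rng.normal(size=n_i + n_c)

Aii, Aic = A[:n_i, :n_i], A[:n_i, n_i:]
Aci, Acc = A[n_i:, :n_i], A[n_i:, n_i:]
fi, fc = f[:n_i], f[n_i:]

# Reduced coupling matrix (Schur complement) and coupling right-hand side.
S = Acc - Aci @ np.linalg.solve(Aii, Aic)
g = fc - Aci @ np.linalg.solve(Aii, fi)
u_c = np.linalg.solve(S, g)                 # "coupling temperatures"
u_i = np.linalg.solve(Aii, fi - Aic @ u_c)  # internal unknowns recovered afterwards

# Matches the direct solve of the full system.
u_full = np.linalg.solve(A, f)
print(np.allclose(np.concatenate([u_i, u_c]), u_full))  # True
```

The point of the method is that `S` is only `n_c` by `n_c`: the expensive internal blocks are factored once per component and the time-stepping of the coupled system works on the small coupling matrix.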
Developing integrated methods to address complex resource and environmental issues
Smith, Kathleen S.; Phillips, Jeffrey D.; McCafferty, Anne E.; Clark, Roger N.
2016-02-08
Introduction. This circular provides an overview of selected activities that were conducted within the U.S. Geological Survey (USGS) Integrated Methods Development Project, an interdisciplinary project designed to develop new tools and conduct innovative research requiring integration of geologic, geophysical, geochemical, and remote-sensing expertise. The project was supported by the USGS Mineral Resources Program, and its products and acquired capabilities have broad applications to missions throughout the USGS and beyond. In addressing challenges associated with understanding the location, quantity, and quality of mineral resources, and in investigating the potential environmental consequences of resource development, a number of field and laboratory capabilities and interpretative methodologies evolved from the project that have applications to traditional resource studies as well as to studies related to ecosystem health, human health, disaster and hazard assessment, and planetary science. New or improved tools and research findings developed within the project have been applied to other projects and activities. Specifically, geophysical equipment and techniques have been applied to a variety of traditional and nontraditional mineral- and energy-resource studies, military applications, environmental investigations, and applied research activities that involve climate change, mapping techniques, and monitoring capabilities. Diverse applied geochemistry activities provide a process-level understanding of the mobility, chemical speciation, and bioavailability of elements, particularly metals and metalloids, in a variety of environmental settings. Imaging spectroscopy capabilities maintained and developed within the project have been applied to traditional resource studies as well as to studies related to ecosystem health, human health, disaster assessment, and planetary science. Brief descriptions of capabilities and laboratory facilities and summaries of some
Thinking Inside the Box: Simple Methods to Evaluate Complex Treatments
Directory of Open Access Journals (Sweden)
J. Michael Menke
2011-10-01
Full Text Available We risk ignoring cheaper and safer medical treatments because they cannot be patented, lack profit potential, require too much patient-contact time, or do not have scientific results. Novel medical treatments may be difficult to evaluate for a variety of reasons, such as patient selection bias, the effect of the package of care, or the lack of identification of the active elements of treatment. Whole Systems Research (WSR) is an approach designed to assess the performance of complete packages of clinical management. While the WSR method is compelling, there is no standard procedure for WSR, and its implementation may be intimidating. The truth is that WSR methodological tools are neither new nor complicated. There are two sequential steps, or boxes, that guide WSR methodology: establishing system predictability, followed by an audit of system element effectiveness. We describe the implementation of WSR with particular attention to threats to validity (Shadish, Cook, & Campbell, 2002; Shadish & Heinsman, 1997). DOI: 10.2458/azu_jmmss.v2i1.12365
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
Freund, Roland
1989-01-01
We consider conjugate gradient type methods for the solution of large sparse linear system Ax equals b with complex symmetric coefficient matrices A equals A(T). Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
International Nuclear Information System (INIS)
Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David
2006-01-01
We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
Using mixed methods to develop and evaluate complex interventions in palliative care research.
Farquhar, Morag C; Ewing, Gail; Booth, Sara
2011-12-01
There is increasing interest in combining qualitative and quantitative research methods to provide comprehensiveness and greater knowledge yield. Mixed methods are valuable in the development and evaluation of complex interventions. They are therefore particularly valuable in palliative care research, where the majority of interventions are complex and the identification of outcomes particularly challenging. This paper aims to introduce the role of mixed methods in the development and evaluation of complex interventions in palliative care, and how they may be used in palliative care research. The paper defines mixed methods and outlines why and how mixed methods are used to develop and evaluate complex interventions, with a pragmatic focus on design and data collection issues and data analysis. Useful texts are signposted and illustrative examples provided of mixed-method studies in palliative care, including a detailed worked example of the development and evaluation of a complex intervention in palliative care for breathlessness. Key challenges to conducting mixed methods in palliative care research are identified in relation to data collection, data integration in analysis, costs and dissemination, and how these might be addressed. The development and evaluation of complex interventions in palliative care benefit from the application of mixed methods. Mixed methods enable better understanding of whether and how an intervention works (or does not work) and inform the design of subsequent studies. However, they can be challenging: mixed-method studies in palliative care will benefit from working with agreed protocols, multidisciplinary teams and engaging staff with appropriate skill sets.
International Nuclear Information System (INIS)
Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.
1999-01-01
A spectrophotometric method for the determination of stability constants making use of Job's curves has been developed. Using this method, stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For the sake of comparison, the stability constants were also determined using Harvey and Manning's method. The values obtained by the two methods agree well. This new method has been named Purohit's method. (author)
Complex data modeling and computationally intensive methods for estimation and prediction
Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics
2015-01-01
The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...
Purification of 2-oxo acid dehydrogenase multienzyme complexes from ox heart by a new method.
Stanley, C J; Perham, R N
1980-01-01
A new method is described that allows the parallel purification of the pyruvate dehydrogenase and 2-oxoglutarate dehydrogenase multienzyme complexes from ox heart without the need for prior isolation of mitochondria. All the assayable activity of the 2-oxo acid dehydrogenase complexes in the disrupted tissue is made soluble by the inclusion of non-ionic detergents such as Triton X-100 or Tween-80 in the buffer used for the initial extraction of the enzyme complexes. The yields of the pyruvate...
BRAND program complex for neutron-physical experiment simulation by the Monte-Carlo method
International Nuclear Information System (INIS)
Androsenko, A.A.; Androsenko, P.A.
1984-01-01
Possibilities of the BRAND program complex for simulating neutron and γ-radiation transport by the Monte Carlo method are briefly described. The complex includes the following modules: a geometric module, a source module, a detector module, and modules for simulating the particle direction vector after interaction and the free path. The complex is written in the FORTRAN language and implemented on the BESM-6 computer
Das, Subhraseema; Subuddhi, Usharani
2015-11-01
Inclusion complexes of diclofenac sodium (DS) with β-cyclodextrin (β-CD) were prepared in order to improve the solubility, dissolution and oral bioavailability of the poorly water soluble drug. The effect of the method of preparation of the DS/β-CD inclusion complexes (ICs) was investigated. The ICs were prepared by microwave irradiation and also by the conventional methods of kneading, co-precipitation and freeze drying. Though the freeze-drying method is usually regarded as the gold standard among the conventional methods, its long processing time limits its utility. Microwave irradiation accomplishes the process in a very short span of time and is a more environmentally benign method. Better efficacy of the microwave inclusion product (MW) was observed in terms of the dissolution, antimicrobial activity and antibiofilm properties of the drug. Thus, microwave irradiation can be utilized as an improved, time-saving and cost-effective method for the generation of DS/β-CD inclusion complexes.
Directory of Open Access Journals (Sweden)
Savić Ivan
2009-01-01
Full Text Available The aim of this work was to optimize a GFC method for the analysis of bioactive metal (Cu, Co and Fe) complexes with oligosaccharides (dextran and pullulan). The bioactive metal complexes with oligosaccharides were synthesized by an original procedure. GFC was used to study the molecular weight distribution and polymerization degree of the oligosaccharides and bioactive metal complexes. The metal binding in the complexes depends on the ligand polymerization degree and the presence of OH groups in the coordination sphere of the central metal ion. The interactions between oligosaccharides and metal ions are very important in veterinary medicine, agriculture, pharmacy and medicine.
International Nuclear Information System (INIS)
Dobrynina, N.A.
1992-01-01
The position of bioinorganic chemistry in the system of the natural sciences, as well as the relations between bioinorganic and biocoordination chemistry, were considered. The content of chemical elements in the geosphere and biosphere was analyzed. Characteristic features of biometal complexation with bioligands were pointed out. By way of example, complex equilibria in solution were studied by the method of pH-metric titration combined with mathematical simulation. The advantages of combining these methods when studying biosystems were emphasized
The relationship between the Wigner-Weyl kinetic formalism and the complex geometrical optics method
Maj, Omar
2004-01-01
The relationship between two different asymptotic techniques developed in order to describe the propagation of waves beyond the standard geometrical optics approximation, namely, the Wigner-Weyl kinetic formalism and the complex geometrical optics method, is addressed. More specifically, a solution of the wave kinetic equation, relevant to the Wigner-Weyl formalism, is obtained which yields the same wavefield intensity as the complex geometrical optics method. Such a relationship is also disc...
Energy Technology Data Exchange (ETDEWEB)
Al Mouhamed, Mayez
1977-09-15
In a number of complex physical systems the accessible signals are often characterized by random fluctuations about a mean value. The fluctuations (signature) often transmit information about the state of the system that the mean value cannot predict. This study is undertaken to elaborate statistical methods of anomaly detection on the basis of signature analysis of the noise inherent in the process. The algorithm presented first learns the characteristics of normal operation of a complex process. Then it detects small deviations from the normal behavior. The algorithm can be implemented in a medium-sized computer for on-line application. (author)
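A minimal sketch of the two phases described — learning normal noise signatures, then flagging small deviations — can use a Mahalanobis distance over simple signature features. The feature choice, data, and scores below are illustrative assumptions, not the author's algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

def features(signal):
    """Toy 'signature' of a noise record: variance and lag-1 autocorrelation."""
    s = signal - signal.mean()
    var = s @ s / len(s)
    ac1 = (s[:-1] @ s[1:] / (len(s) - 1)) / var
    return np.array([var, ac1])

# Learning phase: signatures of normal operation (white-ish noise records).
train = np.array([features(rng.normal(size=500)) for _ in range(200)])
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train.T))

def anomaly(signal):
    """Mahalanobis distance of a record's signature from normal operation."""
    d = features(signal) - mu
    return float(d @ cov_inv @ d)

# Detection phase: an AR(1)-correlated record deviates from the learned signature.
normal_score = anomaly(rng.normal(size=500))
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.9 * x[t - 1] + rng.normal()
anomaly_score = anomaly(x)
print(anomaly_score > normal_score)  # the correlated record scores far higher
```

A threshold on the score (e.g. a chi-squared quantile under the learned model) would turn this into an on-line detector.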
Random walk-based similarity measure method for patterns in complex object
Directory of Open Access Journals (Sweden)
Liu Shihu
2017-04-01
Full Text Available This paper discusses the similarity of patterns in complex objects. A complex object is composed both of attribute information of patterns and of relational information between patterns. Bearing in mind the specificity of complex objects, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed; Algorithms 1 and 2 show the calculation procedure. This method makes full use of both the attribute information and the relational information. Finally, a synthetic example shows that the proposed similarity measurement method is valid.
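A toy version of the idea — random-walk reachability supplying the relational similarity, then integration with an attribute similarity — might look as follows (the graph, feature vectors, and weights are all invented, not the paper's Algorithms 1 and 2):

```python
import numpy as np

# Adjacency of a small relational structure between 5 patterns (invented).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)       # one-step transition matrix

# Discounted t-step reachability: shorter walks between patterns count more.
c, t_max = 0.5, 10
R = sum((c ** t) * np.linalg.matrix_power(P, t) for t in range(1, t_max + 1))
rel_sim = (R + R.T) / 2                    # symmetrized relational similarity

# Attribute similarity (cosine) from invented feature vectors, then integrate.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.0, 1.0]])
norms = np.linalg.norm(X, axis=1)
att_sim = X @ X.T / (norms[:, None] * norms[None, :])
w = 0.5
sim = w * att_sim + (1 - w) * rel_sim      # integrated similarity

print(sim[0, 1] > sim[0, 4])  # patterns 0 and 1 are closer than 0 and 4
```

The discount factor `c` and the weight `w` between attribute and relational parts are the natural tuning knobs of such a combined measure.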
Energy Technology Data Exchange (ETDEWEB)
Clemens, M.; Weiland, T. [Technische Hochschule Darmstadt (Germany)
1996-12-31
In the field of computational electrodynamics the discretization of Maxwell's equations using the Finite Integration Theory (FIT) yields very large, sparse, complex symmetric linear systems of equations. For this class of complex non-Hermitian systems a number of conjugate gradient-type algorithms is considered. The complex version of the biconjugate gradient (BiCG) method by Jacobs can be extended to a whole class of methods for complex-symmetric algorithms SCBiCG(T, n), which only require one matrix vector multiplication per iteration step. In this class the well-known conjugate orthogonal conjugate gradient (COCG) method for complex-symmetric systems corresponds to the case n = 0. The case n = 1 yields the BiCGCR method which corresponds to the conjugate residual algorithm for the real-valued case. These methods in combination with a minimal residual smoothing process are applied separately to practical 3D electro-quasistatical and eddy-current problems in electrodynamics. The practical performance of the SCBiCG methods is compared with other methods such as QMR and TFQMR.
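The COCG method (the n = 0 member of the SCBiCG class mentioned above) differs from ordinary CG only in using the unconjugated bilinear form r^T r in place of the Hermitian inner product. A compact sketch on a small Helmholtz-like complex symmetric test matrix (the matrix itself is illustrative, not an FIT discretization):

```python
import numpy as np

def cocg(A, b, tol=1e-10, maxiter=500):
    """Conjugate orthogonal CG for complex symmetric A (A == A.T, not Hermitian)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r                      # unconjugated bilinear form, no complex conjugate
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rr_new = r @ r
        beta = rr_new / rr
        rr = rr_new
        p = r + beta * p
    return x

# Tridiagonal complex symmetric system (discrete Helmholtz-like operator).
n = 20
A = ((2.0 + 0.5j) * np.eye(n)
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
assert np.allclose(A, A.T)          # complex symmetric, but not Hermitian
b = np.ones(n, dtype=complex)
x = cocg(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)  # True
```

Each step needs one matrix-vector product, which is precisely the property the abstract highlights for the SCBiCG(T, n) class.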
A path method for finding energy barriers and minimum energy paths in complex micromagnetic systems
International Nuclear Information System (INIS)
Dittrich, R.; Schrefl, T.; Suess, D.; Scholz, W.; Forster, H.; Fidler, J.
2002-01-01
Minimum energy paths and energy barriers are calculated for complex micromagnetic systems. The method is based on the nudged elastic band method and uses finite-element techniques to represent granular structures. The method was found to be robust and fast for simple test problems as well as for large systems such as patterned granular media. The method is used to estimate the energy barriers in CoCr-based perpendicular recording media
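The nudged elastic band idea — relax interior images under the perpendicular component of the true force plus the parallel component of a spring force — can be sketched on a toy 2-D double-well potential. The potential, spring constant, and step size are invented; this is not the paper's finite-element micromagnetic implementation:

```python
import numpy as np

def V(p):                       # toy double-well: minima at (±1, 0), saddle at origin
    x, y = p
    return (x**2 - 1)**2 + 2.0 * y**2

def gradV(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1), 4.0 * y])

n_img = 11
band = np.linspace([-1.0, 0.0], [1.0, 0.0], n_img)          # endpoints = the minima
band[:, 1] = 0.5 * np.sin(np.linspace(0.0, np.pi, n_img))   # start off the true path

k, step = 5.0, 0.02
for _ in range(2000):
    for i in range(1, n_img - 1):
        tau = band[i + 1] - band[i - 1]
        tau /= np.linalg.norm(tau)                          # local tangent estimate
        g = gradV(band[i])
        g_perp = g - (g @ tau) * tau                        # true force, nudged
        spring = k * ((band[i + 1] - band[i]) - (band[i] - band[i - 1]))
        band[i] += step * (-g_perp + (spring @ tau) * tau)  # spring acts along band

barrier = max(V(p) for p in band) - V(band[0])
print(round(barrier, 2))  # close to the true barrier height of 1.0
```

The converged band traces the minimum energy path through the saddle, and the highest image estimates the energy barrier, exactly the two quantities the abstract extracts for recording media.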
A simple method for determining polymeric IgA-containing immune complexes.
Sancho, J; Egido, J; González, E
1983-06-10
A simplified assay to measure polymeric IgA-immune complexes in biological fluids is described. The assay is based upon the specific binding of a secretory component for polymeric IgA. In the first step, multimeric IgA (monomeric and polymeric) immune complexes are determined by the standard Raji cell assay. Secondly, labeled secretory component added to the assay is bound to polymeric IgA-immune complexes previously fixed to Raji cells, but not to monomeric IgA immune complexes. To avoid false positives due to possible complement-fixing IgM immune complexes, prior IgM immunoadsorption is performed. Using anti-IgM antiserum coupled to CNBr-activated Sepharose 4B this step is not time-consuming. Polymeric IgA has a low affinity constant and binds weakly to Raji cells, as Scatchard analysis of the data shows. Thus, polymeric IgA immune complexes do not bind to Raji cells directly through Fc receptors, but through complement breakdown products, as with IgG-immune complexes. Using this method, we have been successful in detecting specific polymeric-IgA immune complexes in patients with IgA nephropathy (Berger's disease) and alcoholic liver disease, as well as in normal subjects after meals of high protein content. This new, simple, rapid and reproducible assay might help to study the physiopathological role of polymeric IgA immune complexes in humans and animals.
A new high-throughput LC-MS method for the analysis of complex fructan mixtures
DEFF Research Database (Denmark)
Verspreet, Joran; Hansen, Anders Holmgaard; Dornez, Emmie
2014-01-01
In this paper, a new liquid chromatography-mass spectrometry (LC-MS) method for the analysis of complex fructan mixtures is presented. In this method, columns with a trifunctional C18 alkyl stationary phase (T3) were used and their performance compared with that of a porous graphitized carbon (PGC...
Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks
Renkewitz, Frank; Jahn, Georg
2012-01-01
We validate an eye-tracking method applicable for studying memory processes in complex cognitive tasks. The method is tested with a task on probabilistic inferences from memory. It provides valuable data on the time course of processing, thus clarifying previous results on heuristic probabilistic inference. Participants learned cue values of…
Gilstrap, Donald L.
2013-01-01
In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…
Richards, William D., Jr.
Previous methods for determining the communication structure of organizations work well for small or simple organizations, but are either inadequate or unwieldy for use with large complex organizations. An improved method uses a number of different measures and a series of successive approximations to order the communication matrix such that…
On a computational method for modelling complex ecosystems by superposition procedure
International Nuclear Information System (INIS)
He Shanyu.
1986-12-01
In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref
Evaluating the response of complex systems to environmental threats: the Σ II method
International Nuclear Information System (INIS)
Corynen, G.C.
1983-05-01
The Σ II method was developed to model and compute the probabilistic performance of systems that operate in a threatening environment. Although we emphasize the vulnerability of complex systems to earthquakes and to electromagnetic threats such as EMP (electromagnetic pulse), the method applies in general to most large-scale systems or networks that are embedded in a potentially harmful environment. Other methods exist for obtaining system vulnerability, but their complexity increases exponentially as the size of systems is increased. The complexity of the Σ II method is polynomial, and accurate solutions are now possible for problems for which current methods require the use of rough statistical bounds, confidence statements, and other approximations. For super-large problems, where the costs of precise answers may be prohibitive, a desired accuracy can be specified, and the Σ II algorithms will halt when that accuracy has been reached. We summarize the results of a theoretical complexity analysis - which is reported elsewhere - and validate the theory with computer experiments conducted both on worst-case academic problems and on more reasonable problems occurring in practice. Finally, we compare our method with the exact methods of Abraham and Nakazawa, and with current bounding methods, and we demonstrate the computational efficiency and accuracy of Σ II
Abdi Tabari, Mahmoud; Ivey, Toni A.
2015-01-01
This paper provides a methodological review of previous research on cognitive task complexity, since the term emerged in 1995, and investigates why much research was more quantitative rather than qualitative. Moreover, it sheds light onto the studies which used the mixed-methods approach and determines which version of the mixed-methods designs…
Low-complexity video encoding method for wireless image transmission in capsule endoscope.
Takizawa, Kenichi; Hamaguchi, Kiyoshi
2010-01-01
This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which correlated information available at the receiver is exploited as side information for decoding. Complex processes in video encoding, such as motion-vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation for a low-density parity-check (LDPC) coding method in the AWGN channel.
Directory of Open Access Journals (Sweden)
Cohen Eyal
2012-10-01
Full Text Available Abstract Background Primary care medical homes may improve health outcomes for children with special healthcare needs (CSHCN) by improving care coordination. However, community-based primary care practices may be challenged to deliver comprehensive care coordination to complex subsets of CSHCN such as children with medical complexity (CMC). Linking a tertiary care center with the community may achieve cost-effective and high-quality care for CMC. The objective of this study was to evaluate the outcomes of community-based complex care clinics integrated with a tertiary care center. Methods A before- and after-intervention study design with mixed (quantitative/qualitative) methods was utilized. Clinics at two community hospitals distant from tertiary care were staffed by local community pediatricians with the tertiary care center nurse practitioner and linked with primary care providers. Eighty-one children with underlying chronic conditions, fragility, requirement for high-intensity care and/or technology assistance, and involvement of multiple providers participated. Main outcome measures included health care utilization and expenditures, parent reports of parent and child quality of life [QOL (SF-36®), CPCHILD©, PedsQL™], and family-centered care (MPOC-20®). Comparisons were made in equal (up to 1 year) pre- and post-periods, supplemented by qualitative perspectives of families and pediatricians. Results Total health care system costs decreased from a median (IQR) of $244 (981) per patient per month (PPPM) pre-enrolment to $131 (355) PPPM post-enrolment (p=.007), driven primarily by fewer inpatient days in the tertiary care center (p=.006). Parents reported decreased out-of-pocket expenses. Some CPCHILD© domains improved [Health Standardization Section (p=.04); Comfort and Emotions (p=.03)], while the total CPCHILD© score decreased between baseline and 1 year (p=.003). Parents and providers reported the ability to receive care close to home as a key benefit. Conclusions Complex
Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder
Directory of Open Access Journals (Sweden)
He Yan
2017-01-01
Full Text Available In engineering design, the basic complex method has insufficient global search ability for nonlinear optimization problems, so a complex method mixed with particle swarm optimization (PSO) is presented in this paper: the optimal particle, evaluated from the particle swarm fitness function, displaces a vertex of the complex so as to realize the principle of maximizing the distance to the complex center. This method is applied to the constrained optimization design of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-sectional area of the box girder is taken as the objective function and its four size parameters as design variables, with girder mechanical performance, manufacturing process, boundary sizes and other requirements as constraint conditions. Then the complex method mixed with PSO is used to solve the constrained optimization problem of the crane box girder, and its optimal results achieve the goals of lightweight design and reduced manufacturing cost. Practical engineering calculation and comparative analysis with the basic complex method show that the method is reliable, practical and efficient.
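The basic complex (Box) method underlying the hybrid can be sketched as repeated reflection of the worst vertex through the centroid of the remaining vertices. The PSO coupling is omitted here, and the test function, bounds, and parameters are invented:

```python
import numpy as np

def box_complex(f, lo, hi, n_pts=8, alpha=1.3, iters=200, seed=0):
    """Minimal Box complex method sketch for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    pts = lo + rng.random((n_pts, len(lo))) * (hi - lo)   # random initial complex
    for _ in range(iters):
        vals = np.array([f(p) for p in pts])
        w = int(np.argmax(vals))                          # worst vertex
        centroid = (pts.sum(axis=0) - pts[w]) / (n_pts - 1)
        # over-reflect the worst vertex through the centroid, clipped to bounds
        new = np.clip(centroid + alpha * (centroid - pts[w]), lo, hi)
        # halve the step toward the centroid until the point improves
        while f(new) >= vals[w]:
            new = np.clip((new + centroid) / 2, lo, hi)
            if np.allclose(new, centroid):
                break
        pts[w] = new
    return pts[int(np.argmin([f(p) for p in pts]))]

# Constrained quadratic test: minimum of (x-2)^2 + (y-1)^2 inside [0, 5]^2.
best = box_complex(lambda p: (p[0] - 2) ** 2 + (p[1] - 1) ** 2,
                   np.zeros(2), np.full(2, 5.0))
print(np.round(best, 2))  # near the constrained optimum (2, 1)
```

The paper's hybrid replaces the purely geometric reflection with a PSO-evaluated particle when choosing the displacing point, which is where the added global search ability comes from.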
Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions
International Nuclear Information System (INIS)
Gunnink, R.
1983-06-01
Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples
Determining Complex Structures using Docking Method with Single Particle Scattering Data
Directory of Open Access Journals (Sweden)
Haiguang Liu
2017-04-01
Full Text Available Protein complexes are critical for many molecular functions. Due to the intrinsic flexibility and dynamics of complexes, their structures are more difficult to determine using conventional experimental methods than those of individual subunits. One of the major challenges is the crystallization of protein complexes. Using X-ray free electron lasers (XFELs), it is possible to collect scattering signals from non-crystalline protein complexes, but data interpretation is more difficult because of unknown orientations. Here, we propose a hybrid approach to determine protein complex structures by combining XFEL single particle scattering data with computational docking methods. Using simulated data, we demonstrate that a small set of single particle scattering data collected at random orientations can be used to distinguish the native complex structure from decoys generated using docking algorithms. The results also indicate that a small set of single particle scattering data is superior to a spherically averaged intensity profile in distinguishing complex structures. Given that XFEL experimental data are difficult to acquire and of low abundance, this hybrid approach should find wide applications in data interpretation.
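The ranking step — scoring docking decoys against reference scattering data — can be illustrated with the spherically averaged Debye formula on toy bead models. Coordinates and perturbation scales are invented, and note that real XFEL single-particle data are orientation-resolved rather than spherically averaged, which is exactly the distinction the abstract draws:

```python
import numpy as np

def debye_intensity(coords, q):
    """Spherically averaged I(q) via the Debye formula, unit form factors."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # np.sinc(x) = sin(pi x)/(pi x), so sin(qr)/(qr) = np.sinc(qr/pi)
    return np.sinc(q[:, None, None] * d[None, :, :] / np.pi).sum(axis=(1, 2))

rng = np.random.default_rng(0)
q = np.linspace(0.05, 1.0, 30)
native = 3.0 * rng.normal(size=(12, 3))      # toy bead model of the "native" complex
# Docking "decoys": the native geometry perturbed by increasing amounts.
decoys = [native + rng.normal(scale=s, size=native.shape) for s in (0.1, 1.0, 3.0)]

I_ref = debye_intensity(native, q)
chi2 = [float(np.mean((debye_intensity(dc, q) - I_ref) ** 2)) for dc in decoys]
print(chi2[0] < chi2[1] < chi2[2])  # the mildest perturbation fits the reference best
```

Replacing `debye_intensity` with oriented single-shot intensities and comparing sets of patterns is the step that gives the hybrid approach its extra discriminating power.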
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
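The complex symmetric case admits a particularly economical Krylov iteration: replacing the Hermitian inner product of classical CG with the unconjugated bilinear form x·y = Σ xᵢyᵢ gives the COCG method, one of the specialized complex symmetric solvers this line of work covers. Below is a minimal pure-Python sketch; the 2×2 test matrix in the usage note is an invented stand-in for a discretized complex Helmholtz operator, not an example from the thesis.

```python
def cocg(A, b, tol=1e-12, max_iter=50):
    """Conjugate Orthogonal Conjugate Gradient (COCG) for complex
    symmetric systems A x = b (A == A^T, but A != A^H).
    A is a list of rows; b and the result are lists of complex numbers."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))  # unconjugated
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0j] * n
    r = b[:]                      # residual for the zero initial guess
    p = r[:]
    rho = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rho / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        rho_new = dot(r, r)
        beta = rho_new / rho
        rho = rho_new
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x
```

For example, `cocg([[4+1j, 1+0j], [1+0j, 3+2j]], [1+0j, 2+0j])` converges in two steps, as expected for a 2×2 system in exact arithmetic.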
Modern methods of surveyor observations in opencast mining under complex hydrogeological conditions.
Usoltseva, L. A.; Lushpei, V. P.; Mursin, VA
2017-10-01
The article considers the possibility of applying modern surveying methods to the safety of open-pit mining operations in order to improve industrial safety in the Primorsky Territory, as well as their use in the educational process. Industrial safety in surface mining management depends largely on the methods applied to assess the stability of pit walls and dump slopes under complex mining and hydrogeological conditions.
Simultaneous analysis of qualitative parameters of solid fuel using complex neutron gamma method
International Nuclear Information System (INIS)
Dombrovskij, V.P.; Ajtsev, N.I.; Ryashchikov, V.I.; Frolov, V.K.
1983-01-01
A study was made of a complex neutron gamma method for the simultaneous analysis of the carbon content, ash content and humidity of solid fuel from the gamma radiation of inelastic fast-neutron scattering and the radiative capture of thermal neutrons. The metrological characteristics of pulsed and stationary neutron gamma methods for determining the quality parameters of solid fuel were analyzed, taking coke breeze as an example. Optimal energy ranges for gamma-radiation detection (2-8 MeV) were determined. The advantages of using a pulsed neutron generator for the complex analysis of the quality parameters of solid fuel in large masses were shown
IMPACT OF MATRIX INVERSION ON THE COMPLEXITY OF THE FINITE ELEMENT METHOD
Directory of Open Access Journals (Sweden)
M. Sybis
2016-04-01
Full Text Available Purpose. The development of a broad construction market and the desire to design innovative architectural building constructions have created the need for complex numerical models of objects with ever higher computational complexity. The purpose of this work is to show that choosing a proper method for solving the set of equations can reduce the calculation time (and hence the complexity) by several orders of magnitude. Methodology. The article presents an analysis of the impact of the matrix inversion algorithm on the deflection calculation in a beam, using the finite element method (FEM). Based on a literature analysis, common methods of solving sets of equations were identified. Of these, the Gaussian elimination, LU decomposition and Cholesky decomposition methods were implemented to determine the effect of the algorithm used for solving the equation set on the number of computational operations performed. In addition, each of the implemented methods was further optimized, thereby reducing the number of necessary arithmetic operations. Findings. These optimizations exploit certain properties of the matrix, such as symmetry or a significant number of zero elements. The results of the analysis are presented for the division of the beam into 5, 50, 100 and 200 nodes, for which the deflection was calculated. Originality. The main achievement of this work is that it shows the impact of the chosen methodology on the complexity of solving the problem (or, equivalently, the time needed to obtain results). Practical value. The difference between the best (least complex) and the worst (most complex) methods amounts to several orders of magnitude. This result shows that choosing the wrong methodology may significantly increase the time needed to perform the calculation.
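For the symmetric positive definite stiffness matrices that FEM produces, Cholesky factorization is the cheapest of the three approaches the abstract names, needing roughly n³/3 multiplications versus about 2n³/3 for Gaussian elimination or general LU. A minimal pure-Python sketch of a Cholesky-based solve; the tridiagonal test matrix in the usage note is a hypothetical stand-in for a small beam stiffness matrix.

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T for a symmetric
    positive definite A, using ~n^3/3 multiplications."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_spd(A, b):
    """Solve A x = b via Cholesky: forward then back substitution."""
    n = len(b)
    L = cholesky(A)
    y = [0.0] * n
    for i in range(n):                       # forward solve  L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):             # back solve  L^T x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

For example, `solve_spd([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]], [1.0, 2.0, 3.0])` returns the nodal deflections for this toy system.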
An image overall complexity evaluation method based on LSD line detection
Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo
2017-04-01
In the man-made world, both a city's traffic roads and engineered buildings contain many linear features. Research on image complexity based on linear information has therefore become an important direction in the digital image processing field. This paper detects the straight-line information in an image and uses straight lines as parameter indices to establish a quantitative, accurate mathematical relationship. We use the LSD line detection algorithm, which has a good straight-line detection effect, to detect the lines, and classify the detected lines by an expert consultation strategy. A neural network is then used to train the weights and obtain the weight coefficients of the indices, and the image complexity is calculated by the complexity calculation model. The experimental results show that the proposed method is effective. The number of straight lines in the image and their degree of dispersion, uniformity and so on all affect the complexity of the image.
Computational study of formamide-water complexes using the SAPT and AIM methods
International Nuclear Information System (INIS)
Parreira, Renato L.T.; Valdes, Haydee; Galembeck, Sergio E.
2006-01-01
In this work, the complexes formed between formamide and water were studied by means of the SAPT and AIM methods. Complexation leads to significant alterations in the geometries and electronic structure of formamide. Intermolecular interactions in the complexes are intense, especially in the cases where the solvent interacts with the carbonyl and amide groups simultaneously. In the transition states, the interaction between the water molecule and the lone pair on the amide nitrogen is also important. In all the complexes studied herein, the electrostatic interactions between formamide and water are the main attractive force, and their contribution may be five times as large as the corresponding contribution from dispersion, and twice as large as the contribution from induction. However, an increase in the resonance of planar formamide with the successive addition of water molecules may suggest that the hydrogen bonds taking place between formamide and water have some covalent character
Directory of Open Access Journals (Sweden)
Guimarães Katia S
2006-04-01
Full Text Available Abstract Background Most cellular processes are carried out by multi-protein complexes, groups of proteins that bind together to perform a specific task. Some proteins form stable complexes, while other proteins form transient associations and are part of several complexes at different stages of a cellular process. A better understanding of this higher-order organization of proteins into overlapping complexes is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Results We propose a new method for identifying and representing overlapping protein complexes (or larger units called functional groups within a protein interaction network. We develop a graph-theoretical framework that enables automatic construction of such representation. We illustrate the effectiveness of our method by applying it to TNFα/NF-κB and pheromone signaling pathways. Conclusion The proposed representation helps in understanding the transitions between functional groups and allows for tracking a protein's path through a cascade of functional groups. Therefore, depending on the nature of the network, our representation is capable of elucidating temporal relations between functional groups. Our results show that the proposed method opens a new avenue for the analysis of protein interaction networks.
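The core idea of representing overlap can be sketched with plain set operations: treat each functional group as a set of proteins, connect groups that share members, and follow a protein through the groups it belongs to. The group names and proteins below are invented toy data, not the paper's TNFα/NF-κB or pheromone pathway examples, and this is only a schematic of the graph-theoretical framework, not its full construction.

```python
def shared_protein_graph(groups):
    """Given functional groups as {name: set_of_proteins}, return edges
    linking groups that share at least one protein, labelled with the
    shared proteins."""
    names = sorted(groups)
    edges = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = groups[a] & groups[b]
            if shared:
                edges[(a, b)] = shared
    return edges

def protein_path(groups, protein):
    """All functional groups a given protein participates in, i.e. its
    path through a cascade of overlapping groups."""
    return sorted(g for g, members in groups.items() if protein in members)
```

With `groups = {"G1": {"p1", "p2"}, "G2": {"p2", "p3"}, "G3": {"p4"}}`, protein `p2` links `G1` and `G2`, and `protein_path` traces it through both.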
Protein complex detection in PPI networks based on data integration and supervised learning method.
Yu, Feng; Yang, Zhi; Hu, Xiao; Sun, Yuan; Lin, Hong; Wang, Jian
2015-01-01
Revealing protein complexes is important for understanding the principles of cellular organization and function. High-throughput experimental techniques have produced a large number of protein interactions, which makes it possible to predict protein complexes from protein-protein interaction (PPI) networks. However, the small number of known physical interactions may limit protein complex detection. New PPI networks are constructed by integrating PPI datasets with the large and readily available PPI data from the biomedical literature, and the less reliable PPIs between two proteins are then filtered out based on the semantic and topological similarity of the two proteins. Finally, supervised learning protein complex detection (SLPC), which can make full use of the information in available known complexes, is applied to detect protein complexes on the new PPI networks. The experimental results of SLPC on two different categories of yeast PPI networks demonstrate the effectiveness of the approach: compared with the original PPI networks, best average improvements of 4.76, 6.81 and 15.75 percentage units in the F-score, accuracy and maximum matching ratio (MMR) are achieved, respectively; compared with the denoised PPI networks, best average improvements of 3.91, 4.61 and 12.10 percentage units in the F-score, accuracy and MMR are achieved, respectively; and compared with ClusterONE, the state-of-the-art complex detection method, on the denoised extended PPI networks, average improvements of 26.02 and 22.40 percentage units in the F-score and MMR are achieved, respectively. The experimental results show that the performance of SLPC improves substantially when new PPI data from the biomedical literature are integrated into the original and denoised PPI networks. In addition, our protein complex detection method achieves better performance than ClusterONE.
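The topological-similarity filter the abstract mentions can be illustrated with a Jaccard index over neighbour sets: interactions whose endpoints share few neighbours are treated as less reliable and dropped. The threshold and toy network below are assumptions for illustration; the semantic-similarity filter described alongside it would additionally need GO annotations, which this sketch omits.

```python
def jaccard(neighbors, a, b):
    """Topological similarity of proteins a, b as the Jaccard index of
    their neighbour sets in the PPI network."""
    na, nb = neighbors.get(a, set()), neighbors.get(b, set())
    union = na | nb
    return len(na & nb) / len(union) if union else 0.0

def filter_edges(edges, threshold=0.2):
    """Drop interactions whose endpoints share too few neighbours
    (one of the two reliability filters the abstract describes)."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    return [e for e in edges if jaccard(neighbors, *e) >= threshold]
```

On the toy network `[("A","B"), ("A","C"), ("B","C"), ("C","D")]`, the dangling edge `("C","D")` has Jaccard similarity 0 and is removed, while the triangle edges survive.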
International Nuclear Information System (INIS)
Vinsova, H.; Koudelkova, M.; Ernestova, M.; Jedinakova-Krizova, V.
2003-01-01
Many holmium and yttrium complex compounds of both organic and inorganic origin have been studied recently from the point of view of their radiopharmaceutical behavior. Complexes with Ho-166 and Y-90 can either be used directly as pharmaceutical preparations or be applied in conjugate form with a selected monoclonal antibody. In the latter case, appropriate bifunctional chelating agents are necessary for the indirect binding of the monoclonal antibody and the selected radionuclide. Our present study has focused on the characterization of radionuclide (metal)-ligand interactions using various analytical methods. Electromigration methods (capillary electrophoresis, capillary isotachophoresis), potentiometric titration and spectrophotometry have been tested for their potential to determine the conditional stability constants of holmium and yttrium complexes. The principle of the isotachophoretic determination of stability constants is based on the linear relation between the logarithm of the stability constant and the reduction of the zone of the complex. For the calculation of thermodynamic constants by potentiometry, it was first necessary to determine the protonation constants of the acid. These were calculated using the computer program LETAGROP Etitr from data obtained by potentiometric acid-base titration. Subsequently, the titration curves of holmium and yttrium with the studied ligands and the protonation constants of the corresponding acid were applied to calculate the metal-ligand stability constants. The spectrophotometric determination of the stability constants of selected systems was based on the titration of holmium and yttrium nitrate solutions with Arsenazo III, followed by the titration of the metal-Arsenazo III complex with the selected ligand. The data obtained have been evaluated using the computation program OPIUM. The results obtained by all the analytical methods tested in this study have been compared. It was found that the direct potentiometric titration technique could not be
Directory of Open Access Journals (Sweden)
Hongfen Gao
2014-01-01
Full Text Available This paper describes the application of the complex variable meshless manifold method (CVMMM to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.
Directory of Open Access Journals (Sweden)
Xiang Ding
2014-01-01
Full Text Available Project delivery planning is a key stage in which the project owner (or project investor) organizes the design, construction, and other operations of a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model that shows how project complexity, governance strength, and the market environment affect the project owner's decision on the PDM. Experimental results show that project owners usually choose the Design-Build method when the project is very complex, within a certain range. The paper also points out that the Design-Build method becomes the preferred choice when the potential contractors develop quickly. It provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.
Hybrid RANS/LES method for wind flow over complex terrain
DEFF Research Database (Denmark)
Bechmann, Andreas; Sørensen, Niels N.
2010-01-01
for flows at high Reynolds numbers. To reduce the computational cost of traditional LES, a hybrid method is proposed in which the near-wall eddies are modelled in a Reynolds-averaged sense. Close to walls, the flow is treated with the Reynolds-averaged Navier-Stokes (RANS) equations (unsteady RANS...... rough walls. Previous attempts at combining RANS and LES have resulted in unphysical transition regions between the two layers, but the present work improves this region by using a stochastic backscatter model. To demonstrate the ability of the proposed hybrid method, simulations are presented for wind...... the turbulent kinetic energy, whereas the new method captures the high turbulence levels well but underestimates the mean velocity. The presented results are for a relatively mild configuration of complex terrain, but the proposed method can also be used for highly complex terrain where the benefits of the new......
Fractional Complex Transform and exp-Function Methods for Fractional Differential Equations
Directory of Open Access Journals (Sweden)
Ahmet Bekir
2013-01-01
Full Text Available The exp-function method is presented for finding exact solutions of nonlinear fractional equations. New solutions are constructed by using the fractional complex transform to convert fractional differential equations into ordinary differential equations. The fractional derivatives are described in Jumarie's modified Riemann-Liouville sense. We apply the exp-function method to both time- and space-fractional nonlinear differential equations. As a result, some new exact solutions are successfully established.
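For orientation, the fractional complex transform used in this setting introduces new variables of the form

```latex
T=\frac{p\,t^{\alpha}}{\Gamma(1+\alpha)},\qquad
X=\frac{q\,x^{\beta}}{\Gamma(1+\beta)},
```

where p and q are free constants; under Jumarie's modified Riemann-Liouville derivative this reduces the fractional equation to an integer-order one in X and T, to which the exp-function ansatz can then be applied (a sketch of the standard transform, not a derivation from this particular paper).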
NetMHCcons: a consensus method for the major histocompatibility complex class I predictions
DEFF Research Database (Denmark)
Karosiene, Edita; Lundegaard, Claus; Lund, Ole
2012-01-01
A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depe...... at www.cbs.dtu.dk/services/NetMHCcons, and allows the user in an automatic manner to obtain the most accurate predictions for any given MHC molecule....
A general method for computing the total solar radiation force on complex spacecraft structures
Chan, F. K.
1981-01-01
The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.
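A common way to realize such a computation is to tessellate the spacecraft surface into flat facets and sum the standard flat-plate radiation-pressure force over the illuminated ones. The sketch below uses the textbook specular-plus-diffuse facet model, which is not necessarily the paper's exact formulation, and ignores self-shadowing between facets; `P` is the nominal solar radiation pressure at 1 AU.

```python
P = 4.56e-6  # N/m^2, solar radiation pressure at 1 AU

def srp_force(facets, s, rho_s, rho_d):
    """Total solar-radiation force on a faceted structure.
    facets: list of (area_m2, outward unit normal) tuples.
    s: unit vector from spacecraft toward the Sun.
    rho_s, rho_d: specular and diffuse reflectivity coefficients."""
    F = [0.0, 0.0, 0.0]
    for area, n in facets:
        c = sum(si * ni for si, ni in zip(s, n))   # cos(incidence angle)
        if c <= 0.0:
            continue                               # facet not illuminated
        k_n = 2.0 * (rho_s * c + rho_d / 3.0)      # normal-direction term
        for i in range(3):
            F[i] -= P * area * c * ((1.0 - rho_s) * s[i] + k_n * n[i])
    return F
```

Sanity checks: a perfectly absorbing 1 m² plate facing the Sun feels a force of magnitude P directed away from the Sun, and a perfect specular reflector feels 2P.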
A method for evaluating the problem complex of choosing the ventilation system for a new building
DEFF Research Database (Denmark)
Hviid, Christian Anker; Svendsen, Svend
2007-01-01
The application of a ventilation system in a new building is a multidimensional complex problem that involves quantifiable and non-quantifiable data like energy consumption, indoor environment, building integration and architectural expression. This paper presents a structured method for evaluat...
Simulation As a Method To Support Complex Organizational Transformations in Healthcare
Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos
2010-01-01
In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that
Functional analytic methods in complex analysis and applications to partial differential equations
International Nuclear Information System (INIS)
Mshimba, A.S.A.; Tutschke, W.
1990-01-01
The volume contains 24 lectures given at the Workshop on Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations held in Trieste, Italy, between 8-19 February 1988, at the ICTP. A separate abstract was prepared for each of these lectures. Refs and figs
Structure of the automated educational-methodical complex for technical disciplines
Directory of Open Access Journals (Sweden)
Вячеслав Михайлович Дмитриев
2010-12-01
Full Text Available The article poses and solves the problem of automating and informatizing the training of students on the basis of the introduced system-organizational forms known collectively as educational-methodical complexes for a discipline.
Global Learning in a Geography Course Using the Mystery Method as an Approach to Complex Issues
Applis, Stefan
2014-01-01
In the study which is the foundation of this essay, the question is examined of whether the complexity of global issues can be solved at the level of teaching methodology. In this context, the first qualitative and constructive study was carried out which researches the Mystery Method using the Thinking-Through-Geography approach (David Leat,…
Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng
2015-01-01
Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
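For read counts over APA sites in two samples, the independence test the authors favour reduces to a Pearson chi-square test on a 2×K contingency table: a large statistic relative to the critical value flags a site-switching event, whether the switching pattern is simple or complex. A minimal sketch of the statistic (the counts in the usage note are invented):

```python
def chi2_independence(table):
    """Pearson chi-square statistic for an R x C contingency table
    (rows = samples, columns = APA sites), with its degrees of
    freedom (R-1)(C-1)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df
```

For example, the strongly switching table `[[90, 10], [10, 90]]` gives a statistic of 128.0 with 1 degree of freedom, far above any usual critical value, while a balanced table gives 0.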
Directory of Open Access Journals (Sweden)
Y. Zhao
2017-06-01
Full Text Available Local line rolling forming is a common approach for forming the complex-curvature plates of ships. However, a processing mode based on artificial experience is still applied at present, because it is difficult to determine, in an integrated way, the relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of an automated local line rolling forming system for producing the complex-curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain, was presented. Compared with the results of the three-dimensional elastoplastic finite element method, this simplified method was verified to provide high computational accuracy while substantially reducing calculation time. The application of the simplified deformation simulation method was therefore further explored in the case of multiple rolling loading paths, and it was also used to calculate the local line rolling forming of a typical complex-curvature plate of a ship. The research findings indicate that the simplified deformation simulation method is an effective tool for rapidly obtaining the relationships between the forming shape, processing path, and process parameters.
Ravichandran, R; Rajendran, M; Devapiriam, D
2014-03-01
Quercetin has been found to chelate cadmium ions and scavenge the free radicals produced by cadmium. Hence, a new complex of quercetin with cadmium was synthesised, and the structure of the synthesised complex was determined by UV-vis spectrophotometry, infrared spectroscopy, thermogravimetry and differential thermal analysis (UV-vis, IR, TGA and DTA). The equilibrium stability constants of the quercetin-cadmium complex were determined by Job's method. The stability constant of the quercetin-cadmium complex is 2.27×10⁶ at pH 4.4 and 7.80×10⁶ at pH 7.4. It was found that quercetin and the cadmium ion form a 1:1 complex at both pH 4.4 and pH 7.4. The structure of the compound was elucidated on the basis of the obtained results. Furthermore, the antioxidant activities of free quercetin and the quercetin-cadmium complex were determined by DPPH and ABTS assays. Copyright © 2013 Elsevier Ltd. All rights reserved.
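Job's method (the method of continuous variation) reads the stoichiometry off the mole fraction at which the complex's corrected absorbance peaks: a maximum at x = 0.5 indicates the 1:1 complex reported here, a maximum at x = 2/3 would indicate 2:1, and so on. A minimal sketch with invented absorbance data:

```python
def job_stoichiometry(mole_fractions, absorbances):
    """Continuous-variation (Job's) method: given corrected absorbances
    measured at a series of metal mole fractions (constant total
    concentration), return the metal:ligand ratio n = x_max / (1 - x_max)
    at the absorbance maximum."""
    x_max = max(zip(absorbances, mole_fractions))[1]
    return x_max / (1.0 - x_max)
```

With a Job plot peaking at mole fraction 0.5, the function returns 1.0, i.e. a 1:1 complex.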
Optimization of a method for preparing solid complexes of essential clove oil with β-cyclodextrins.
Hernández-Sánchez, Pilar; López-Miranda, Santiago; Guardiola, Lucía; Serrano-Martínez, Ana; Gabaldón, José Antonio; Nuñez-Delicado, Estrella
2017-01-01
Clove oil (CO) is an aromatic oily liquid used in the food, cosmetics and pharmaceutical industries for its functional properties. However, its disadvantages of pungent taste, volatility, light sensitivity and poor water solubility can be overcome by applying microencapsulation or complexation techniques. Essential CO was successfully solubilized in aqueous solution by forming inclusion complexes with β-cyclodextrins (β-CDs). Moreover, phase solubility studies demonstrated that essential CO also forms insoluble complexes with β-CDs. Based on these results, essential CO-β-CD solid complexes were prepared by the novel approach of microwave irradiation (MWI), followed by three different drying methods: vacuum oven drying (VO), freeze-drying (FD) or spray-drying (SD). FD was the best option for drying the CO-β-CD solid complexes, followed by VO and SD. MWI can be used efficiently to prepare essential CO-β-CD complexes with good yield on an industrial scale. © 2016 Society of Chemical Industry.
Directory of Open Access Journals (Sweden)
Олег Богданович ЗАЧКО
2016-03-01
Full Text Available Methods and models of safety-oriented project management for the development of complex systems are proposed, resulting from the convergence of existing approaches in project management, in contrast to the mechanism of value-oriented management. A cognitive model of safety-oriented project management of the development of complex systems is developed, which provides a synergistic effect: moving the system from its original (pre-project) condition to a post-project state that is optimal from the viewpoint of life safety. An approach to assessing project complexity is proposed, which consists in taking into account the seasonal component of the time characteristic of the life cycles of complex organizational and technical systems with occupancy. This makes it possible to account for the seasonal component in simulation models of the life cycle of product operation in a complex organizational and technical system, to model the critical points of operation of systems with occupancy, and to form a new methodology for the safety-oriented management of projects, programs and portfolios of projects with formalization of the elements of complexity.
Dondelinger, Robert M
2004-01-01
This complex method of equipment replacement planning is a methodology; it is a means to an end, a process that focuses on the equipment most in need of replacement rather than on the end itself. It uses data available from the maintenance management database and attempts to quantify the subjective items that are important in making equipment replacement decisions. Like the simple method of the last issue, it is a starting point, albeit an advanced one, which the user can modify to fit their particular organization, but the complex method leaves room for expansion. It is based on sound logic and documented facts, is fully defensible during the decision-making process, and will serve your organization well by providing a structure for your equipment replacement planning decisions.
Directory of Open Access Journals (Sweden)
Ru Liang
2018-01-01
Full Text Available The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly hard for a single contractor to "go it alone". As a consequence, joint-venture contractors with diverse strengths and weaknesses bid cooperatively. Understanding project complexity and deciding on the optimal joint-venture contractor are challenging. This paper studies how to select joint-venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one based on ideal points and the other on balanced ideal advantages. Both methods account for individual differences in expert judgment and for contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China demonstrates how to apply the two methods and their advantages.
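The ideal-point idea can be sketched as a TOPSIS-style ranking: each candidate contractor is scored by its relative closeness to the best observed attribute values and distance from the worst. This is an illustrative reconstruction of the general technique, not the paper's exact model; the weights and score matrix below are invented.

```python
def ideal_point_rank(scores, weights):
    """Rank alternatives by weighted closeness to the ideal point.
    scores[i][j]: alternative i on benefit attribute j (higher = better).
    Returns indices from best to worst."""
    m, n = len(scores), len(scores[0])
    best = [max(s[j] for s in scores) for j in range(n)]    # ideal point
    worst = [min(s[j] for s in scores) for j in range(n)]   # anti-ideal
    closeness = []
    for s in scores:
        d_best = sum(weights[j] * (best[j] - s[j]) ** 2 for j in range(n)) ** 0.5
        d_worst = sum(weights[j] * (s[j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        closeness.append(d_worst / (d_best + d_worst))
    return sorted(range(m), key=lambda i: -closeness[i])
```

With three hypothetical contractors scored on two attributes, `ideal_point_rank([[0.9, 0.8], [0.6, 0.9], [0.4, 0.5]], [0.6, 0.4])` ranks the first contractor best.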
International Nuclear Information System (INIS)
Ramakrishna Reddy, S.; Srinivasan, R.; Mallika, C.; Kamachi Mudali, U.; Natarajan, R.
2012-01-01
Spectrophotometric methods employing numerous chromogenic reagents such as thiourea, 1,10-phenanthroline, thiocyanate and tropolone are reported in the literature for the estimation of very low concentrations of Ru. In the present work, a sensitive spectrophotometric method has been developed for the determination of ruthenium in the concentration range 1.5 to 6.5 ppm. The method is based on the reaction of ruthenium with barbituric acid to produce the ruthenium(II) tris-violurate complex, [Ru(H₂Va)₃]⁻, which gives a stable deep-red coloured solution. The maximum absorption of the complex is at 491 nm due to the inverted t₂g → π(L-L ligand) electron-transfer transition. The molar absorptivity of the coloured species is 9851 dm³ mol⁻¹ cm⁻¹
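With the reported molar absorptivity, an unknown ruthenium concentration follows directly from the Beer-Lambert law, c = A/(εl). A minimal sketch; the 1 cm cell path length is an assumption for illustration.

```python
def concentration(absorbance, molar_absorptivity, path_cm=1.0):
    """Beer-Lambert law: concentration in mol dm^-3 from absorbance A,
    molar absorptivity eps (dm^3 mol^-1 cm^-1) and path length l (cm)."""
    return absorbance / (molar_absorptivity * path_cm)
```

For the complex here (ε = 9851 dm³ mol⁻¹ cm⁻¹ at 491 nm), an absorbance of 0.9851 in a 1 cm cell corresponds to 1.0×10⁻⁴ mol dm⁻³.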
Complex Hand Dexterity: A Review of Biomechanical Methods for Measuring Musical Performance
Directory of Open Access Journals (Sweden)
Cheryl Diane Metcalf
2014-05-01
Full Text Available Complex hand dexterity is fundamental to our interactions with the physical, social and cultural environment. Dexterity can be an expression of creativity and precision in a range of activities, including musical performance. Little is understood about complex hand dexterity or how virtuoso expertise is acquired, owing to the versatility of movement combinations available to complete any given task. This has historically limited progress in the field because of the difficulty of measuring movements of the hand. Recent developments in methods of motion capture and analysis mean it is now possible to explore the intricate movements of the hand and fingers. These methods offer insights into the neurophysiological mechanisms underpinning complex hand dexterity and motor learning. They also allow investigation into the key factors that contribute to injury, recovery and functional compensation. The application of such analytical techniques within musical performance provides a multidisciplinary framework for purposeful investigation into the process of learning and skill acquisition in instrumental performance. These highly skilled manual and cognitive tasks represent the ultimate achievement in complex hand dexterity. This paper reviews methods of assessing instrumental performance in music, focusing specifically on biomechanical measurement and the associated technical challenges faced when measuring highly dexterous activities.
Method for data compression by associating complex numbers with files of data values
Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur
1998-02-10
A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
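The decompression step can be sketched with Newton's method: each entry's point in the complex plane is iterated on the polynomial until it converges to a root, and the value assigned to that root becomes the entry. The polynomial, roots, and value map below are toy assumptions for illustration; the patent's EDC search algorithm for choosing them is not reproduced here.

```python
def root_value(z, roots, values, max_iter=60, tol=1e-9):
    """Reproduce one RGDF entry: iterate Newton's method on
    p(z) = prod_k (z - roots[k]) from the point z, then return the
    value assigned to the root the iteration lands on."""
    for _ in range(max_iter):
        p = 1.0 + 0j
        dp = 0.0 + 0j
        for r in roots:
            dp = dp * (z - r) + p     # product-rule update of p'(z)
            p *= (z - r)
        if abs(p) < tol:
            break
        z -= p / dp                   # Newton step
    k = min(range(len(roots)), key=lambda i: abs(z - roots[i]))
    return values[k]
```

For p(z) = z² − 1 with values "a" and "b" attached to the roots +1 and −1, a point near +1 decodes to "a" and a point near −1 decodes to "b"; the basin boundaries are what make the generated files rich enough to approximate a target file.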
Workshop on Recent Trends in Complex Methods for Partial Differential Equations
Celebi, A; Tutschke, Wolfgang
1999-01-01
This volume is a collection of manuscripts mainly originating from talks and lectures given at the Workshop on Recent Trends in Complex Methods for Partial Differential Equations held from July 6 to 10, 1998 at the Middle East Technical University in Ankara, Turkey, sponsored by The Scientific and Technical Research Council of Turkey and the Middle East Technical University. This workshop is a continuation of two workshops from 1988 and 1993 at the International Centre for Theoretical Physics in Trieste, Italy entitled Functional analytic Methods in Complex Analysis and Applications to Partial Differential Equations. Classical complex analysis of one and several variables has a long tradition and has reached a high level of development, but most of its basic problems are solved nowadays, so that within the last few decades it has attracted less and less attention. The area of complex and functional analytic methods in partial differential equations, however, is still a growing and flourishing field, in particular as these ...
Cork-resin ablative insulation for complex surfaces and method for applying the same
Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)
1980-01-01
A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.
Wang, Yu; Chou, Chia-Chun
2018-05-01
The coupled complex quantum Hamilton-Jacobi equations for electronic nonadiabatic transitions are approximately solved by propagating individual quantum trajectories in real space. Equations of motion are derived through use of the derivative propagation method for the complex actions and their spatial derivatives for wave packets moving on each of the coupled electronic potential surfaces. These equations for two surfaces are converted into the moving frame with the same grid point velocities. Excellent wave functions can be obtained by making use of the superposition principle even when nodes develop in wave packet scattering.
A novel method for preparation of HAMLET-like protein complexes.
Permyakov, Sergei E; Knyazeva, Ekaterina L; Leonteva, Marina V; Fadeev, Roman S; Chekanov, Aleksei V; Zhadan, Andrei P; Håkansson, Anders P; Akatov, Vladimir S; Permyakov, Eugene A
2011-09-01
Some natural proteins induce tumor-selective apoptosis. α-Lactalbumin (α-LA), a milk calcium-binding protein, is converted into an antitumor form, called HAMLET/BAMLET, via partial unfolding and association with oleic acid (OA). Besides triggering multiple cell death mechanisms in tumor cells, HAMLET exhibits bactericidal activity against Streptococcus pneumoniae. The existing methods for preparation of active complexes of α-LA with OA employ neutral pH solutions, which greatly limit the water solubility of OA. Therefore, these methods suffer from low scalability and/or heterogeneity of the resulting α-LA-OA samples. In this study we present a novel method for preparation of α-LA-OA complexes using alkaline conditions that favor the aqueous solubility of OA. The unbound OA is removed by precipitation under acidic conditions. The resulting sample, bLA-OA-45, bears 11 OA molecules and exhibits physico-chemical properties similar to those of BAMLET. Cytotoxic activities of bLA-OA-45 against human epidermoid larynx carcinoma and S. pneumoniae D39 cells are close to those of HAMLET. Treatment of S. pneumoniae with bLA-OA-45 or HAMLET induces depolarization and rupture of the membrane. The cells are markedly rescued from death upon pretreatment with an inhibitor of Ca(2+) transport. Hence, the activation mechanisms of S. pneumoniae death are analogous for these two complexes. The developed rapid method for preparation of the active α-LA-OA complex is high-throughput and well suited for development of other protein complexes with low-molecular-weight amphiphilic substances possessing valuable cytotoxic properties.
Models, methods and software tools for building complex adaptive traffic systems
International Nuclear Information System (INIS)
Alyushin, S.A.
2011-01-01
The paper studies modern methods and tools for simulating the behavior of complex adaptive systems (CAS) and existing traffic modeling systems in simulators and their characteristics, and proposes requirements for assessing the suitability of a system to simulate CAS behavior in simulators. The author has developed a model of adaptive agent representation and its functioning environment to meet the requirements set above, and has presented methods of agents' interactions and methods of conflict resolution in simulated traffic situations. A simulation system realizing computer modeling of CAS behavior in traffic situations has been created.
A low complexity method for the optimization of network path length in spatially embedded networks
International Nuclear Information System (INIS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li; Ming, Yong; Chen, Sheng-Yong; Wang, Wan-Liang
2014-01-01
The average path length of a network is an important index reflecting the network transmission efficiency. In this paper, we propose a new method of decreasing the average path length by adding edges. A new indicator is presented, incorporating traffic flow demand, to assess the decrease in the average path length when a new edge is added during the optimization process. With the help of the indicator, edges are selected and added into the network one by one. The new method has a relatively small time computational complexity in comparison with some traditional methods. In numerical simulations, the new method is applied to some synthetic spatially embedded networks. The result shows that the method can perform competitively in decreasing the average path length. Then, as an example of an application of this new method, it is applied to the road network of Hangzhou, China. (paper)
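The edge-addition idea above can be illustrated with a deliberately naive greedy sketch: a BFS-based average-path-length measure stands in for the paper's traffic-flow-weighted indicator, and every candidate edge is scored by full recomputation. The paper's actual indicator avoids exactly this cost; the graph and function names here are invented for the example.

```python
from collections import deque
from itertools import combinations

def avg_path_length(n, edges):
    """Mean shortest-path length over all ordered connected pairs (BFS per source)."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

def greedy_add_edges(n, edges, k):
    """Add k edges one at a time, each time picking the candidate that most
    reduces the average path length (naive stand-in for the paper's indicator)."""
    edges = list(edges)
    for _ in range(k):
        existing = {frozenset(e) for e in edges}
        best = min((c for c in combinations(range(n), 2)
                    if frozenset(c) not in existing),
                   key=lambda c: avg_path_length(n, edges + [c]))
        edges.append(best)
    return edges
```

On a 6-node ring the average path length is 1.8; adding a single greedily chosen chord reduces it, which is the behavior the indicator-based method achieves at lower computational cost.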
International Nuclear Information System (INIS)
Zolin, V.F.; Koreneva, L.G.; Serbinova, T.A.; Tsaryuk, V.I.
1975-01-01
The structure of pyridoxalidene amino acid complexes was studied by circular dichroism, magnetic circular dichroism and luminescence spectroscopy. It was shown that these are two-ligand complexes, whereby in the case of those based on valine, leucine and isoleucine the chromophores are almost perpendicular to one another. In the case of complexes based on glycine and alanine the co-ordination sphere is strongly deformed. (author)
Energy Technology Data Exchange (ETDEWEB)
Fresco, G F [Genoa Univ. (Italy). Dept. of Internal Medicine
1978-06-01
A new RIA method for the detection of circulating immune complexes and antibodies arising in the course of viral hepatitis is described. It involves the use of 125I-labeled antibodies and allows the use of immune complex-coated polypropylene tubes. This simple and sensitive procedure takes into account the possibility that the immune complexes may be adsorbed onto the surface of polypropylene tubes during the period in which the serum remains there.
Microreactor and method for preparing a radiolabeled complex or a biomolecule conjugate
Energy Technology Data Exchange (ETDEWEB)
Reichert, David E; Kenis, Paul J. A.; Wheeler, Tobias D; Desai, Amit V; Zeng, Dexing; Onal, Birce C
2015-03-17
A microreactor for preparing a radiolabeled complex or a biomolecule conjugate comprises a microchannel for fluid flow, where the microchannel comprises a mixing portion comprising one or more passive mixing elements, and a reservoir for incubating a mixed fluid. The reservoir is in fluid communication with the microchannel and is disposed downstream of the mixing portion. A method of preparing a radiolabeled complex includes flowing a radiometal solution comprising a metallic radionuclide through a downstream mixing portion of a microchannel, where the downstream mixing portion includes one or more passive mixing elements, and flowing a ligand solution comprising a bifunctional chelator through the downstream mixing portion. The ligand solution and the radiometal solution are passively mixed while in the downstream mixing portion to initiate a chelation reaction between the metallic radionuclide and the bifunctional chelator. The chelation reaction is completed to form a radiolabeled complex.
Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)
2001-01-01
The rapid increase in digital data volumes from new and existing sensors calls for efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, that have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces of higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
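The triangular prism method mentioned above can be sketched as follows, assuming the classic formulation (four triangular facets per cell rising to an apex at the cell-centre mean, with the log-log slope of total facet area versus step size giving the dimension). The ICAMS implementation details and its modification are not reproduced here.

```python
import numpy as np

def tri_area(a, b, c, s):
    # Area of the 3-D triangle spanned by two cell corners (heights a and b,
    # a horizontal step s apart) and the cell-centre apex (height c); by
    # symmetry the same formula serves all four facets of a cell.
    u = np.array([s, 0.0, b - a])
    v = np.array([s / 2.0, s / 2.0, c - a])
    return 0.5 * np.linalg.norm(np.cross(u, v))

def triangular_prism_fd(z):
    """Triangular-prism fractal dimension of a square elevation grid
    (sketch; grids of side 2**m + 1 divide evenly)."""
    n = z.shape[0]
    steps, areas = [], []
    s = 1
    while 2 * s < n:
        total = 0.0
        for i in range(0, n - s, s):
            for j in range(0, n - s, s):
                p1, p2 = z[i, j], z[i, j + s]
                p3, p4 = z[i + s, j + s], z[i + s, j]
                c = (p1 + p2 + p3 + p4) / 4.0  # prism apex height
                for a, b in ((p1, p2), (p2, p3), (p3, p4), (p4, p1)):
                    total += tri_area(a, b, c, s)
        steps.append(s)
        areas.append(total)
        s *= 2
    slope = np.polyfit(np.log(steps), np.log(areas), 1)[0]
    return 2.0 - slope
```

A perfectly flat surface has constant facet area at every step size, so the slope is zero and the estimated dimension is exactly 2; rougher surfaces lose area as the step grows, pushing the estimate above 2.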
EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST
Directory of Open Access Journals (Sweden)
C. H. Tu
2012-07-01
The prediction of species distribution has become a focus in ecology. For predicting a result more effectively and accurately, some novel methods have been proposed recently, such as the support vector machine (SVM) and maximum entropy (MAXENT) methods. However, high complexity in the forest, like that in Taiwan, makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in a complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), growing widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLCs. We evaluated these models with two sets of independent samples at different sites and examined the effect of forest complexity by changing the background sample size (BSZ). In the forest with low complexity (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limit the predicted habitat to areas close to the samples. Therefore, the MAXENT model was more applicable for predicting a species' potential habitat in the complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLCs.
Ishida, Akihiko; Yamada, Yasuko; Kamidate, Tamio
2008-11-01
In hygiene management, there has recently been a significant need for screening methods for microbial contamination that rely on visual observation or commonly used colorimetric apparatus. The amount of adenosine triphosphate (ATP) can serve as an index of microbial contamination. This paper describes the development of a colorimetric method for the assay of ATP, using enzymatic cycling and Fe(III)-xylenol orange (XO) complex formation. The color characteristics of the Fe(III)-XO complexes, which show a distinct color change from yellow to purple, assist visual observation in screening work. In this method, a trace amount of ATP is converted to pyruvate, which is further amplified exponentially with coupled enzymatic reactions. Eventually, pyruvate is converted to the Fe(III)-XO complexes through the pyruvate oxidase reaction and Fe(II) oxidation. As the assay readout, a yellow or purple color is observed: yellow indicates that the ATP concentration is lower than the criterion of the test, and purple indicates that the ATP concentration is higher than the criterion. The method was applied to the assay of ATP extracted from Escherichia coli cells added to cow milk.
Network reliability analysis of complex systems using a non-simulation-based method
International Nuclear Information System (INIS)
Kim, Youngsuk; Kang, Won-Hee
2013-01-01
Civil infrastructures such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often establish highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to provide proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
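The RDA itself, with its intersection/union decomposition for multiple node pairs, is beyond a short sketch, but the family of non-simulation decomposition methods it belongs to can be illustrated with a simple edge-factoring recursion for a single source-terminal pair: condition on one edge being up (contract it) or down (delete it) and recurse, summing the probability-weighted results.

```python
def two_terminal_reliability(edges, s, t):
    """Exact two-terminal reliability of an undirected network.
    edges: list of (u, v, p) with independent working probability p.
    Recursion: R = p * R(edge contracted) + (1 - p) * R(edge removed).
    (A simple relative of the RDA, shown for one node pair only.)"""
    if s == t:
        return 1.0          # source already connected to terminal
    if not edges:
        return 0.0          # no edges left, still disconnected
    (u, v, p), rest = edges[0], edges[1:]
    # Edge down: simply drop it.
    r_down = two_terminal_reliability(rest, s, t)
    # Edge up: contract v into u, dropping any self-loops created.
    merged = [(u if a == v else a, u if b == v else b, q) for a, b, q in rest]
    merged = [(a, b, q) for a, b, q in merged if a != b]
    r_up = two_terminal_reliability(merged,
                                    u if s == v else s,
                                    u if t == v else t)
    return p * r_up + (1 - p) * r_down
```

Series and parallel sanity checks behave as expected: two 0.9-reliable edges give 0.81 in series and 0.99 in parallel. Like the RDA, the recursion is exact and simulation-free, but without system-level simplifications its worst case grows exponentially in the number of edges.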
Viegas, Carla; Sabino, Raquel; Botelho, Daniel; dos Santos, Mateus; Gomes, Anita Quintal
2015-09-01
Cork oak is the second most dominant forest species in Portugal and makes this country the world leader in cork export. Occupational exposure to Chrysonilia sitophila and the Penicillium glabrum complex in cork industry is common, and the latter fungus is associated with suberosis. However, as conventional methods seem to underestimate its presence in occupational environments, the aim of our study was to see whether information obtained by polymerase chain reaction (PCR), a molecular-based method, can complement conventional findings and give a better insight into occupational exposure of cork industry workers. We assessed fungal contamination with the P. glabrum complex in three cork manufacturing plants in the outskirts of Lisbon using both conventional and molecular methods. Conventional culturing failed to detect the fungus at six sampling sites in which PCR did detect it. This confirms our assumption that the use of complementing methods can provide information for a more accurate assessment of occupational exposure to the P. glabrum complex in cork industry.
Directory of Open Access Journals (Sweden)
I. L. Dyachok
2016-08-01
Aim. To develop a sensitive, economical and rapid method for the quantitative determination of organic acids, calculated as isovaleric acid, in a complex polyherbal extract with the use of digital technologies. Materials and methods. A model complex polyherbal extract of sedative action was chosen as the research object. The extract is composed of these medicinal plants: Valeriana officinalis L., Crataegus, Melissa officinalis L., Hypericum, Mentha piperita L., Humulus lupulus, Viburnum. Based on the chemical composition of the plant components, we consider that the main pharmacologically active compounds in the complex polyherbal extract are: polyphenolic substances (flavonoids), contained in Crataegus, Viburnum, Hypericum, Mentha piperita L. and Humulus lupulus; organic acids, including isovaleric acid, contained in Valeriana officinalis L., Mentha piperita L., Melissa officinalis L. and Viburnum; and amino acids, contained in Valeriana officinalis L. For the determination of organic acids at low concentration we applied an instrumental method of analysis, namely conductometric titration, based on the dependence of the conductivity of an aqueous solution of the extract on its organic acid content. Results. The obtained analytical expressions, which describe the tangent lines to the conductometric curve before and after the equivalence point, allow determination of the volume of titrant consumed and enable the quantitative determination of organic acids in digital mode. Conclusion. The proposed method makes it possible to locate the equivalence point and carry out quantitative determination of organic acids, calculated as isovaleric acid, with the use of digital technologies, which allows the method to be fully computerized.
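The tangent-line construction described in the results can be sketched numerically: fit a straight line to the conductivity readings on each side of the equivalence region and take the volume at which the two lines intersect. The synthetic data and the split index are invented for the example; the paper's own analytical expressions are not reproduced.

```python
import numpy as np

def equivalence_volume(v, k, split):
    """Conductometric end-point: fit lines to conductivity k vs titrant
    volume v before and after index `split`; return the volume where the
    two tangents intersect (illustrative sketch)."""
    m1, b1 = np.polyfit(v[:split], k[:split], 1)   # descending branch
    m2, b2 = np.polyfit(v[split:], k[split:], 1)   # ascending branch
    return (b2 - b1) / (m1 - m2)

# Synthetic two-branch curve: k = 10 - 2v, then k = 2 + 0.5v.
v = np.arange(8.0)
k = np.concatenate([10.0 - 2.0 * v[:4], 2.0 + 0.5 * v[4:]])
v_eq = equivalence_volume(v, k, split=4)   # tangents cross at v = 3.2
```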
International Nuclear Information System (INIS)
Samin; Kris-Tri-Basuki; Farida-Ernawati
1996-01-01
The influence of atomic number on the complex formation constants and its application in a visible spectrophotometric method has been studied. Complex compounds of Y, Nd, Sm and Gd with alizarin red S were prepared in the mole fraction range of 0.20-0.53 and the pH range of 3.5-5. The optimum conditions for complex formation were found in the mole fraction range 0.30-0.53 and pH range 3.75-5, at a total concentration of 0.00030 M. The formation constants (β) of the alizarin red S complexes, determined by the continuous variation and matrix disintegration techniques, were β = (7.00 ± 0.64)×10⁹ for ₃₉Y, β = (4.09 ± 0.34)×10⁸ for ₆₀Nd, β = (7.26 ± 0.42)×10⁸ for ₆₂Sm and β = (8.38 ± 0.70)×10⁸ for ₆₄Gd. Thus among the lanthanides the formation constant increases with atomic number (Nd < Sm < Gd), while Y, which has the smallest atomic number (39), has the largest formation constant. The complexes can be used for sample analysis with detection limits of Y: 2.2×10⁻⁵ M, Nd: 2.9×10⁻⁵ M, Sm: 2.6×10⁻⁵ M and Gd: 2.4×10⁻⁵ M. The sensitivity of the analysis is Y>Gd>Sm>Nd. The Y₂O₃ product from xenotime sand contains 98.96 ± 1.40 % Y₂O₃, and the filtrate (product of monazite sand) contains Nd: 0.27 ± 0.002 M
Estimating the complexity of 3D structural models using machine learning methods
Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques
2016-04-01
Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional efforts are required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metric for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to reproduce the actual 3D model at a given precision without error using machine learning algorithms.
Detection of circulating immune complexes in breast cancer and melanoma by three different methods
Energy Technology Data Exchange (ETDEWEB)
Krapf, F; Renger, D; Fricke, M; Kemper, A; Schedel, I; Deicher, H
1982-08-01
By the simultaneous application of three methods, the C1q-binding test (C1q-BA), a two-antibody conglutinin binding ELISA and a polyethylene-glycol 6000 precipitation with subsequent quantitative determination of immunoglobulins and complement factors in the redissolved precipitates (PPLaNT), circulating immune complexes could be demonstrated in the sera of 94% of patients with malignant melanoma and of 75% of breast cancer patients. The specific detection rates of the individual methods varied between 23% (C1q-BA) and 46% (PPLaNT), presumably due to the presence of qualitatively different immune complexes in the investigated sera. Accordingly, the simultaneous use of the aforementioned assays resulted in an increased diagnostic sensitivity and a doubling of the predictive value. Nevertheless, because of the relatively low incidence of malignant diseases in the total population, and due to the fact that circulating immune complexes occur with considerable frequency in other, non-malignant diseases, tests for circulating immune complexes must be regarded as less useful parameters in the early diagnosis of cancer.
A ghost-cell immersed boundary method for flow in complex geometry
International Nuclear Information System (INIS)
Tseng, Y.-H.; Ferziger, Joel H.
2003-01-01
An efficient ghost-cell immersed boundary method (GCIBM) for simulating turbulent flows in complex geometries is presented. The boundary condition is enforced through a ghost-cell method. The reconstruction procedure allows systematic development of numerical schemes for treating the immersed boundary while preserving the overall second-order accuracy of the base solver. Both Dirichlet and Neumann boundary conditions can be treated. The current ghost-cell treatment is suitable for both staggered and non-staggered Cartesian grids. The accuracy of the current method is validated using flow past a circular cylinder and large eddy simulation of turbulent flow over a wavy surface. Numerical results are compared with experimental data and boundary-fitted grid results. The method is further extended to an existing ocean model (MITgcm) to simulate geophysical flow over a three-dimensional bump. The method is easily implemented, as evidenced by our use of several existing codes
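The core of any ghost-cell closure is setting the value at a grid node inside the body so that interpolation across the immersed boundary satisfies the boundary condition. The one-dimensional sketch below shows the simplest linear version for both condition types; the sign convention for the Neumann normal derivative and the function name are assumptions for illustration, and the paper's actual multi-dimensional reconstruction is richer than this.

```python
def ghost_cell_value(u_image, bc_type, bc_value, d):
    """Ghost-cell value at an immersed boundary (1-D linear sketch).
    u_image: fluid value at the image point mirrored across the boundary.
    d: distance from the ghost node to the image point.
    The boundary sits midway between ghost node and image point."""
    if bc_type == "dirichlet":
        # Linear interpolation: (u_ghost + u_image) / 2 = bc_value
        return 2.0 * bc_value - u_image
    elif bc_type == "neumann":
        # Finite difference: (u_image - u_ghost) / d = bc_value
        return u_image - bc_value * d
    raise ValueError(f"unknown boundary condition type: {bc_type}")
```

With a zero Dirichlet value (a no-slip wall) the ghost node mirrors the image value with opposite sign; with a zero Neumann value (adiabatic wall) it simply copies it.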
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and the angles between the beam and the surface normal, and between the detector and the surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations.
International Nuclear Information System (INIS)
Puncher, M.; Birchall, A.; Bull, R. K.
2012-01-01
Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented in the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q(0.025) and Q(0.975) quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)
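The core idea behind weighted likelihood Monte Carlo sampling can be sketched generically: draw parameter samples from the prior, weight each sample by its likelihood given the data, and form weighted posterior expectations. This toy sketch uses a scalar parameter with a uniform prior and a Gaussian log-likelihood; the real WeLMoS method works with biokinetic model parameters, intakes and doses jointly, which is not attempted here.

```python
import math
import random

def weighted_likelihood_mc(prior_sampler, log_likelihood, n=20000):
    """Posterior mean by importance sampling with the prior as proposal:
    E[x | data] ~= sum(w_i * x_i) / sum(w_i),  w_i = L(x_i)."""
    xs = [prior_sampler() for _ in range(n)]
    logw = [log_likelihood(x) for x in xs]
    m = max(logw)                          # stabilise the exponentials
    ws = [math.exp(lw - m) for lw in logw]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

# Toy check: uniform prior on (0, 1), Gaussian likelihood centred at 0.5.
random.seed(1)
post_mean = weighted_likelihood_mc(
    random.random,
    lambda x: -0.5 * ((x - 0.5) / 0.05) ** 2,
)
```

Because every sample is independent, this scheme parallelises trivially and needs no burn-in, which is part of why the abstract reports minutes rather than hours; the price is poor efficiency when the likelihood is much narrower than the prior.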
CSIR Research Space (South Africa)
Wilke, DN
2012-07-01
problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
F. Grigoli; Simone Cesca; Torsten Dahm; L. Krieger
2012-01-01
Determining the relative orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation for several acquisition setups, including linear arrays of geophones deployed in borehole installations or ocean bottom seismometers deployed at the seafloor. To solve this problem we propose a new inversion method based on a complex linear algebra approach. Relative orientation angles are retrieved by minimizing, in a least-squares sense, the l...
Adiabatic passage for a lossy two-level quantum system by a complex time method
International Nuclear Information System (INIS)
Dridi, G; Guérin, S
2012-01-01
Using a complex time method with the formalism of Stokes lines, we establish a generalization of the Davis–Dykhne–Pechukas formula which gives in the adiabatic limit the transition probability of a lossy two-state system driven by an external frequency-chirped pulse-shaped field. The conditions that allow this generalization are derived. We illustrate the result with the dissipative Allen–Eberly and Rosen–Zener models. (paper)
International Nuclear Information System (INIS)
Zhang Huiqun
2009-01-01
By using new coupled Riccati equations, a direct algebraic method, previously applied to obtain exact travelling wave solutions of some complex nonlinear equations, is improved. The exact travelling wave solutions of the complex KdV equation, the Boussinesq equation and the Klein-Gordon equation are investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.
International Nuclear Information System (INIS)
Gerner, C.
2013-01-01
Comprehensive understanding of complex biological processes is the basis for many biomedical issues of great relevance for modern society, including risk assessment, drug development, quality control of industrial products and many more. Screening methods provide means for investigating biological samples without a research hypothesis. However, the first boom of analytical screening efforts has passed, and we again need to ask whether and how to apply screening methods. Mass spectrometry is a modern tool with unrivalled analytical capacities. This applies to all relevant characteristics of analytical methods, such as specificity, sensitivity, accuracy, multiplicity and diversity of applications. Indeed, mass spectrometry qualifies to deal with complexity. Chronic inflammation is a common feature of almost all relevant diseases challenging our modern society; these diseases are apparently highly diverse and include arteriosclerosis, cancer, back pain, neurodegenerative diseases, depression and others. The complexity of the mechanisms regulating chronic inflammation is the reason for the practical challenge of dealing with it. The presentation shall give an overview of the capabilities and limitations of the application of this analytical tool to solve critical questions with great relevance for our society. (author)
Validation of Experimental whole-body SAR Assessment Method in a Complex Indoor Environment
DEFF Research Database (Denmark)
Bamba, Aliou; Joseph, Wout; Vermeeren, Gunter
2012-01-01
Assessing experimentally the whole-body specific absorption rate (SARwb) in a complex indoor environment is very challenging. An experimental method based on room electromagnetics theory (accounting for only the Line-Of-Sight as specular path) to assess the whole-body SAR is validated by numerical...... of the proposed method is that it allows discarding the computation burden because it does not use any discretizations. Results show good agreement between measurement and computation at 2.8 GHz, as long as the plane wave assumption is valid, i.e., at large distances from the transmitter. Relative deviations 0...
Accurate and simple measurement method of complex decay schemes radionuclide activity
International Nuclear Information System (INIS)
Legrand, J.; Clement, C.; Bac, C.
1975-01-01
A simple method for the measurement of activity is described. It consists of using a well-type sodium iodide crystal whose efficiency with monoenergetic photon rays has been computed or measured. For each radionuclide with a complex decay scheme a total efficiency is computed; it is shown that the efficiency is very high, near 100%. The associated uncertainty is low, in spite of the important uncertainties on the different parameters used in the computation. The method has been applied to the measurement of the 152Eu primary reference.
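The reason the total efficiency approaches 100% can be made concrete with a small sketch: in a well-type geometry nearly every cascade photon enters the crystal, and a decay is counted if at least one of its photons is detected. The sketch below assumes independent detection of the photons in each cascade and combines decay branches by their probabilities; the branch data shown are invented, not the 152Eu scheme.

```python
def well_counter_efficiency(branches):
    """Total detection efficiency for a complex decay scheme in a
    well-type detector.  branches: list of (branch_probability,
    [per-photon detection efficiencies in that cascade]).
    A decay is detected if at least one cascade photon is detected."""
    total = 0.0
    for p_branch, photon_effs in branches:
        p_miss = 1.0
        for eff in photon_effs:
            p_miss *= (1.0 - eff)      # every photon escapes undetected
        total += p_branch * (1.0 - p_miss)
    return total

# Hypothetical scheme: one branch emits a two-photon cascade, each photon
# detected with 0.9 efficiency -> the decay is counted with 0.99 efficiency.
eff = well_counter_efficiency([(1.0, [0.9, 0.9])])
```

This multiplicative miss probability is also why the result is insensitive to the individual parameters: even sizeable errors in the per-photon efficiencies barely move a total that is already close to 1.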
Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua
2015-09-01
Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.
A method for the determination of ascorbic acid using the iron(II)-pyridine-dimethylglyoxime complex
International Nuclear Information System (INIS)
Arya, S. P.; Mahajan, M.
1998-01-01
A simple and rapid spectrophotometric method for the determination of ascorbic acid is proposed. Ascorbic acid reduces iron(III) to iron(II), which forms a red-colored complex with dimethylglyoxime in the presence of pyridine. The absorbance of the resulting solution is measured at 514 nm, and a linear relationship between absorbance and concentration of ascorbic acid is observed up to 14 μg ml⁻¹. Studies on the interference of substances usually associated with ascorbic acid have been carried out, and the applicability of the method has been tested by analysing pharmaceutical preparations of vitamin C.
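The analytical workflow behind such a method is an ordinary linear calibration, which can be sketched as follows. The concentration/absorbance pairs below are invented for illustration, not taken from the record above.

```python
# Least-squares calibration line for a Beer-Lambert-type linear response
# (here linear up to ~14 ug/mL, as in the abstract). Data are hypothetical.
conc = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]      # ascorbic acid, ug/mL
absb = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70]  # absorbance at 514 nm

n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(absb) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, absb)) \
        / sum((x - mean_x) ** 2 for x in conc)
intercept = mean_y - slope * mean_x

def conc_from_absorbance(a):
    """Invert the calibration line to estimate an unknown concentration."""
    return (a - intercept) / slope

print(slope, intercept)            # calibration parameters
print(conc_from_absorbance(0.35))  # concentration for a mid-range reading
```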
Chardon, Gilles; Daudet, Laurent
2013-11-01
This paper extends the method of particular solutions (MPS) to the computation of eigenfrequencies and eigenmodes of thin plates, in the framework of the Kirchhoff-Love plate theory. Specific approximation schemes are developed, with plane waves (MPS-PW) or Fourier-Bessel functions (MPS-FB). This framework also requires a suitable formulation of the boundary conditions. Numerical tests, on two plates with various boundary conditions, demonstrate that the proposed approach provides results competitive with standard numerical schemes such as the finite element method, at reduced complexity and with large flexibility in the implementation choices.
Determination of rhenium in ores of complex composition by the kinetic method
Energy Technology Data Exchange (ETDEWEB)
Pavlova, L G; Gurkina, T V [Kazakhskij Gosudarstvennyj Univ., Alma-Ata (USSR); Tsentral'naya Lab. Yuzhno-Kazakhstanskogo Geologicheskogo Upravleniya, Alma-Ata (USSR)]
1979-09-01
A kinetic method for rhenium determination is proposed, based on the catalytic effect of rhenium in the reaction of malachite green with thiourea. The accompanying elements, excluding molybdenum, do not interfere with the rhenium determination at concentrations of up to 0.1 M. The interfering influence of molybdenum can be eliminated by adding tartaric acid to the solution up to a concentration of 0.1 M. This makes it possible to determine rhenium in the presence of a 1000-fold quantity of molybdenum. The method is applicable to the analysis of complex copper-zinc sulphide ores.
[TVM (transvaginal mesh) surgical method for complex resolution of pelvic floor defects].
Adamík, Z
2006-01-01
Assessment of the effects of a new surgical method for complex resolution of pelvic floor defects. Case study. Department of Obstetrics and Gynaecology, Bata Hospital, Zlín. We evaluated the procedures and results of the new TVM (transvaginal mesh) surgical method, which we used in a group of 12 patients. Ten patients had vaginal prolapse following vaginal hysterectomy, and in two cases there was uterine prolapse together with vaginal prolapse. In only one case was there a small protrusion, in the range of 0.5 cm, which we resolved by removal of the penetrated section. The resulting anatomic effect was very good in all cases.
DEFF Research Database (Denmark)
Deyerl, Hans-Jürgen; Plougmann, Nikolai; Jensen, Jesper Bo Damm
2004-01-01
The polarization control method offers a flexible, robust, and low-cost route for the parallel fabrication of gratings with complex apodization profiles, including several discrete phase shifts and chirp. The performance of several test gratings is evaluated in terms of their spectral response... and compared with theoretical predictions. Short gratings with sidelobe-suppression levels in excess of 32 dB and transmission dips lower than 80 dB have been realized. Finally, most of the devices fabricated by the polarization control method show comparable quality to gratings manufactured by far more...
Evaluating polymer degradation with complex mixtures using a simplified surface area method.
Steele, Kandace M; Pelham, Todd; Phalen, Robert N
2017-09-01
Chemical-resistant gloves, designed to protect workers from chemical hazards, are made from a variety of polymer materials such as plastic, rubber, and synthetic rubber. No single material provides protection against all chemicals, so proper polymer selection is critical. Standardized tests, such as chemical degradation tests, are used to aid in the selection process. The current methods of degradation rating, based on changes in weight or tensile properties, can be expensive, and data often do not exist for complex chemical mixtures. Hundreds of thousands of chemical products on the market have no chemical-resistance data to support polymer selection. The method described in this study provides an inexpensive alternative to gravimetric analysis: it uses surface-area change to evaluate degradation of a polymer material. Degradation tests for 5 polymer types against 50 complex mixtures were conducted using both the gravimetric and surface-area methods, and the percent-change data were compared between the two methods. The resulting regression line was y = 0.48x + 0.019, in units of percent, and the Pearson correlation coefficient was r = 0.9537 (p ≤ 0.05), indicating a strong correlation between percent weight change and percent surface-area change. On average, the percent change in surface area was about half that of the weight change. Using this information, an equivalent rating system was developed for determining the chemical degradation of polymer gloves using surface area.
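The statistics involved are a Pearson correlation between the two percent-change measures and an inversion of the reported regression line y = 0.48x + 0.019 (y: surface-area change, x: weight change) to map a surface-area measurement onto the gravimetric rating scale. A sketch with invented paired data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def equivalent_weight_change(area_change):
    # invert the study's reported regression line to express a surface-area
    # measurement as an equivalent percent weight change
    return (area_change - 0.019) / 0.48

# hypothetical paired observations (percent changes)
weight = [0.05, 0.10, 0.20, 0.40, 0.80]
area = [0.043, 0.067, 0.115, 0.211, 0.403]

print(pearson_r(weight, area))
print(equivalent_weight_change(0.259))  # maps back to a weight-type rating
```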
A Method to Predict the Structure and Stability of RNA/RNA Complexes.
Xu, Xiaojun; Chen, Shi-Jie
2016-01-01
RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary-structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free-energy calculations required to treat the conformational constraints and the intermolecular interactions. In this chapter, we describe a recently developed statistical-mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual-bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy of the loops, especially tertiary kissing loops. The method also uses recursive partition-function calculations and a two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.
Risthaus, Tobias; Grimme, Stefan
2013-03-12
A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the large size of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated in that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this, while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too small molecular cases.
PAFit: A Statistical Method for Measuring Preferential Attachment in Temporal Complex Networks.
Directory of Open Access Journals (Sweden)
Thong Pham
Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum-likelihood estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because we based it on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show that this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. What is more, we found that the application of PAFit to a publicly available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman's original method which had evidently gone unnoticed since its publication over a decade ago.
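The core idea of nonparametric attachment-kernel estimation can be illustrated with a toy simulation: grow a network under linear preferential attachment, then estimate A_k as (attachments received by degree-k nodes) divided by (their cumulative exposure). This is a simplified sketch of the general principle, not the PAFit likelihood machinery; all parameters are invented.

```python
import random
random.seed(7)

# Barabási–Albert-style growth, one edge per new node (linear kernel)
T = 2000
deg = [1, 1]          # degrees, starting from a single edge
attach = {}           # k -> times a degree-k node received a new edge
expose = {}           # k -> summed count of degree-k nodes over all steps

for _ in range(T):
    for d in deg:                         # record exposure before attaching
        expose[d] = expose.get(d, 0) + 1
    total = sum(deg)
    r = random.randrange(total)           # choose node with prob. deg/total
    acc = 0
    for i, d in enumerate(deg):
        acc += d
        if r < acc:
            target = i
            break
    attach[deg[target]] = attach.get(deg[target], 0) + 1
    deg[target] += 1
    deg.append(1)                         # the newcomer has degree 1

# nonparametric kernel estimate, normalized so A_1 = 1
A = {k: attach[k] / expose[k] for k in attach}
rel = {k: A[k] / A[1] for k in sorted(A) if k <= 5}
print(rel)   # roughly proportional to k for a linear kernel
```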
International Nuclear Information System (INIS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-01-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion; radial sampling can also be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to its incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can degrade when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation-maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate-gradient-based reconstruction method. (paper)
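The paper's remodeling of EM for complex-valued MRI data is specific to that work; as a generic illustration of an EM-type iterative reconstruction, the classic MLEM multiplicative update is sketched below on a tiny invented nonnegative system y = A x. Starting from a positive guess, the update converges monotonically to the consistent solution.

```python
# Tiny forward model (all entries invented); 3 measurements, 2 unknowns
A = [[1.0, 0.5], [0.3, 1.0], [0.6, 0.6]]
x_true = [2.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(3)]

x = [1.0, 1.0]                       # strictly positive initial estimate
for _ in range(2000):
    # forward projection of the current estimate
    proj = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
    for j in range(2):
        num = sum(A[i][j] * y[i] / proj[i] for i in range(3))
        den = sum(A[i][j] for i in range(3))
        x[j] *= num / den            # multiplicative EM update keeps x >= 0
print(x)                             # approaches x_true on consistent data
```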
International Nuclear Information System (INIS)
Caldarola, L.
1976-01-01
A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' ≤ t). The limitations on the applicability of the method are also discussed. It is concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)
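The cut-set formulation builds on the standard rare-event approximation: the system unavailability is roughly the sum, over minimal cut sets, of the product of component unavailabilities. The paper's integral equations refine this for repairable components; the steady-state sketch below uses invented component names and values.

```python
# component unavailabilities (hypothetical steady-state values)
q = {"pumpA": 0.01, "pumpB": 0.02, "valve": 0.005}

# system fails if every component of any minimal cut set is down
min_cut_sets = [("pumpA", "pumpB"), ("valve",)]

def system_unavailability(q, cut_sets):
    """Rare-event (first-order) approximation: sum of cut-set products."""
    total = 0.0
    for cs in cut_sets:
        prod = 1.0
        for comp in cs:
            prod *= q[comp]
        total += prod
    return total

Q = system_unavailability(q, min_cut_sets)
print(Q)   # 0.01*0.02 + 0.005
```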
International Nuclear Information System (INIS)
Kingsmore, S.F.; Crockard, A.D.; Fay, A.C.; McNeill, T.A.; Roberts, S.D.; Thompson, J.M.
1988-01-01
Several flow cytometric methods for the measurement of circulating immune complexes (CIC) have recently become available. We report a Raji cell flow cytometric assay (FCMA) that uses aggregated human globulin (AHG) as the primary calibrator. Technical advantages of the Raji cell flow cytometric assay are discussed, and its clinical usefulness is evaluated in a method-comparison study with the widely used Raji cell immunoradiometric assay. FCMA is more precise and has greater analytic sensitivity for AHG. Diagnostic sensitivity of the flow cytometric method is superior in systemic lupus erythematosus (SLE), rheumatoid arthritis, and vasculitis patients; diagnostic specificity is similar for both assays, but the reference interval of FCMA is narrower. Significant correlations were found between CIC levels obtained with both methods in SLE, rheumatoid arthritis, and vasculitis patients and in longitudinal studies of two patients with cerebral SLE. The Raji cell FCMA is recommended to clinical laboratories with access to a flow cytometer for the measurement of CIC levels.
A Porosity Method to Describe Complex 3D-Structures Theory and Application to an Explosion
Directory of Open Access Journals (Sweden)
M.-F. Robbe
2006-01-01
A theoretical method was developed to describe the influence of structures of complex shape on a transient fluid flow without meshing the structures. Structures are considered as solid pores inside the fluid and act as obstacles for the flow. The method was specifically adapted to fast transient cases. The porosity method was applied to the simulation of a Hypothetical Core Disruptive Accident in a small-scale replica of a Liquid Metal Fast Breeder Reactor. A 2D axisymmetric simulation of the MARS test was performed with the EUROPLEXUS code. Whereas the central internal structures of the mock-up could be described with a classical shell model, the influence of the 3D peripheral structures was taken into account with the porosity method.
Directory of Open Access Journals (Sweden)
Mohammed Khair E. A. Al-Shwaiyat
2014-12-01
A new approach has been proposed for the simultaneous determination of two reducing agents, based on the dependence of their reaction rate with the 18-molybdo-2-phosphate heteropoly complex on pH. The method was automated using a manifold typical of the sequential analysis method. Ascorbic acid and rutin were determined by successive injection of two samples acidified to different pH. The linear range for rutin determination was 0.6-20 mg/L and the detection limit was 0.2 mg/L (l = 1 cm). The determination of rutin was possible in the presence of up to a 20-fold excess of ascorbic acid. The method was successfully applied to the determination of ascorbic acid and rutin in ascorutin tablets. The applicability of the proposed method to the determination of total polyphenol content in natural plant samples was also shown.
Design Analysis Method for Multidisciplinary Complex Product using SysML
Directory of Open Access Journals (Sweden)
Liu Jihong
2017-01-01
In the design of multidisciplinary complex products, model-based systems engineering methods are widely used. However, these methodologies contain only a modeling order and simple analysis steps, and lack integrated design analysis methods supporting the whole process. To solve this problem, a conceptual design analysis method integrating modern design methods has been proposed. First, based on requirement analysis with a quantization matrix, the user's needs are quantitatively evaluated and translated into system requirements. Then, through function decomposition against the function knowledge base, the total function is semi-automatically decomposed into predefined atomic functions. Each function is matched to a predefined structure through the behaviour layer, using function-structure mapping based on interface matching. Finally, based on the design structure matrix (DSM), the structure reorganization is completed. The analysis process is implemented with SysML and illustrated through an aircraft air-conditioning system for validation.
Complex transformation method and resonances in one-body quantum systems
International Nuclear Information System (INIS)
Sigal, I.M.
1984-01-01
We develop a new spectral deformation method in order to treat the resonance problem in one-body systems. Our result on the meromorphic continuation of matrix elements of the resolvent across the continuous spectrum overlaps considerably with an earlier result of E. Balslev [B] but our method is much simpler and more convenient, we believe, in applications. It is inspired by the local distortion technique of Nuttall-Thomas-Babbitt-Balslev, further developed in [B] but patterned on the complex scaling method of Combes and Balslev. The method is applicable to the multicenter problems in which each potential can be represented, roughly speaking, as a sum of exponentially decaying and dilation-analytic, spherically symmetric parts
Review of analytical methods for the quantification of iodine in complex matrices
Energy Technology Data Exchange (ETDEWEB)
Shelor, C. Phillip [Department of Chemistry and Biochemistry, University of Texas at Arlington, Arlington, TX 76019-0065 (United States); Dasgupta, Purnendu K., E-mail: Dasgupta@uta.edu [Department of Chemistry and Biochemistry, University of Texas at Arlington, Arlington, TX 76019-0065 (United States)
2011-09-19
Highlights: • We focus on iodine in biological samples, notably urine and milk. • Sample preparation and the Sandell-Kolthoff method are extensively discussed. - Abstract: Iodine is an essential element of human nutrition. Nearly a third of the global population has insufficient iodine intake and is at risk of developing Iodine Deficiency Disorders (IDD). Most countries have iodine supplementation and monitoring programs. Urinary iodide (UI) is the biomarker used for epidemiological studies; only a few methods are currently used routinely for analysis. These methods either require expensive instrumentation with qualified personnel (inductively coupled plasma-mass spectrometry, instrumental neutron activation analysis) or oxidative sample digestion to remove potential interferences prior to analysis by a kinetic colorimetric method originally introduced by Sandell and Kolthoff ~75 years ago. The Sandell-Kolthoff (S-K) method is based on the catalytic effect of iodide on the reaction between Ce⁴⁺ and As³⁺. No available technique fully fits the needs of developing countries; research into inexpensive, reliable methods and instrumentation is needed. There have been multiple reviews of methods used for epidemiological studies and specific techniques. However, a general review of iodine determination in a wide-ranging set of complex matrices is not available. While this review is not comprehensive, we cover the principal developments since the original development of the S-K method.
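The kinetic calibration idea behind the Sandell-Kolthoff method can be sketched as follows: the observed pseudo-first-order rate constant of the Ce(IV)/As(III) reaction grows linearly with iodide concentration, so standards define a line k = k0 + s·[I⁻] that is inverted for unknowns. All numerical values below are invented for illustration.

```python
import math

# hypothetical iodide standards and their observed rate constants
std_conc = [0.0, 20.0, 40.0, 80.0]       # iodide, ug/L
std_rate = [0.010, 0.030, 0.050, 0.090]  # pseudo-first-order k, 1/min

n = len(std_conc)
mx = sum(std_conc) / n
my = sum(std_rate) / n
s = sum((x - mx) * (y - my) for x, y in zip(std_conc, std_rate)) \
    / sum((x - mx) ** 2 for x in std_conc)
k0 = my - s * mx                          # rate of the uncatalyzed reaction

def iodide_from_rate(k):
    """Invert the kinetic calibration line."""
    return (k - k0) / s

# k for an unknown comes from the decay of the Ce(IV) absorbance:
# k = ln(A0 / A(t)) / t (values invented)
A0, At, t = 0.80, 0.40, 11.55
k_unknown = math.log(A0 / At) / t
print(iodide_from_rate(k_unknown))        # estimated iodide, ug/L
```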
A Systematic Optimization Design Method for Complex Mechatronic Products Design and Development
Directory of Open Access Journals (Sweden)
Jie Jiang
2018-01-01
Designing a complex mechatronic product involves multiple design variables, objectives, constraints, and evaluation criteria, as well as their nonlinearly coupled relationships. The design space can be very big, consisting of many functional design parameters, structural design parameters, and behavioral design (or running performance) parameters. Given a big design space and inexplicit relations among the parameters, how to design a product optimally in an optimization design process is a challenging research problem. In this paper, we propose a systematic optimization design method based on design space reduction and surrogate modelling techniques. This method firstly identifies key design parameters from a very big design space to reduce the design space, secondly uses the identified key design parameters to establish a system surrogate model based on data-driven modelling principles for optimization design, and thirdly utilizes multiobjective optimization techniques to achieve an optimal design of the product in the reduced design space. The method has been tested with a high-speed train design. In comparison with others, the research results show that this method is practical and useful for optimally designing complex mechatronic products.
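The surrogate-modelling step can be illustrated in one dimension: sample an "expensive" objective at a few design points, fit a cheap quadratic surrogate, and minimize the surrogate instead of the true model. The objective, sample points, and single-variable setting are invented for illustration; the paper's data-driven system surrogate is far more elaborate.

```python
def expensive_objective(x):          # stand-in for a costly simulation run
    return (x - 2.5) ** 2 + 1.0

samples = [0.0, 2.0, 5.0]            # sampled design points
values = [expensive_objective(x) for x in samples]

# fit y = a*x^2 + b*x + c exactly through the three samples
# (3x3 linear solve by Gaussian elimination with partial pivoting)
A = [[x * x, x, 1.0] for x in samples]
y = values[:]
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    y[col], y[piv] = y[piv], y[col]
    for r in range(col + 1, 3):
        m = A[r][col] / A[col][col]
        for c in range(col, 3):
            A[r][c] -= m * A[col][c]
        y[r] -= m * y[col]
coef = [0.0, 0.0, 0.0]
for r in (2, 1, 0):                  # back substitution
    coef[r] = (y[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
a, b, c = coef

x_star = -b / (2 * a)                # minimizer of the quadratic surrogate
print(x_star, expensive_objective(x_star))
```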
Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions
International Nuclear Information System (INIS)
Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor
2004-01-01
A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)
Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth
2014-05-10
There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.
A Comparison of Multidimensional Item Selection Methods in Simple and Complex Test Designs
Directory of Open Access Journals (Sweden)
Eren Halil ÖZBERK
2017-03-01
In contrast with previous studies, this study employed various test designs (simple and complex) which allow the evaluation of overall ability score estimation across multiple test conditions. Four factors were manipulated, namely the test design, the number of items per dimension, the correlation between dimensions, and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated using the M3PL compensatory multidimensional IRT model with specified correlations. MCAT composite ability score accuracy was evaluated using the absolute bias (ABSBIAS), the correlation and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension and the correlation between dimensions had significant effects on the item selection methods for overall score estimation. For the simple-structure test design, the V1 item selection method had the lowest absolute bias for both long and short tests when estimating overall scores. As the model gets more complex, the KL item selection method performed better than the other two item selection methods.
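The three accuracy criteria named in the abstract are straightforward to compute from true and estimated ability scores. A sketch with invented scores (here ABSBIAS is taken as the mean absolute difference, one common definition):

```python
def abs_bias(true, est):
    return sum(abs(e - t) for t, e in zip(true, est)) / len(true)

def rmse(true, est):
    return (sum((e - t) ** 2 for t, e in zip(true, est)) / len(true)) ** 0.5

def correlation(true, est):
    n = len(true)
    mt, me = sum(true) / n, sum(est) / n
    cov = sum((t - mt) * (e - me) for t, e in zip(true, est))
    vt = sum((t - mt) ** 2 for t in true)
    ve = sum((e - me) ** 2 for e in est)
    return cov / (vt * ve) ** 0.5

# hypothetical true vs. estimated ability scores
theta_true = [-1.0, 0.0, 1.0, 2.0]
theta_est = [-0.9, 0.1, 1.1, 2.1]

print(abs_bias(theta_true, theta_est),
      rmse(theta_true, theta_est),
      correlation(theta_true, theta_est))
```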
Impact of a Modified Jigsaw Method for Learning an Unfamiliar, Complex Topic
Directory of Open Access Journals (Sweden)
Denise Kolanczyk
2017-09-01
Objective: The aim of this study was to use the jigsaw method with an unfamiliar, complex topic and to evaluate the effectiveness of the jigsaw teaching method on student learning of assigned material ("jigsaw expert") versus non-assigned material ("jigsaw learner"). Innovation: The innovation was implemented in an advanced cardiology elective. Forty students were assigned a pre-reading and one of four valvular heart disorders, a topic not previously taught in the curriculum. A pre-test and post-test evaluated overall student learning. Student performance on the pre/post tests as "jigsaw expert" and "jigsaw learner" was also compared. Critical Analysis: Overall, the post-test mean score of 85.75% was significantly higher than the pre-test score of 56.75% (p<0.05). There was significant improvement in scores regardless of whether the material was assigned ("jigsaw experts": pre = 58.8% and post = 82.5%; p<0.05) or not assigned ("jigsaw learners": pre = 56.25% and post = 86.56%; p<0.05). Next Steps: The use of the jigsaw method to teach unfamiliar, complex content helps students to become both teachers and active listeners, which are essential to the skills and professionalism of a health care provider. Further studies are needed to evaluate the use of the jigsaw method to teach unfamiliar, complex content on long-term retention and to further examine the effects of expert vs. non-expert roles. Conflict of Interest: We declare no conflicts of interest or financial interests that the authors or members of their immediate families have in any product or service discussed in the manuscript, including grants (pending or received), employment, gifts, stock holdings or options, honoraria, consultancies, expert testimony, patents and royalties. Type: Note
International Nuclear Information System (INIS)
Zhao, Huaying; Schuck, Peter
2015-01-01
Global multi-method analysis for protein interactions (GMMA) can increase the precision and complexity of binding studies for the determination of the stoichiometry, affinity and cooperativity of multi-site interactions. The principles and recent developments of biophysical solution methods implemented for GMMA in the software SEDPHAT are reviewed, their complementarity in GMMA is described, and a new GMMA simulation tool set in SEDPHAT is presented. Reversible macromolecular interactions are ubiquitous in signal transduction pathways, often forming dynamic multi-protein complexes with three or more components. Multivalent binding and cooperativity in these complexes are often key motifs of their biological mechanisms. Traditional solution biophysical techniques for characterizing binding and cooperativity are very limited in the number of states that can be resolved. A global multi-method analysis (GMMA) approach has recently been introduced that can leverage the strengths and the different observables of different techniques to improve the accuracy of the resulting binding parameters and to facilitate the study of multi-component systems and multi-site interactions. Here, GMMA is described in the software SEDPHAT for the analysis of data from isothermal titration calorimetry, surface plasmon resonance or other biosensing, analytical ultracentrifugation, fluorescence anisotropy and various other spectroscopic and thermodynamic techniques. The basic principles of these techniques are reviewed and recent advances in view of their particular strengths in the context of GMMA are described. Furthermore, a new feature in SEDPHAT is introduced for the simulation of multi-method data. In combination with specific statistical tools for GMMA in SEDPHAT, simulations can be a valuable step in experimental design.
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical substructures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical substructure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior, including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by coupling a nonlinear computational model of a moment frame to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
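For reference, the implicit Newmark scheme mentioned above reduces, for a linear undamped SDOF oscillator with average acceleration parameters (β = 1/4, γ = 1/2), to a non-iterative update; a minimal sketch with invented parameters is shown below. For this linear case the scheme conserves the total energy exactly, which is one reason it is a common baseline.

```python
# Linear, undamped, unforced SDOF oscillator (parameters invented)
m, k = 1.0, 4.0                  # mass and stiffness (omega = 2 rad/s)
beta, gamma = 0.25, 0.5          # average acceleration (trapezoidal) Newmark
dt, steps = 0.01, 1000

u, v = 1.0, 0.0                  # initial displacement and velocity
a = -k * u / m                   # initial acceleration from m*a + k*u = 0
E0 = 0.5 * k * u * u + 0.5 * m * v * v

keff = m / (beta * dt * dt) + k  # effective stiffness (no damping, no load)
for _ in range(steps):
    rhs = m * (u / (beta * dt * dt) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
    u_new = rhs / keff
    a_new = (u_new - u - dt * v) / (beta * dt * dt) - (0.5 / beta - 1.0) * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

E = 0.5 * k * u * u + 0.5 * m * v * v
print(E0, E)   # energy is conserved by the average acceleration scheme
```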
Simulating Engineering Flows through Complex Porous Media via the Lattice Boltzmann Method
Directory of Open Access Journals (Sweden)
Vesselin Krassimirov Krastev
2018-03-01
In this paper, recent achievements in the application of the lattice Boltzmann method (LBM) to complex fluid flows are reported. More specifically, we focus on flows through reactive porous media, such as the flow through the substrate of a selective catalytic reactor (SCR) for the reduction of gaseous pollutants in the automotive field; pulsed-flow analysis through heterogeneous catalyst architectures; and transport and electro-chemical phenomena in microbial fuel cells (MFC) for novel waste-to-energy applications. To the authors' knowledge, this is the first known application of LBM modeling to the study of MFCs, which represents by itself a highly innovative and challenging research area. The results discussed here essentially confirm the capabilities of the LBM approach as a flexible and accurate computational tool for the simulation of complex multi-physics phenomena of scientific and technological interest, across physical scales.
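The basic LBM cycle that such porous-media solvers build on is collide, stream, bounce back at solid nodes. The sketch below is a minimal single-phase D2Q9 BGK example with periodic boundaries and a solid block standing in for a porous obstacle; grid size, relaxation time, and initial flow are invented, and none of the reactive multi-physics the paper reviews is included. A characteristic property checked here is exact mass conservation.

```python
import math

# D2Q9 lattice: velocities, weights, and opposite directions for bounce-back
cx = [0, 1, 0, -1, 0, 1, -1, -1, 1]
cy = [0, 0, 1, 0, -1, 1, 1, -1, -1]
w = [4 / 9] + [1 / 9] * 4 + [1 / 36] * 4
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

NX, NY, tau = 16, 8, 0.8
solid = [[4 <= x < 8 and 2 <= y < 6 for y in range(NY)] for x in range(NX)]

def feq(rho, ux, uy):
    out = []
    for i in range(9):
        cu = cx[i] * ux + cy[i] * uy
        out.append(w[i] * rho * (1 + 3 * cu + 4.5 * cu * cu
                                 - 1.5 * (ux * ux + uy * uy)))
    return out

# start from a gentle sinusoidal shear flow in the fluid, rest in the solid
f = [[feq(1.0, 0.0 if solid[x][y] else 0.05 * math.sin(2 * math.pi * y / NY), 0.0)
      for y in range(NY)] for x in range(NX)]
mass0 = sum(sum(sum(f[x][y]) for y in range(NY)) for x in range(NX))

for _ in range(50):
    # BGK collision in fluid nodes
    for x in range(NX):
        for y in range(NY):
            if solid[x][y]:
                continue
            fi = f[x][y]
            rho = sum(fi)
            ux = sum(fi[i] * cx[i] for i in range(9)) / rho
            uy = sum(fi[i] * cy[i] for i in range(9)) / rho
            fe = feq(rho, ux, uy)
            for i in range(9):
                fi[i] += (fe[i] - fi[i]) / tau
    # streaming with periodic wrap-around
    g = [[[0.0] * 9 for _ in range(NY)] for _ in range(NX)]
    for x in range(NX):
        for y in range(NY):
            for i in range(9):
                g[(x + cx[i]) % NX][(y + cy[i]) % NY][i] = f[x][y][i]
    # full-way bounce-back: reverse all populations sitting on solid nodes
    for x in range(NX):
        for y in range(NY):
            if solid[x][y]:
                g[x][y] = [g[x][y][opp[i]] for i in range(9)]
    f = g

mass = sum(sum(sum(f[x][y]) for y in range(NY)) for x in range(NX))
print(mass0, mass)   # collide, stream and bounce-back all conserve mass
```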
International Nuclear Information System (INIS)
Lay-Ekuakille, Aimé; Pariset, Carlo; Trotta, Amerigo
2010-01-01
The FDM (filter diagonalization method), a technique used in nuclear magnetic resonance data processing to tackle FFT (fast Fourier transform) limitations, can be applied by considering pipelines, especially complex configurations, as a vascular apparatus with arteries, veins, capillaries, etc. Thrombosis, which might occur in humans, can be considered the analogue of a leakage in the complex pipeline, the human vascular apparatus. The choice of eigenvalues in FDM or in spectra-based techniques is a key issue in recovering the solution of the main equation (for FDM) or the frequency-domain transformation (for FFT), and hence determines the accuracy in detecting leaks in pipelines. This paper deals with the possibility of improving the leak detection accuracy of the FDM technique by means of a robust algorithm that addresses the problem of eigenvalues, making it less experimental and more analytical using Tikhonov-based regularization techniques. The paper starts from the results of previous experimental procedures carried out by the authors.
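The Tikhonov idea invoked above can be sketched in its simplest form: for an ill-conditioned linear model A x ≈ b, solving the damped normal equations (AᵀA + λI) x = Aᵀb suppresses the wild solutions that plain least squares produces. The matrix and data below are invented and unrelated to the paper's eigenvalue problem.

```python
# nearly collinear columns make the least-squares problem ill-conditioned
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.01, 1.99, 2.02]       # noisy observations of the model with x = (1, 1)

def solve_tikhonov(A, b, lam):
    """Solve (A^T A + lam*I) x = A^T b for 2 parameters via Cramer's rule."""
    ata = [[sum(A[i][r] * A[i][c] for i in range(len(A)))
            + (lam if r == c else 0.0) for c in range(2)] for r in range(2)]
    atb = [sum(A[i][r] * b[i] for i in range(len(A))) for r in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det
    x1 = (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det
    return [x0, x1]

x_ls = solve_tikhonov(A, b, 0.0)    # ordinary least squares: blows up
x_reg = solve_tikhonov(A, b, 0.1)   # regularized: stays near the true (1, 1)
print(x_ls, x_reg)
```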
Deffuant, Guillaume
2011-01-01
One common characteristic of a complex system is its ability to withstand major disturbances and to rebuild itself. Understanding how such systems demonstrate resilience by absorbing or recovering from major external perturbations requires both quantitative foundations and a multidisciplinary view of the topic. This book demonstrates how new methods can be used to identify the actions favouring recovery from perturbations on a variety of examples, including the dynamics of bacterial biofilms, grassland savannahs, language competition and Internet social networking sites. The reader is taken through an introduction to the idea of resilience and viability and shown the mathematical basis of the techniques used to analyse systems. The idea of individual or agent-based modelling of complex systems is introduced and related to analytically tractable approximations of such models. A set of case studies illustrates the use of the techniques in real applications, and the final section describes how on...
Reinhardt, Katja; Samimi, Cyrus
2018-01-01
While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still contains large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods combining machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia, with a special focus on applying new hybrid methods to the spatial interpolation of wind data. It is the first study evaluating and comparing the performance of several of these hybrid methods. The overall aim is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical (ordinary kriging) interpolation methods were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive models, support vector machines and neural networks, both as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and
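As an illustration of the simplest deterministic method compared in the study, inverse distance weighting can be sketched as follows; the station coordinates and wind values are hypothetical, not the study's data.

```python
import math

def idw(stations, query, power=2.0):
    """Inverse distance weighting: the estimate at `query` is a weighted
    mean of station values, with weights 1 / distance**power."""
    num = den = 0.0
    for (x, y), value in stations:
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return value          # query coincides with a station
        w = d ** -power
        num += w * value
        den += w
    return num / den

# Hypothetical u-wind observations at four stations (coordinates in km).
stations = [((0.0, 0.0), 2.0), ((10.0, 0.0), 4.0),
            ((0.0, 10.0), 3.0), ((10.0, 10.0), 5.0)]
print(idw(stations, (5.0, 5.0)))  # equidistant point: plain mean, 3.5
```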
Fast methods for long-range interactions in complex systems. Lecture notes
International Nuclear Information System (INIS)
Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas
2011-01-01
Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. astrophysics, statistical physics, plasma physics, materials science, physical chemistry and biophysics. The present summer school, funded by the German Heraeus Foundation, took place at the Juelich Supercomputing Centre from 6-10 September 2010. The focus was on providing an introduction to and overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The lecture notes contain an introduction to particle simulation, as well as five different fast methods: the fast multipole method, the Barnes-Hut tree method, multigrid, FFT-based methods, and fast summation using the non-equidistant FFT. In addition to introducing the methods, their efficient parallelization is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC), which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)
A digital processing method for the analysis of complex nuclear spectra
International Nuclear Information System (INIS)
Madan, V.K.; Abani, M.C.; Bairi, B.R.
1994-01-01
This paper describes a digital processing method using frequency power spectra for the analysis of complex nuclear spectra. The power spectra were estimated by employing a modified discrete Fourier transform. The method was applied to observed spectral envelopes. The results for separating closely spaced doublets in nuclear spectra of low statistical precision compared favorably with those obtained using the popular peak-fitting program SAMPO. The paper also describes the limitations of peak-fitting methods and the advantages of digital processing techniques for type II digital signals, including nuclear spectra. A compact computer program occupying less than 2.5 kbytes of memory was written in BASIC for the processing of observed spectral envelopes. (orig.)
New method for rekindling the nonlinear solitary waves in Maxwellian complex space plasma
Das, G. C.; Sarma, Ridip
2018-04-01
Our interest is to study nonlinear wave phenomena in complex plasma constituents with Maxwellian electrons and ions. The main reason for this consideration is to exhibit the effects of dust-charge fluctuations on acoustic modes, evaluated by the use of a new method. A special (G'/G) expansion method has been developed to yield the coherent features of the nonlinear waves, augmented through the derivation of a Korteweg-de Vries equation, and the different natures of the solitons recognized in space plasmas were found successfully. Evolutions are shown with input of appropriate typical plasma parameters to support our theoretical observations in space plasmas. All conclusions are in good accordance with actual occurrences and could be of interest for further investigations in experiments and satellite observations in space. In this paper, we present not only a model that exhibits nonlinear solitary wave propagation but also a new mathematical method for its execution.
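For reference, the standard Korteweg-de Vries equation and its one-soliton solution are (textbook form; the coefficients in the paper's dusty-plasma derivation will differ):

```latex
% Standard KdV equation and its one-soliton solution of speed c.
\[
  \frac{\partial u}{\partial t} + 6\,u\,\frac{\partial u}{\partial x}
  + \frac{\partial^{3} u}{\partial x^{3}} = 0,
  \qquad
  u(x,t) = \frac{c}{2}\,
  \operatorname{sech}^{2}\!\left[\frac{\sqrt{c}}{2}\,(x - c t)\right].
\]
```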
Application of a non-contiguous grid generation method to complex configurations
International Nuclear Information System (INIS)
Chen, S.; McIlwain, S.; Khalid, M.
2003-01-01
An economical non-contiguous grid generation method was developed to efficiently generate structured grids for complex 3D problems. Compared with traditional contiguous grids, this new approach generated grids for different block clusters independently and was able to distribute the grid points more economically according to the user's specific topology design. The method was evaluated by applying it to a Navier-Stokes computation of flow past a hypersonic projectile. Both the flow velocity and the heat transfer characteristics of the projectile agreed qualitatively with other numerical data in the literature and with available field data. Detailed grid topology designs for 3D geometries were addressed, and the advantages of this approach were analysed and compared with traditional contiguous grid generation methods. (author)
A New Method to Develop Human Dental Pulp Cells and Platelet-rich Fibrin Complex.
He, Xuan; Chen, Wen-Xia; Ban, Guifei; Wei, Wei; Zhou, Jun; Chen, Wen-Jin; Li, Xian-Yu
2016-11-01
Platelet-rich fibrin (PRF) has been used as a scaffold material in various tissue regeneration studies. In previous methods of combining seed cells with PRF, the structure of the PRF was damaged, and the manipulation time in vitro was increased. The objective of this in vitro study was to explore an appropriate method of developing a PRF-human dental pulp cell (hDPC) complex that maintains PRF structure integrity, and to find the most efficient part of the PRF. The PRF-hDPC complex was developed at three different time points during PRF preparation: (1) the before centrifugation (BC) group, in which the hDPC suspension was added to the venous blood before blood centrifugation; (2) the immediately after centrifugation (IAC) group, in which the hDPC suspension was added immediately after blood centrifugation; (3) the after centrifugation (AC) group, in which the hDPC suspension was added 10 minutes after blood centrifugation; and (4) the control group, PRF without hDPC suspension. The prepared PRF-hDPC complexes were cultured for 7 days. The samples were fixed for histologic, immunohistochemistry, and scanning electron microscopic evaluation. Real-time polymerase chain reaction was performed to evaluate messenger RNA expression of alkaline phosphatase and dentin sialophosphoprotein. Enzyme-linked immunosorbent assay quantification of growth factors was performed within the different parts of the PRF. Histologic, immunohistochemistry, and scanning electron microscopic results revealed that hDPCs were found only in the BC group and exhibited favorable proliferation. Real-time polymerase chain reaction revealed that alkaline phosphatase and dentin sialophosphoprotein expression increased in the cultured PRF-hDPC complex. The lower part of the PRF released the maximum quantity of growth factors. Our new method of developing a PRF-hDPC complex maintained PRF structure integrity. The hDPCs were distributed in the buffy coat, which might be the most efficient part of the PRF.
Identifying influential spreaders in complex networks based on kshell hybrid method
Namtirtha, Amrita; Dutta, Animesh; Dutta, Biswanath
2018-06-01
Influential spreaders are the key players in maximizing or controlling spreading in a complex network. Identifying influential spreaders using the kshell decomposition method has become very popular in recent years. In the literature, the core nodes, i.e. those with the largest kshell index of a network, are considered the most influential spreaders. We have studied the kshell method and the spreading dynamics of nodes using the Susceptible-Infected-Recovered (SIR) epidemic model to understand the behavior of influential spreaders in terms of their topological location in the network. From the study, we have found that not every node in the core area is a most influential spreader; even a strategically placed lower-shell node can be one. Moreover, the core area can also be situated at the periphery of the network. The existing indexing methods are designed to identify the most influential spreaders only among core nodes, not among lower shells. In this work, we propose a kshell hybrid method to identify highly influential spreaders not only in the core but also in lower shells. The proposed method combines parameters such as kshell power, node degree, contact distance, and several levels of neighbors' influence potential. The proposed method is evaluated on nine real-world network datasets. In terms of spreading dynamics, the experimental results show the superiority of the proposed method over existing indexing methods such as the kshell method, neighborhood coreness centrality, and mixed degree decomposition. Furthermore, the proposed method can also be applied to large-scale networks by considering three levels of neighbors' influence potential.
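The kshell (k-core) decomposition underlying the proposed hybrid method can be sketched as follows; the toy graph is invented for illustration.

```python
def kshell(adj):
    """Iteratively peel minimum-degree nodes; a node's shell index is the
    core number at which it is removed (standard k-core decomposition)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    shell, k = {}, 0
    while alive:
        k = max(k, min(deg[v] for v in alive))
        peel = [v for v in alive if deg[v] <= k]
        while peel:
            v = peel.pop()
            if v not in alive:
                continue
            shell[v] = k
            alive.discard(v)
            for u in adj[v]:
                if u in alive:
                    deg[u] -= 1
                    if deg[u] <= k:
                        peel.append(u)
    return shell

# Toy graph: a triangle (2-shell) with one pendant node (1-shell).
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(kshell(adj))  # node 4 -> shell 1; nodes 1, 2, 3 -> shell 2
```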
Advances in complexity of beam halo-chaos and its control methods for beam transport networks
International Nuclear Information System (INIS)
Fang Jinqing
2004-11-01
The complexity theory of beam halo-chaos in beam transport networks and its control methods, a new subject in the high-tech field, are discussed. In recent years there has been growing interest in high-power proton beams from linear accelerators due to their attractive features for possible breakthrough applications in national defense and industry. In particular, high-current accelerator-driven clean nuclear power systems for various applications as energy resources have become one of the central issues in current research, because they promise a safer, cleaner and cheaper nuclear energy resource. However, halo-chaos in high-current beam transport networks has become a key issue of concern, because it can generate excessive radioactivity and therefore significantly limits applications. It is very important to study the complexity properties of beam halo-chaos, to understand the basic physical mechanisms of halo-chaos formation, and to develop effective control methods for its suppression. These are very challenging subjects for current research. The main research advances on these subjects, including experimental investigation and theoretical research, and especially some very efficient control methods developed through many years of effort by the authors, are reviewed and summarized. Finally, some research outlooks are given. (author)
Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods
Directory of Open Access Journals (Sweden)
Leandro de Jesus Benevides
Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how the various apo E proteins are related across groups of organisms and whether they evolved from a common ancestor. Here, we aimed to perform a phylogenetic study of apo E-carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), plus a single node with the only sequence available for an amphibian. The accordance between the results of the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups.
A method of reconstructing complex stratigraphic surfaces with multitype fault constraints
Deng, Shi-Wu; Jia, Yu; Yao, Xing-Miao; Liu, Zhi-Ning
2017-06-01
The construction of complex stratigraphic surfaces is widely employed in many fields, such as petroleum exploration, geological modeling, and geological structure analysis. It also serves as an important foundation for data visualization and visual analysis in these fields. Existing surface construction methods have several deficiencies and face various difficulties, such as the presence of multiple types of faults and the roughness of the resulting surfaces. In this paper, a surface modeling method based on geometric partial differential equations (PDEs) is introduced for the construction of stratigraphic surfaces. It effectively solves the problem of surface roughness caused by the irregularity of stratigraphic data distribution. To cope with the presence of multiple types of complex faults, a two-way projection algorithm between three-dimensional space and a two-dimensional plane is proposed. Using this algorithm, a unified method based on geometric PDEs is developed for dealing with multiple fault types. Moreover, the corresponding geometric PDE is derived, and an algorithm based on an evolutionary solution is developed. Tests of the proposed algorithm on real data verify its computational efficiency and its ability to handle irregular data distributions. In particular, it can reconstruct faulted surfaces, especially those with overthrust faults.
Minenkov, Yury
2018-05-22
A series of semiempirical PM6* and PM7 methods has been tested in reproducing the relative conformational energies of 27 realistic-size complexes of 16 different transition metals (TMs). An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between the semiempirical and DFT methods, indicating a fundamental difference in the potential energy surfaces (PES). To identify the origin of the deviation, we compared fully optimized PM7 and respective DFT conformers. For many complexes, the differences in PM7 and DFT conformational energies were confirmed, often manifesting themselves in false coordination of some atoms (H, O) to the TMs and in chemical transformations or distortions of the coordination-center geometry in the PM7 structures. Although geometry optimization with a fixed coordination-center geometry leads to some improvement in conformational energies, the resulting accuracy is still too low to recommend the explored semiempirical methods for out-of-the-box conformational search/sampling: careful testing is always needed.
International Nuclear Information System (INIS)
Menezes, Filipe; Fedorov, Alexander; Baleizão, Carlos; Berberan-Santos, Mário N; Valeur, Bernard
2013-01-01
Ensemble fluorescence decays are usually analyzed with a sum of exponentials. However, broad continuous distributions of lifetimes, either unimodal or multimodal, occur in many situations. A simple and flexible fitting function for these cases that encompasses the exponential is the Becquerel function. In this work, the applicability of the Becquerel function for the analysis of complex decays of several kinds is tested. For this purpose, decays of mixtures of four different fluorescence standards (binary, ternary and quaternary mixtures) are measured and analyzed. For binary and ternary mixtures, the expected sum of narrow distributions is well recovered from the Becquerel functions analysis, if the correct number of components is used. For ternary mixtures, however, satisfactory fits are also obtained with a number of Becquerel functions smaller than the true number of fluorophores in the mixture, at the expense of broadening the lifetime distributions of the fictitious components. The quaternary mixture studied is well fitted with both a sum of three exponentials and a sum of two Becquerel functions, showing the inevitable loss of information when the number of components is large. Decays of a fluorophore in a heterogeneous environment, known to be represented by unimodal and broad continuous distributions (as previously obtained by the maximum entropy method), are also measured and analyzed. It is concluded that these distributions can be recovered by the Becquerel function method with an accuracy similar to that of the much more complex maximum entropy method. It is also shown that the polar (or phasor) plot is not always helpful for ascertaining the degree (and kind) of complexity of a fluorescence decay. (paper)
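A minimal sketch of the Becquerel ("compressed hyperbola") decay function, assuming the common parameterization with lifetime tau and shape parameter b; it reduces to a single exponential as b approaches 1. The parameter values below are invented for illustration.

```python
import math

def becquerel(t, tau, b, i0=1.0):
    """Becquerel ('compressed hyperbola') decay law:
    I(t) = I0 * [1 + (1 - b) * t / tau] ** (-1 / (1 - b)),
    which tends to the single exponential I0 * exp(-t / tau) as b -> 1."""
    if abs(1.0 - b) < 1e-12:
        return i0 * math.exp(-t / tau)
    return i0 * (1.0 + (1.0 - b) * t / tau) ** (-1.0 / (1.0 - b))

t, tau = 2.0, 1.5
print(becquerel(t, tau, b=0.999))  # close to exp(-t/tau)
print(becquerel(t, tau, b=0.5))    # slower-than-exponential tail
```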
Calculation of seismic response of a flexible rotor by complex modal method, 1
International Nuclear Information System (INIS)
Azuma, Takao; Saito, Shinobu
1984-01-01
For rotary machines during earthquakes, the key concerns are whether the rotating and stationary parts touch and whether the bearings and seals are damaged. To examine these problems, it is necessary to analyze the seismic response of the rotor shaft and sometimes of the casing system, but conventional analysis methods are unsatisfactory. For a general shaft system supported on slide bearings and subject to gyroscopic effects, the complex modal method must be used. This calculation method is explained in detail in Lancaster's book; however, when it is applied to the seismic response of rotor shafts, the computation time varies considerably with the method of final integration. In this study, good results were obtained with a method that does not rely on numerical integration. The equation of motion and its solution, the displacement vector of the foundation, the verification of the calculation program, and an example calculating the seismic response of two coupled rotor shafts are reported. (Kako, I.)
International Nuclear Information System (INIS)
Yazykov, A.S.; Telichko, F.F.
1989-01-01
A retrospective evaluation of the total number of X-ray procedures and the radiation dose in 310 patients with chronic kidney diseases is given. It is ascertained that only an account of the integral absorbed dose in the organ tissues, including the doses from X-ray examinations of other organs over the patient's lifetime, can serve as the main basis for developing well-grounded recommendations concerning a rational complex of examination methods in the prophylactic examination of patients with chronic kidney disease. 9 refs.; 4 figs
Continuum level density of a coupled-channel system in the complex scaling method
International Nuclear Information System (INIS)
Suzuki, Ryusuke; Kato, Kiyoshi; Kruppa, Andras; Giraud, Bertrand G.
2008-01-01
We study the continuum level density (CLD) in the formalism of the complex scaling method (CSM) for coupled-channel systems. We apply the formalism to the ⁴He = [³H+p] + [³He+n] coupled-channel cluster model, where there are resonances at low energy. Numerical calculations of the CLD in the CSM with a finite number of L² basis functions are consistent with the exact result calculated from the S-matrix by solving the coupled-channel equations. We also study channel densities. In this framework, the extended completeness relation (ECR) plays an important role. (author)
CSIR Research Space (South Africa)
Moolman, FS
2006-10-01
A novel encapsulation method for probiotics using an interpolymer complex in supercritical carbon dioxide. F.S. Moolman, P.W. Labuschagne, M.S. Thantsha, T.L. van der Merwe, H. Rolfes and T. Cloete. Presented at the workshop on Bioencapsulation, Lausanne, CH, Oct. 6-7, 2006. 1. Introduction: Evidence for the health benefits of probiotics is increasing. These benefits include protection against pathogenic bacteria, stimulation of the immune system, reduction in carcinogenesis, vitamin production and degradation...
International Nuclear Information System (INIS)
Mazurek, M.A.; Hildemann, L.M.; Simoneit, B.R.T.
1990-10-01
Organic aerosols comprise approximately 30% by mass of the total fine particulate matter present in urban atmospheres. The chemical composition of such aerosols is complex and reflects input from multiple sources of primary emissions to the atmosphere, as well as from secondary production of carbonaceous aerosol species via photochemical reactions. To identify discrete sources of fine carbonaceous particles in urban atmospheres, analytical methods must reconcile both bulk chemical and molecular properties of the total carbonaceous aerosol fraction. This paper presents an overview of the analytical protocol developed and used in a study of the major sources of fine carbon particles emitted to an urban atmosphere. 23 refs., 1 fig., 2 tabs
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness of fit of the approximative boundary to the study problem boundary. © 1984.
A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things
Wang, Yongheng; Cao, Kening
2014-01-01
The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is design...
International Nuclear Information System (INIS)
Chalabreysse, Jacques.
1978-05-01
Thirteen years of experience gained from the daily radiotoxicological supervision of personnel at the PIERRELATTE industrial complex are presented. The study is divided into two parts. Part one is theoretical: a bibliographical synthesis of scattered documents and publications, making a homogeneous survey of the literature on the subject available. Part two reviews the experience gained in the professional environment: laboratory measurements and analyses (development of methods and daily applications); mathematical formulae to answer the first questions which arise concerning an individual liable to have been contaminated; and results obtained at PIERRELATTE [fr]
SACS2: Dynamic and Formal Safety Analysis Method for Complex Safety Critical System
International Nuclear Information System (INIS)
Koh, Kwang Yong; Seong, Poong Hyun
2009-01-01
Fault tree analysis (FTA) is one of the most widely used safety analysis techniques in the development of safety-critical systems. However, over the years, several drawbacks of conventional FTA have become apparent. One major drawback is that conventional FTA uses only static gates and hence cannot capture the dynamic behaviors of complex systems precisely. Although several attempts, such as dynamic fault trees (DFT), PANDORA, formal fault trees (FFT) and so on, have been made to overcome this problem, they still cannot model absolute (actual) time, because they adopt a relative-time concept and can capture only sequential behaviors of the system. A second drawback of conventional FTA is its lack of rigorous semantics: because it is informal in nature, safety analysis results depend heavily on the analyst's ability and are error-prone. Finally, the reasoning process that checks whether basic events really cause top events is done manually and hence is very labor-intensive and time-consuming for complex systems. In this paper, we propose a new qualitative safety analysis method for complex safety-critical systems. We introduce several temporal gates based on timed computation tree logic (TCTL), which can represent a quantitative notion of time. We then translate the information in the fault trees into the UPPAAL query language, and the reasoning process is performed automatically by UPPAAL, a model checker for time-critical systems
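Conventional static-gate FTA, the baseline the proposed method extends, can be sketched as a logical evaluation over basic-event states; the event names and tree below are hypothetical, not from the paper.

```python
# Minimal static fault tree: AND/OR gates composed over basic-event states.
# This is the conventional, purely logical FTA that the temporal
# (TCTL-based) gates of the paper generalize.

def AND(*children):
    return lambda state: all(c(state) for c in children)

def OR(*children):
    return lambda state: any(c(state) for c in children)

def event(name):
    return lambda state: state.get(name, False)

# Hypothetical top event:
# loss of cooling = pump failure AND (valve stuck OR power loss).
top = AND(event("pump_fail"), OR(event("valve_stuck"), event("power_loss")))

print(top({"pump_fail": True, "power_loss": True}))  # True: both branches hold
print(top({"pump_fail": True}))                      # False: no second cause
```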
The method of measurement and synchronization control for large-scale complex loading system
International Nuclear Information System (INIS)
Liao Min; Li Pengyuan; Hou Binglin; Chi Chengfang; Zhang Bo
2012-01-01
With the development of modern industrial technology, measurement and control systems are widely used in high-precision, complex industrial control equipment and large-tonnage loading devices. A measurement and control system is often used to analyze the distribution of stress and displacement in a complex load-bearing structure or in the complex mechanical structure itself. In the ITER GS mock-up with 5 flexible plates, for each load combination it is necessary to detect and measure potential slippage between the central flexible plate and the neighboring spacers, as well as between each pre-stressing bar and its neighboring plate. The measurement and control system consists of seven sets of EDC controllers and boards, a computer system, a 16-channel quasi-dynamic strain gauge, 25 sets of displacement sensors, and 7 sets of load and displacement sensors in the cylinders. This paper demonstrates the principles and methods by which the EDC220 digital controller achieves synchronization control, and the R and D process of the multi-channel loading control and measurement software. (authors)
A complex guided spectral transform Lanczos method for studying quantum resonance states
International Nuclear Information System (INIS)
Yu, Hua-Gen
2014-01-01
A complex guided spectral transform Lanczos (cGSTL) algorithm is proposed to compute both bound and resonance states, including energies, widths and wavefunctions. The algorithm comprises two layers of complex-symmetric Lanczos iterations. A short inner-layer iteration produces a set of complex formally orthogonal Lanczos (cFOL) polynomials. They are used to span the guided spectral transform function determined by a retarded Green operator. An outer-layer iteration is then carried out with the transform function to compute the eigen-pairs of the system. The guided spectral transform function is designed to have the same wavefunctions as the eigenstates of the original Hamiltonian in the spectral range of interest. Therefore the energies and/or widths of bound or resonance states can be easily computed with their wavefunctions, or by using a root-searching method from the guided spectral transform surface. The new cGSTL algorithm is applied to bound and resonance states of HO and compared to previous calculations.
Rahayu, Iman; Anggraeni, Anni; Ukun, MSS; Bahti, Husein H.
2017-05-01
Nowadays, rare earth elements are widely used in industry and medicine. One of them is gadolinium: the Gd-DTPA complex is used as a contrast agent in magnetic resonance imaging (MRI) diagnostics to increase the visual contrast between normal and diseased tissue. Although the stability of the complex may be high enough, the complexation step may not have gone to completion, so free gadolinium(III) may remain alongside the complex compound. This is dangerous because of the toxicity of gadolinium(III) in the human body, so it is necessary to separate free gadolinium(III) from the Gd-DTPA complex by nanofiltration-complexation. The method of this study is the complexation of Gd2O3 with the DTPA ligand by reflux, and the separation of the Gd-DTPA complex from gadolinium(III) with a nanofiltration membrane at various pressures (2, 3, 4, 5, 6 bars) and temperatures (25, 30, 35, 40 °C), determining the flux and rejection. The results of this study show that at higher pressures and temperatures the permeation flux increases and the ion rejection decreases, with a free gadolinium(III) rejection of up to 86.26%.
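The reported flux and rejection follow from the standard membrane definitions; below is a small sketch with illustrative numbers, not the paper's measured data.

```python
def rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf (dimensionless; often quoted in %)."""
    return 1.0 - c_permeate / c_feed

def flux(volume_l, area_m2, time_h):
    """Permeate flux J = V / (A * t) in L m^-2 h^-1."""
    return volume_l / (area_m2 * time_h)

# Illustrative numbers: a feed of 10 mg/L free Gd(III) with 1.374 mg/L in
# the permeate gives a rejection of the same order as the reported 86.26%.
print(round(100 * rejection(1.374, 10.0), 2))  # 86.26 (%)
print(flux(0.5, 0.002, 1.0))                   # 250.0 (L m^-2 h^-1)
```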
International Nuclear Information System (INIS)
Zheng, Z.C.; Xie, G.; Du, Q.H.
1987-01-01
Because of the nonlinear characteristics present in practical engineering structures, such as large steam turbine-foundation systems and offshore platforms, it is necessary to predict the nonlinear dynamic responses of these very large and complex structural systems subjected to extreme loads. Owing to the limited storage and high execution cost of computers, the analysis of such systems remains difficult, even though traditional finite element methods provide a basic approach to the problem. Dynamic substructure methods, which were developed as a branch of general structural dynamics over the past 20 years and have been widely applied from aircraft and space vehicles to other mechanical and civil engineering structures, offer a powerful approach to the analysis of very large structural systems. The key to their success is the considerable reduction in the number of degrees of freedom without changing the physical essence of the problem investigated. The dynamic substructure method has been extended to nonlinear systems and applied to the analysis of the nonlinear dynamic response of an offshore platform by Z.C. Zheng, et al. (1983, 1985a, b, c). In this paper, the method is presented for analyzing the dynamic responses of nuclear structural systems containing intrinsic nonlinearities, nonlinear attachments and nonlinear supports. The efficiency of the method becomes clearer for nonlinear dynamic problems owing to the adoption of iterative processes. For simplicity, the analysis procedure is demonstrated briefly. The generalized substructure method for nonlinear systems is similar to that for linear systems, except that the nonlinear terms are treated as pseudo-forces. Interface coordinates are classified into two categories: connecting interface coordinates, which connect with each other directly in the global system, and linking interface coordinates, which link to each other through attachments. (orig./GL)
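The pseudo-force treatment mentioned above can be illustrated on a static analogue: the nonlinear terms are moved to the right-hand side as pseudo-forces and the linear system is solved repeatedly. This is a generic sketch, not the authors' substructure formulation:

```python
import numpy as np

def pseudo_force_iteration(K, f, f_nl, x0, tol=1e-10, max_iter=200):
    """Solve K x + f_nl(x) = f by moving the nonlinear term to the
    right-hand side as a pseudo-force and iterating on the linear system."""
    K_inv = np.linalg.inv(K)            # factor the linear operator once
    x = x0.copy()
    for _ in range(max_iter):
        x_new = K_inv @ (f - f_nl(x))   # linear solve with pseudo-force RHS
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy system: linear stiffness plus a cubic spring on the first DOF
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
f = np.array([1.0, 2.0])
f_nl = lambda x: np.array([0.1 * x[0] ** 3, 0.0])
x = pseudo_force_iteration(K, f, f_nl, np.zeros(2))
print(x, np.linalg.norm(K @ x + f_nl(x) - f))   # residual near zero
```

The attraction, as in the substructure setting, is that only the linear operator is factored, and the nonlinearity enters purely through repeated right-hand-side evaluations.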
Directory of Open Access Journals (Sweden)
Ismael de Moura Costa
2017-04-01
Full Text Available Introduction: This paper presents the evolution of the MAIA (Method for Architecture of Information Applied) method, its structure, the results obtained and three practical applications. Objective: To propose a methodological construct for the treatment of complex information, distinguishing information spaces and revealing the configurations inherent to those spaces. Methodology: The argument is elaborated from theoretical research of an analytical character, using distinction as a way to express concepts. Phenomenology is adopted as the philosophical position, which considers the correlation between Subject↔Object. The research also considers the notion of interpretation as an integrating element for the definition of concepts. With these postulates, the steps to transform the information spaces are formulated. Results: The method is structured to process information in its contexts, starting from a succession of evolutive cycles, divided into moments, which in turn evolve into transformation acts. Conclusions: The article presents possible applications not only as a scientific method, but also as a configuration tool for information spaces and as a generator of ontologies. Last but not least, it presents a brief summary of the analysis made by researchers who have already evaluated the method with respect to these three aspects.
S-curve networks and an approximate method for estimating degree distributions of complex networks
International Nuclear Information System (INIS)
Guo Jin-Li
2010-01-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, it proposes a finite network model with bulk growth, termed an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes, and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research. (general)
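The S curve referred to above is the logistic function x(t) = K / (1 + a·e^(−bt)), whose carrying capacity K models the finite growth limit. A hedged sketch of fitting such a curve to synthetic data (not the actual IPv4 statistics) with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, a, b):
    """S-curve (logistic) growth: K is the finite carrying capacity."""
    return K / (1.0 + a * np.exp(-b * t))

# synthetic "address count" series with a known saturation level
t = np.arange(0, 20, dtype=float)
true_K, true_a, true_b = 300.0, 50.0, 0.6
y = logistic(t, true_K, true_a, true_b)

# nonlinear least-squares fit from a rough initial guess
(K_fit, a_fit, b_fit), _ = curve_fit(logistic, t, y, p0=(200.0, 10.0, 0.3))
print(K_fit, a_fit, b_fit)   # recovers the generating parameters
```

The fitted K is the forecast saturation size; extrapolating the fitted curve gives the growth trend.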
On the complexity of a combined homotopy interior method for convex programming
Yu, Bo; Xu, Qing; Feng, Guochen
2007-03-01
In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211.], a combined homotopy was constructed for solving non-convex programming and convex programming under weaker conditions, without assuming the logarithmic barrier function to be strictly convex or the solution set to be bounded. It was proven that a smooth interior path exists from an interior point of the feasible set to a K-K-T point of the problem. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by using a piecewise technique, polynomiality of a combined homotopy interior point method is established for convex nonlinear programming under commonly used conditions.
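For intuition, a generic convex-combination homotopy H(x, t) = t·F(x) + (1 − t)·(x − x0) can be followed numerically from the trivial solution at t = 0 to a zero of F at t = 1. The sketch below is a plain Newton path-follower on a toy nonlinear system, not the combined homotopy of the paper:

```python
import numpy as np

def homotopy_solve(F, J, x0, steps=50, newton_iters=6):
    """Follow the convex homotopy H(x,t) = t*F(x) + (1-t)*(x - x0) = 0
    from t=0 (trivial solution x0) to t=1 (a solution of F(x)=0)."""
    x = x0.astype(float).copy()
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):              # Newton corrector at fixed t
            H = t * F(x) + (1 - t) * (x - x0)
            dH = t * J(x) + (1 - t) * np.eye(len(x))
            x = x - np.linalg.solve(dH, H)
    return x

# toy system with solution (1, 0): x^3 + y = 1,  y^3 - x + 1 = 0
F = lambda x: np.array([x[0] ** 3 + x[1] - 1.0, x[1] ** 3 - x[0] + 1.0])
J = lambda x: np.array([[3 * x[0] ** 2, 1.0], [-1.0, 3 * x[1] ** 2]])
x = homotopy_solve(F, J, np.array([0.5, 0.5]))
print(x, F(x))   # residual near zero
```

The complexity question studied in the paper concerns how many such corrector steps are needed in total as a function of problem size and accuracy.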
A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays.
Sun, Fenggang; Gao, Bin; Chen, Lizhen; Lan, Peng
2016-08-25
The problem of direction-of-arrival (DOA) estimation is investigated for a co-prime array consisting of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, the true DOAs are mapped into several equivalent angles impinging on a traditional uniform linear array with half-wavelength spacing. Then, by applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated, and the candidate DOAs are recovered according to the relationship between equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity-performance tradeoff than other existing methods.
An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry
Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi
2016-01-01
We present an embedded ghost-fluid method for numerical solution of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement, implemented under the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and regions of high solution gradients. Numerical examples with different Reynolds numbers for low and high Mach number flows are presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing solution accuracy against implementation difficulty, are briefly discussed as well. © 2016 Trans Tech Publications.
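The ghost-fluid idea can be illustrated in one dimension: values in cells on the solid side of an embedded boundary are filled by extrapolation so that the interface condition is met. This is a drastic simplification of the multidimensional PDE extrapolation used in the paper:

```python
import numpy as np

def fill_ghost_cells(u, x, x_wall, u_wall):
    """Fill cells on the solid side of an embedded boundary at x_wall so that
    linear interpolation across the interface matches the wall value u_wall.

    1D illustration of a ghost-fluid boundary treatment: each ghost value is a
    linear extrapolation through (x_wall, u_wall) using the nearest fluid cell."""
    u = u.copy()
    fluid = x > x_wall                    # fluid occupies x > x_wall
    i_first = np.argmax(fluid)            # index of the first fluid cell
    slope = (u[i_first] - u_wall) / (x[i_first] - x_wall)
    for i in np.where(~fluid)[0]:         # ghost (solid-side) cells
        u[i] = u_wall + slope * (x[i] - x_wall)
    return u

x = np.linspace(0.0, 1.0, 11)
u = x ** 2                                # some fluid field
ug = fill_ghost_cells(u, x, x_wall=0.25, u_wall=0.0)
# interpolating between the last ghost cell and the first fluid cell
# now reproduces the wall value at x_wall
print(ug[:4])
```

The flow solver then applies its unmodified stencils near the boundary; the ghost values encode the boundary condition.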
Complex absorbing potentials within EOM-CC family of methods: Theory, implementation, and benchmarks
Energy Technology Data Exchange (ETDEWEB)
Zuev, Dmitry; Jagau, Thomas-C.; Krylov, Anna I. [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States); Bravaya, Ksenia B. [Department of Chemistry, Boston University, Boston, Massachusetts 02215-2521 (United States); Epifanovsky, Evgeny [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States); Department of Chemistry, University of California, Berkeley, California 94720 (United States); Q-Chem, Inc., 6601 Owens Drive, Suite 105 Pleasanton, California 94588 (United States); Shao, Yihan [Q-Chem, Inc., 6601 Owens Drive, Suite 105 Pleasanton, California 94588 (United States); Sundstrom, Eric; Head-Gordon, Martin [Department of Chemistry, University of California, Berkeley, California 94720 (United States)
2014-07-14
A production-level implementation of equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) for electron attachment and excitation energies augmented by a complex absorbing potential (CAP) is presented. The new method enables the treatment of metastable states within the EOM-CC formalism in a similar manner as bound states. The numeric performance of the method and the sensitivity of resonance positions and lifetimes to the CAP parameters and the choice of one-electron basis set are investigated. A protocol for studying molecular shape resonances based on the use of standard basis sets and a universal criterion for choosing the CAP parameters are presented. Our results for a variety of π* shape resonances of small to medium-size molecules demonstrate that CAP-augmented EOM-CCSD is competitive relative to other theoretical approaches for the treatment of resonances and is often able to reproduce experimental results.
Directory of Open Access Journals (Sweden)
Yan Chen
2017-03-01
Full Text Available Based on a vectorised and cache-optimised kernel, a parallel lower-upper (LU) decomposition with a novel communication-avoiding pivoting scheme is developed to solve dense complex matrix equations generated by the method of moments. Fine-grain data rearrangement and assembler instructions are adopted to reduce memory access times and improve CPU cache utilisation, which also facilitates vectorisation of the code. By grouping processes in a binary tree, a parallel pivoting scheme is designed to optimise the communication pattern and thus reduce the solving time of the proposed solver. Two large electromagnetic radiation problems are solved on two supercomputers, respectively, and the numerical results demonstrate that the proposed method outperforms those in open-source and commercial libraries.
A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds
Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang
2017-04-01
3D voxelization is the foundation of geological property modeling and an effective approach to the 3D visualization of heterogeneous attributes in geological structures. The corner-point grid is a representative voxel data model and a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features must be fully considered so that the generated voxels keep the original morphology; on that basis, they can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To address the shortcomings of existing techniques, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized fast conversion from the 3D geological structure model to a fine voxel model according to the isocline rule in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminae inside a fold accord with the result of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as for quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, contrasted with Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when voxelizing 3D geological structure models with folds based on corner-point grids.
Energy Technology Data Exchange (ETDEWEB)
Shu, Yu-Chen, E-mail: ycshu@mail.ncku.edu.tw [Department of Mathematics, National Cheng Kung University, Tainan 701, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (South), Tainan 701, Taiwan (China); Chern, I-Liang, E-mail: chern@math.ntu.edu.tw [Department of Applied Mathematics, National Chiao Tung University, Hsin Chu 300, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (Taipei Office), Taipei 106, Taiwan (China); Chang, Chien C., E-mail: mechang@iam.ntu.edu.tw [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China)
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complications increase especially in three dimensions, and the solvers are usually reduced to low order accuracy there. In this paper, we classify these exceptional points and propose two recipes to maintain the order of accuracy, aiming at improving the previous coupling interface method [26]; the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the proposed method in approximating the gradients of the original states for some complex interfaces which we had previously tested in two and three dimensions, and for a real molecule (1D63), which has a double-helix shape and is composed of hundreds of atoms.
Direct infusion-SIM as fast and robust method for absolute protein quantification in complex samples
Directory of Open Access Journals (Sweden)
Christina Looße
2015-06-01
Full Text Available Relative and absolute quantification of proteins in biological and clinical samples are common approaches in proteomics. Until now, targeted protein quantification has mainly been performed using a combination of HPLC-based peptide separation and selected reaction monitoring on triple quadrupole mass spectrometers. Here, we show for the first time the potential of absolute quantification using a direct infusion strategy combined with single ion monitoring (SIM) on a Q Exactive mass spectrometer. Using complex membrane fractions of Escherichia coli, we absolutely quantified the recombinantly expressed heterologous human cytochrome P450 monooxygenase 3A4 (CYP3A4), comparing direct infusion-SIM with conventional HPLC-SIM. Direct infusion-SIM showed only 14.7% (±4.1, s.e.m.) deviation on average compared to HPLC-SIM, and a reduced processing and analysis time of 4.5 min per sample (which could be further decreased to 30 s), in contrast to 65 min for the LC–MS method. In summary, our simplified workflow using direct infusion-SIM provides a fast and robust method for quantification of proteins in complex protein mixtures.
International Nuclear Information System (INIS)
Butorin, Sergei; Nordgren, Joseph; Ollila, Kaija; Albinsson, Yngve; Werme, Lars
2003-10-01
Sweden and Finland plan to dispose of spent fuel from commercial nuclear power plants in deep underground repositories sited in granitic rocks. The fuel assemblies will be placed in canisters consisting of an outer corrosion-resistant copper shell with an inner cast iron insert that gives mechanical strength and reduces void space in the canister. The canister will be placed in a disposal borehole lined with compacted bentonite blocks. After sealing of the borehole, groundwater seepage will saturate the bentonite. The water flow path and transport mechanism between the host rock and the canister will be via diffusion through the swollen bentonite. Any oxygen trapped in the repository will be consumed by reaction with the host rock, by pyrite in the bentonite and through microbial activity, giving long-term conditions with low redox potentials. Under these conditions, uranium dioxide - the matrix of unirradiated fuel - is a stable phase. This reducing near-field environment can be upset by radiolysis of water caused by the radioactivity of the fuel, which after a few hundred years will be primarily alpha activity. Radiolysis of water produces equal amounts of oxidising and reducing species, but the reducing species produced by alpha radiolysis is molecular hydrogen, which is expected to be far less reactive than the oxidising species produced, H2O2. Alpha radiolysis could create locally oxidising conditions close to the fuel surface and oxidise the U(IV) in the uranium dioxide fuel to the more soluble U(VI) oxidation state. Furthermore, the solubility of U(VI) is enhanced in the presence of bicarbonate/carbonate by the formation of strong anionic uranyl carbonate complexes. This increase in solubility can amount to 4 to 5 orders of magnitude depending on the composition of the groundwater in contact with the fuel. The other tetravalent actinides in the fuel, Np and Pu, also have higher solubilities when oxidised beyond 4+ to neptunyl and plutonyl species. Once these
Narayanan, Roshni; Nugent, Rebecca; Nugent, Kenneth
2015-10-01
Accreditation Council for Graduate Medical Education guidelines require internal medicine residents to develop skills in the interpretation of medical literature and to understand the principles of research. A necessary component is the ability to understand the statistical methods used and their results, material that is not an in-depth focus of most medical school curricula and residency programs. Given the breadth and depth of the current medical literature and an increasing emphasis on complex, sophisticated statistical analyses, the statistical foundation and education necessary for residents are uncertain. We reviewed the statistical methods and terms used in 49 articles discussed at the journal club in the Department of Internal Medicine residency program at Texas Tech University between January 1, 2013 and June 30, 2013. We collected information on the study type and on the statistical methods used for summarizing and comparing samples, determining the relations between independent variables and dependent variables, and estimating models. We then identified the typical statistics education level at which each term or method is learned. A total of 14 articles came from the Journal of the American Medical Association Internal Medicine, 11 from the New England Journal of Medicine, 6 from the Annals of Internal Medicine, 5 from the Journal of the American Medical Association, and 13 from other journals. Twenty reported randomized controlled trials. Summary statistics included mean values (39 articles), category counts (38), and medians (28). Group comparisons were based on t tests (14 articles), χ2 tests (21), and nonparametric ranking tests (10). The relations between dependent and independent variables were analyzed with simple regression (6 articles), multivariate regression (11), and logistic regression (8). Nine studies reported odds ratios with 95% confidence intervals, and seven analyzed test performance using sensitivity and specificity calculations
Screening tests for hazard classification of complex waste materials – Selection of methods
International Nuclear Information System (INIS)
Weltens, R.; Vanermen, G.; Tirez, K.; Robbens, J.; Deprez, K.; Michiels, L.
2012-01-01
In this study we describe the development of an alternative methodology for hazard characterization of waste materials. Such an alternative methodology for hazard assessment of complex waste materials is urgently needed, because the lack of a validated instrument leads to arbitrary hazard classification of such complex waste materials. False classification can lead to human and environmental health risks and also has important financial consequences for the waste owner. The Hazardous Waste Directive (HWD) describes the methodology for hazard classification of waste materials. For mirror entries, the HWD classification is based upon the hazardous properties (H1–15) of the waste, which can be assessed from the hazardous properties of individual identified waste compounds or - if not all compounds are identified - from the results of hazard assessment tests performed on the waste material itself. For the latter, the HWD recommends toxicity tests that were initially designed for risk assessment of chemicals in consumer products (pharmaceuticals, cosmetics, biocides, food, etc.). These tests (often using mammals) are neither designed for nor suitable for the hazard characterization of waste materials. With the present study we want to contribute to the development of an alternative and transparent test strategy for hazard assessment of complex wastes that is in line with the HWD principles for waste classification. It is necessary to address this important shortcoming in hazardous waste classification and to demonstrate that alternative methods are available that can be used for hazard assessment of waste materials. Next, by describing the pros and cons of the available methods, and by identifying the needs for additional or further development of test methods, we hope to stimulate research efforts and development in this direction. In this paper we describe promising techniques and argue for the test selection of the pilot study that we have performed on different
Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph
2012-06-22
Filamentous fungi are versatile cell factories and are widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. In fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase in the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.
Directory of Open Access Journals (Sweden)
Jun Ren
2014-01-01
Full Text Available Much evidence has demonstrated that protein complexes are overlapping and hierarchically organized in PPI networks. Meanwhile, the large size of PPI networks requires complex detection methods to have low time complexity. Up to now, few methods can quickly identify overlapping and hierarchical protein complexes in a PPI network. In this paper, a novel method, called MCSE, is proposed based on λ-modules and “seed-expanding.” First, it chooses as seeds essential PPIs or edges with high edge clustering values. Then, it identifies protein complexes by expanding each seed to a λ-module. MCSE is suitable for large PPI networks because of its low time complexity. It identifies overlapping protein complexes naturally because a protein can be visited from different seeds. MCSE uses the parameter λ_th to control the range of seed expansion and can detect a hierarchical organization of protein complexes by tuning the value of λ_th. Experimental results on S. cerevisiae show that this hierarchical organization is similar to that of known complexes in the MIPS database. The experimental results also show that MCSE outperforms previous competing algorithms, such as CPM, CMC, Core-Attachment, Dpclus, HC-PIN, MCL, and NFC, in terms of functional enrichment and matching with known protein complexes.
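The two ingredients of MCSE named above, edge clustering values for seed selection and seed expansion, can be sketched as follows. The expansion criterion here (a minimum fraction of a candidate's links inside the module, with at least two internal links) is a hypothetical stand-in for the paper's λ-module definition:

```python
def edge_clustering(adj, u, v):
    """Edge clustering value: number of triangles through edge (u,v)
    divided by the maximum possible, min(deg u, deg v) - 1."""
    common = len(adj[u] & adj[v])
    denom = min(len(adj[u]), len(adj[v])) - 1
    return common / denom if denom > 0 else 0.0

def expand_seed(adj, seed, min_inner_ratio=0.5):
    """Greedy seed expansion: add a neighbour while it has at least two
    links into the module and keeps min_inner_ratio of its links inside
    (a simplified stand-in for the lambda-module criterion)."""
    module = set(seed)
    grew = True
    while grew:
        grew = False
        frontier = set().union(*(adj[n] for n in module)) - module
        for cand in sorted(frontier):
            inner = len(adj[cand] & module)
            if inner >= 2 and inner / len(adj[cand]) >= min_inner_ratio:
                module.add(cand)
                grew = True
    return module

# toy PPI network: a 4-clique {a,b,c,d} with a pendant protein e
edges = [("a","b"), ("a","c"), ("a","d"), ("b","c"), ("b","d"), ("c","d"), ("d","e")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

print(edge_clustering(adj, "a", "b"))        # high: (a,b) lies in a dense clique
print(sorted(expand_seed(adj, ("a", "b"))))  # the clique, without the pendant e
```

Because expansion starts from every qualifying seed, a protein shared by two dense regions ends up in both modules, which is how overlap arises naturally.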
Improved Dynamic Analysis method for quantitative PIXE and SXRF element imaging of complex materials
International Nuclear Information System (INIS)
Ryan, C.G.; Laird, J.S.; Fisher, L.A.; Kirkham, R.; Moorhead, G.F.
2015-01-01
The Dynamic Analysis (DA) method in the GeoPIXE software provides a rapid tool to project quantitative element images from PIXE and SXRF imaging event data both for off-line analysis and in real-time embedded in a data acquisition system. Initially, it assumes uniform sample composition, background shape and constant model X-ray relative intensities. A number of image correction methods can be applied in GeoPIXE to correct images to account for chemical concentration gradients, differential absorption effects, and to correct images for pileup effects. A new method, applied in a second pass, uses an end-member phase decomposition obtained from the first pass, and DA matrices determined for each end-member, to re-process the event data with each pixel treated as an admixture of end-member terms. This paper describes the new method and demonstrates through examples and Monte-Carlo simulations how it better tracks spatially complex composition and background shape while still benefitting from the speed of DA.
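At its core, the DA projection applies a precomputed linear unfolding operator to each pixel's spectrum. The toy sketch below uses a least-squares pseudo-inverse as the unfolding matrix and hypothetical spectral signatures; the actual DA matrices in GeoPIXE are built from fitted spectral models with background and pileup terms:

```python
import numpy as np

# toy element spectral signatures (rows: channels, cols: elements) plus
# a fixed background shape -- all values hypothetical
signatures = np.array([[5.0, 0.2],
                       [1.0, 4.0],
                       [0.3, 1.5]])
background = np.array([0.5, 0.5, 0.5])

# "DA matrix" here: least-squares unfolding operator (pseudo-inverse)
DA = np.linalg.pinv(signatures)

# two pixels whose spectra are known mixtures of the signatures
spectra = np.stack([
    2.0 * signatures[:, 0] + 1.0 * signatures[:, 1] + background,
    0.5 * signatures[:, 0] + 3.0 * signatures[:, 1] + background,
])

# per-pixel projection: subtract background, apply DA -> element "image"
concentrations = (spectra - background) @ DA.T
print(concentrations)   # rows recover the mixing coefficients
```

Because the projection is a single matrix-vector product per pixel, it is fast enough to run in real time during acquisition, which is the property the paper builds on.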
Complexity and accuracy of image registration methods in SPECT-guided radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Yin, L S; Duzenli, C; Moiseenko, V [Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC, V6T 1Z1 (Canada); Tang, L; Hamarneh, G [Computing Science, Simon Fraser University, 9400 TASC1, Burnaby, BC, V5A 1S6 (Canada); Gill, B [Medical Physics, Vancouver Cancer Centre, BC Cancer Agency, 600 West 10th Ave, Vancouver, BC, V5Z 4E6 (Canada); Celler, A; Shcherbinin, S [Department of Radiology, University of British Columbia, 828 West 10th Ave, Vancouver, BC, V5Z 1L8 (Canada); Fua, T F; Thompson, A; Sheehan, F [Radiation Oncology, Vancouver Cancer Centre, BC Cancer Agency, 600 West 10th Ave, Vancouver, BC, V5Z 4E6 (Canada); Liu, M [Radiation Oncology, Fraser Valley Cancer Centre, BC Cancer Agency, 13750 9th Ave, Surrey, BC, V3V 1Z2 (Canada)], E-mail: lyin@bccancer.bc.ca
2010-01-07
The use of functional imaging in radiotherapy treatment (RT) planning requires accurate co-registration of functional imaging scans to CT scans. We evaluated six methods of image registration for use in SPECT-guided radiotherapy treatment planning. Methods varied in complexity from a 3D affine transform based on control points to diffeomorphic demons and level set non-rigid registration. Ten lung cancer patients underwent perfusion SPECT scans prior to their radiotherapy. CT images from a hybrid SPECT/CT scanner were registered to a planning CT, and then the same transformation was applied to the SPECT images. According to registration evaluation measures computed from the intensity difference between the registered CT images or from target registration error, non-rigid registrations provided a higher degree of accuracy than rigid methods. However, due to irregularities in some of the obtained deformation fields, warping the SPECT using these fields may result in unacceptable changes to the SPECT intensity distribution that would preclude use in RT planning. Moreover, the differences between intensity histograms in the original and registered SPECT image sets were the largest for the diffeomorphic demons and level set methods. In conclusion, the use of intensity-based validation measures alone is not sufficient for SPECT/CT registration for RT planning. It was also found that the proper evaluation of image registration requires the use of several accuracy metrics.
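Target registration error, one of the evaluation measures mentioned above, is simply the mean distance between registered landmarks and their ground-truth positions. A minimal sketch for an affine transform, with hypothetical landmark coordinates:

```python
import numpy as np

def target_registration_error(T, moving_pts, fixed_pts):
    """Mean Euclidean distance between registered landmarks and their
    ground-truth positions; T is a 3x4 affine matrix [A | t]."""
    A, t = T[:, :3], T[:, 3]
    mapped = moving_pts @ A.T + t
    return np.linalg.norm(mapped - fixed_pts, axis=1).mean()

# hypothetical landmark pairs related by a pure 2 mm shift in x
moving = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
fixed = moving + np.array([2.0, 0.0, 0.0])
T_good = np.hstack([np.eye(3), [[2.0], [0.0], [0.0]]])   # correct registration
T_bad = np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])    # identity (none)
print(target_registration_error(T_good, moving, fixed))
print(target_registration_error(T_bad, moving, fixed))
```

Unlike intensity-difference measures, this metric directly probes geometric accuracy at clinically chosen points, which is why the paper recommends combining both kinds of metric.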
A time-minimizing hybrid method for fitting complex Moessbauer spectra
International Nuclear Information System (INIS)
Steiner, K.J.
2000-07-01
The process of fitting complex Moessbauer spectra is known to be time-consuming. The fitting process involves a mathematical model for the combined hyperfine interaction, which can be solved only by an iteration method. The iteration method is very sensitive to its input parameters: with arbitrary input parameters it is most unlikely to converge. Until now, a scientist had to spend her/his time guessing appropriate input parameters for the iteration process. The idea is to replace this guessing phase by a genetic algorithm. The genetic algorithm starts with an initial population of arbitrary input parameter sets; each parameter set is called an individual. The first step is to evaluate the fitness of all individuals. Afterwards the current population is recombined to form a new population. Recombination involves the successive application of the genetic operators selection, crossover, and mutation, which mimic the process of natural evolution, i.e. the concept of survival of the fittest. Even though there is no formal proof that the genetic algorithm will converge, there is an excellent chance that after some generations the population will contain very good individuals. The hybrid method presented here combines a modern genetic algorithm with a conventional least-squares routine that solves the combined interaction Hamiltonian, i.e. it provides a physical solution with the original Moessbauer parameters from a minimum of input. (author)
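The selection/crossover/mutation loop described above can be sketched in a few lines. This is a minimal illustrative implementation, not the author's code; the toy quadratic misfit stands in for the real least-squares residual of the hyperfine Hamiltonian, and all parameter values are hypothetical:

```python
import random

def genetic_seed(misfit, n_params, pop_size=40, generations=100,
                 bounds=(-3.0, 3.0), seed=1):
    """Evolve parameter sets toward low misfit; the fittest individual
    is meant as the starting guess for a conventional least-squares
    iteration, replacing hand-guessed input parameters."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)                  # selection: rank by fitness
        parents = pop[:pop_size // 2]         # survival of the fittest
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_params)       # mutation of one parameter
            child[i] += rng.gauss(0.0, 0.3)
            children.append(child)
        pop = parents + children
    return min(pop, key=misfit)

# Toy quadratic misfit; the target values are hypothetical,
# not real Moessbauer parameters.
target = [1.0, -2.0, 0.5]
misfit = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))
best = genetic_seed(misfit, 3)
```

Keeping the top half of each generation (elitism) guarantees the best misfit never worsens, which is what makes the output a safe seed for the subsequent iteration.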
Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method
Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.
2012-01-01
Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
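The core of mixed fitting, discarding only the first echo's phase while fitting later echoes complexly, can be sketched as a residual function. This is a simplified single-fat-peak signal model with hypothetical parameter values, not the published multi-peak reconstruction:

```python
import numpy as np

def mixed_residual(params, te, measured):
    """Mixed fitting residual: the first echo contributes only its
    magnitude (its phase may be corrupted, e.g. by eddy currents),
    while the remaining echoes contribute real and imaginary parts."""
    water, fat, f_fat, phi0 = params        # f_fat: fat-water shift [Hz]
    model = (water + fat * np.exp(2j * np.pi * f_fat * te)) * np.exp(1j * phi0)
    res = [abs(measured[0]) - abs(model[0])]           # magnitude only
    for m, s in zip(measured[1:], model[1:]):          # complex fit
        res += [m.real - s.real, m.imag - s.imag]
    return np.array(res)

# Synthetic check: corrupt only the first echo's phase, as eddy
# currents do; the residual still vanishes at the true parameters.
te = np.array([1.2e-3, 2.2e-3, 3.2e-3])               # echo times [s]
truth = (0.7, 0.3, -428.0, 0.1)
clean = (truth[0] + truth[1] * np.exp(2j * np.pi * truth[2] * te)) \
        * np.exp(1j * truth[3])
measured = clean.copy()
measured[0] *= np.exp(0.4j)                           # first-echo phase error
residual = mixed_residual(truth, te, measured)
```

A purely complex fit would see a nonzero residual at the true parameters here, which is exactly the bias the mixed approach removes.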
A fluorescence anisotropy method for measuring protein concentration in complex cell culture media.
Groza, Radu Constantin; Calvet, Amandine; Ryder, Alan G
2014-04-22
The rapid, quantitative analysis of the complex cell culture media used in biopharmaceutical manufacturing is of critical importance. Requirements for cell culture media composition profiling, or changes in specific analyte concentrations (e.g. amino acids in the media or product protein in the bioprocess broth) often necessitate the use of complicated analytical methods and extensive sample handling. Rapid spectroscopic methods like multi-dimensional fluorescence (MDF) spectroscopy have been successfully applied for the routine determination of compositional changes in cell culture media and bioprocess broths. Quantifying macromolecules in cell culture media is a specific challenge as there is a need to implement measurements rapidly on the prepared media. However, the use of standard fluorescence spectroscopy is complicated by the emission overlap from many media components. Here, we demonstrate how combining anisotropy measurements with standard total synchronous fluorescence spectroscopy (TSFS) provides a rapid, accurate quantitation method for cell culture media. Anisotropy provides emission resolution between large and small fluorophores while TSFS provides a robust measurement space. Model cell culture media was prepared using yeastolate (2.5 mg mL⁻¹) spiked with bovine serum albumin (0 to 5 mg mL⁻¹). Using this method, protein emission is clearly discriminated from background yeastolate emission, allowing for accurate bovine serum albumin (BSA) quantification over a 0.1 to 4.0 mg mL⁻¹ range with a limit of detection (LOD) of 13.8 μg mL⁻¹. Copyright © 2014. Published by Elsevier B.V.
Bufalo, Gennaro; Ambrosone, Luigi
2016-01-14
A method for studying the kinetics of thermal degradation of complex compounds is suggested. Although the method is applicable to any matrix whose grain size can be measured, here we focus our investigation on thermogravimetric analysis, under a nitrogen atmosphere, of ground soft wheat and ground maize. The thermogravimetric curves reveal two distinct jumps of mass loss, corresponding to volatilization, in the temperature range 298-433 K, and decomposition, from 450 to 1073 K. Thermal degradation is schematized as a solid-state reaction whose kinetics is analyzed separately in each of the two regions. By means of sieve analysis, different size fractions of the material are separated and studied. A quasi-Newton fitting algorithm is used to obtain the grain size distribution as the best fit to the experimental data. The individual fractions are analyzed thermogravimetrically to derive the functional relationship between the activation energy of the degradation reactions and the particle size. This functional relationship turns out to be crucial for evaluating the moments of the activation energy distribution, which is otherwise unknown, in terms of the distribution calculated by sieve analysis. From the moments one can reconstruct the reaction conversion. The method is applied first to the volatilization region and then to the decomposition region. Comparison with the experimental data reveals that the method reproduces the experimental conversion with an accuracy of 5-10% in the volatilization region and of 3-5% in the decomposition region.
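The fitting step can be sketched as follows: a quasi-Newton (BFGS) fit of a log-normal grain-size distribution to sieve data, followed by a first moment of an assumed size dependence E(d). All numbers, the sieve data, and the linear E(d) form are illustrative placeholders, not the paper's values:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# Hypothetical sieve analysis: cumulative mass fraction passing each mesh.
mesh_um = np.array([63.0, 125.0, 250.0, 500.0, 1000.0])
passing = np.array([0.08, 0.30, 0.62, 0.88, 0.98])

def sse(p):
    """Squared error between a log-normal CDF and the sieve data;
    abs() keeps the shape parameter positive during line searches."""
    sigma, mu = abs(p[0]), p[1]
    return float(np.sum(
        (lognorm.cdf(mesh_um, s=sigma, scale=np.exp(mu)) - passing) ** 2))

# Quasi-Newton (BFGS) fit of the grain-size distribution.
fit = minimize(sse, x0=[1.0, np.log(200.0)], method="BFGS")
sigma, mu = abs(fit.x[0]), fit.x[1]

# With an assumed linear relation E(d) = a + b*d (illustrative only),
# the first moment of E follows from the log-normal mean grain size:
a, b = 80.0, 0.01                    # kJ/mol, kJ/(mol*um) -- hypothetical
mean_d = np.exp(mu + sigma**2 / 2)   # first moment of the size distribution
mean_E = a + b * mean_d              # first moment of E(d)
```

Higher moments of E(d) follow the same pattern from the higher log-normal moments, which is what the reconstruction of the reaction conversion uses.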
Improved Dynamic Analysis method for quantitative PIXE and SXRF element imaging of complex materials
Energy Technology Data Exchange (ETDEWEB)
Ryan, C.G., E-mail: chris.ryan@csiro.au; Laird, J.S.; Fisher, L.A.; Kirkham, R.; Moorhead, G.F.
2015-11-15
The Dynamic Analysis (DA) method in the GeoPIXE software provides a rapid tool to project quantitative element images from PIXE and SXRF imaging event data both for off-line analysis and in real-time embedded in a data acquisition system. Initially, it assumes uniform sample composition, background shape and constant model X-ray relative intensities. A number of image correction methods can be applied in GeoPIXE to correct images to account for chemical concentration gradients, differential absorption effects, and to correct images for pileup effects. A new method, applied in a second pass, uses an end-member phase decomposition obtained from the first pass, and DA matrices determined for each end-member, to re-process the event data with each pixel treated as an admixture of end-member terms. This paper describes the new method and demonstrates through examples and Monte-Carlo simulations how it better tracks spatially complex composition and background shape while still benefitting from the speed of DA.
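The second-pass idea, treating each pixel as an admixture of end-member terms, amounts to a per-pixel linear unmixing. A minimal sketch with hypothetical end-member spectra, not GeoPIXE's actual DA matrices:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical end-member spectra (rows) over a few energy channels.
end_members = np.array([
    [1.0, 0.2, 0.0, 0.0],
    [0.0, 1.0, 0.3, 0.0],
    [0.0, 0.0, 0.5, 1.0],
])

def unmix_pixel(spectrum):
    """Non-negative least-squares admixture of end-member terms for one
    pixel -- a stand-in for the per-pixel second DA pass described
    above, with non-negativity playing the role of a physical
    concentration constraint."""
    weights, _ = nnls(end_members.T, spectrum)
    return weights

# A pixel that mixes 70% of end-member 0 with 30% of end-member 2:
pixel = 0.7 * end_members[0] + 0.3 * end_members[2]
w = unmix_pixel(pixel)
```

In the real method the DA matrices determined for each end-member then convert these per-pixel weights into element concentrations.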
Theoretical study of the electronic structure of f-element complexes by quantum chemical methods
International Nuclear Information System (INIS)
Vetere, V.
2002-09-01
This thesis presents comparative studies of the chemical properties of molecular complexes containing lanthanide or actinide trivalent cations, in the context of nuclear waste disposal. More precisely, our aim was a quantum chemical analysis of the metal-ligand bonding in such species. Various theoretical approaches were compared for the inclusion of correlation (density functional theory, multiconfigurational methods) and of relativistic effects (scalar and two-component relativistic Hamiltonians, relativistic pseudopotentials). The performance of these methods was checked by comparing computed structural properties to published experimental data on small model systems: lanthanide and actinide tri-halides and X3M-L species (X = F, Cl; M = La, Nd, U; L = NH3, acetonitrile, CO). We have thus shown the good performance of density functionals combined with a quasi-relativistic method, as well as of gradient-corrected functionals associated with relativistic pseudopotentials. In contrast, functionals including some part of exact exchange are less reliable for reproducing experimental trends, and we have given a possible explanation for this result. A detailed analysis of the bonding then allowed us to interpret the discrepancies observed in the structural properties of uranium and lanthanide complexes in terms of a covalent contribution to the bonding in the case of uranium(III), which has no counterpart in the lanthanide(III) homologues. Finally, we have examined larger systems, closer to experimental species, to analyse the influence of the coordination number, the counter-ions and the oxidation state of uranium on the metal-ligand bonding. (author)
Directory of Open Access Journals (Sweden)
Jing Zhou
2014-01-01
According to the characteristics of emergency repair in overhead transmission line accidents, a method for quantifying the complexity of an emergency repair scheme is proposed, based on the entropy method from software engineering and improved by using the group AHP (analytic hierarchy process) method and Petri nets. First, an information structure chart model and a process control flowchart model are built with Petri nets. The factors affecting the complexity of the emergency repair scheme are then quantified into corresponding entropy values. Finally, using the group AHP method, a weight coefficient is assigned to each entropy value before the overall entropy value of the whole emergency repair scheme is calculated. Comparing group AHP weighting with average weighting, the experimental results for the former showed a stronger correlation between the quantified complexity entropy values and the actual time consumed in repair, which indicates that the new method is more valid.
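The entropy-weighting scheme described above can be sketched as follows; the factor probabilities and AHP weights are hypothetical placeholders, not values from the study:

```python
import math

def shannon_entropy(probs):
    """Entropy (bits) of one complexity factor, e.g. the branching
    probabilities at a decision point of the Petri-net model."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def scheme_complexity(factor_probs, ahp_weights):
    """Weighted overall entropy of a repair scheme; the weights are
    assumed to come from a group-AHP pairwise comparison and sum to 1."""
    assert abs(sum(ahp_weights) - 1.0) < 1e-9
    return sum(w * shannon_entropy(p)
               for w, p in zip(ahp_weights, factor_probs))

# Hypothetical: three factors (information structure, control flow,
# resource allocation) with group-AHP weights 0.5 / 0.3 / 0.2.
factors = [[0.5, 0.5], [0.25, 0.25, 0.25, 0.25], [1.0]]
weights = [0.5, 0.3, 0.2]
total = scheme_complexity(factors, weights)  # 0.5*1 + 0.3*2 + 0.2*0 = 1.1
```

Average weighting would use equal weights in place of the AHP vector; the comparison in the abstract is between these two weightings of the same entropy values.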
International Nuclear Information System (INIS)
Moskvin, A.I.; Poznyakov, A.N.; AN SSSR, Moscow. Inst. Geokhimii i Analiticheskoj Khimii)
1979-01-01
Complexing of the pentavalent forms of the actinides Np, Pu and Am with the anions of acetic acid, oxalic acid and EDTA is studied using the method of coprecipitation with iron hydroxide. The composition and stability constants of the actinide complexes formed are determined. Ordered by decreasing complexing tendency, the anions rank as EDTA anion > C₂O₄²⁻ > CH₃COO⁻.
Energy Technology Data Exchange (ETDEWEB)
Dutra, F.B.; Silva, M.M.S.; Moriyama, A.L.L.; Souza, C.P., E-mail: faby_qui@hotmail.com [Universidade Federal do Rio Grande do Norte (LAMNRC/UFRN), Natal, RN (Brazil). Lab. de Materiais Nanoestruturados e Reatores Catalicos
2016-07-01
The broad concern of contemporary society with environmental problems requires legislation and more effective techniques for wastewater treatment. In recent years, ceramic materials with properties such as high melting points and high stability have received great emphasis in several studies, in particular in heterogeneous photocatalysis, a rapid and efficient method for the complete mineralization of contaminants. In this context, the present work deals with the synthesis and characterization of strontium molybdate (SrMoO4) doped with copper, cobalt and zinc for photocatalytic studies. The compounds were synthesized by the EDTA/citrate complexation method in basic medium. The powders were characterized by thermogravimetric analysis (TG), X-ray diffraction (XRD), particle size distribution by laser diffraction, UV-visible spectroscopy, energy dispersive spectroscopy (EDS) and scanning electron microscopy (SEM), showing promising results regarding the development of the crystalline phase and the potential use of these materials in heterogeneous photocatalysis. (author)
Energy Technology Data Exchange (ETDEWEB)
Suga, K, E-mail: suga@me.osakafu-u.ac.jp [Department of Mechanical Engineering, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531 (Japan)
2013-06-15
The extensive evaluation studies of the lattice Boltzmann method for micro-scale flows (μ-flow LBM) by the author's group are summarized. For the two-dimensional test cases, force-driven Poiseuille flows, Couette flows, a combined nanochannel flow, and flows in a nanochannel with a square- or triangular cylinder are discussed. The three-dimensional (3D) test cases are nano-mesh flows and a flow between 3D bumpy walls. The reference data for the complex test flow geometries are from the molecular dynamics simulations of the Lennard-Jones fluid by the author's group. The focused flows are mainly in the slip and a part of the transitional flow regimes at Kn < 1. The evaluated schemes of the μ-flow LBMs are the lattice Bhatnagar-Gross-Krook and the multiple-relaxation time LBMs with several boundary conditions and discrete velocity models. The effects of the discrete velocity models, the wall boundary conditions, the near-wall correction models of the molecular mean free path and the regularization process are discussed to confirm the applicability and the limitations of the μ-flow LBMs for complex flow geometries. (invited review)
Development of an Evaluation Method for the Design Complexity of Computer-Based Displays
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyoung Ju; Lee, Seung Woo; Kang, Hyun Gook; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2011-10-15
The importance of the design of human-machine interfaces (HMIs) for human performance and the safety of process industries has been recognized for decades. In the case of nuclear power plants (NPPs), HMIs have particularly significant implications for safety because poor HMIs can impair the decision-making ability of human operators. In order to support and enhance this ability, advanced HMIs based on up-to-date computer technology are provided. Human operators in an advanced main control room (MCR) acquire the information required for operating an NPP through video display units (VDUs) and a large display panel (LDP). These computer-based displays contain a huge amount of information and present it in a variety of formats compared to those of a conventional MCR. For example, they contain more display elements such as abbreviations, labels, icons, symbols and coding. As computer-based displays contain more information, their complexity grows because individual display elements become less distinctive. A greater understanding is emerging about the effectiveness of computer-based display designs, including how distinctively display elements should be designed. This study covers the early phase in the development of an evaluation method for the design complexity of computer-based displays. To this end, a series of existing studies were reviewed to suggest an appropriate concept for addressing this problem.
International Nuclear Information System (INIS)
Bickford, D.F.; Diemer, R.B. Jr.
1985-01-01
The redox state of glass from electric melters with complex feed compositions is determined by balance between gases above the melt, and transition metals and organic compounds in the feed. Part I discusses experimental and computational methods of relating flowrates and other melter operating conditions to the redox state of glass, and composition of the melter offgas. Computerized thermodynamic computational methods are useful in predicting the sequence and products of redox reactions and in assessing individual process variations. Melter redox state can be predicted by combining monitoring of melter operating conditions, redox measurement of fused melter feed samples, and periodic redox measurement of product. Mossbauer spectroscopy, and other methods which measure Fe(II)/Fe(III) in glass, can be used to measure melter redox state. Part II develops preliminary operating limits for the vitrification of High-Level Radioactive Waste. Limits on reducing potential to preclude the accumulation of combustible gases, accumulation of sulfides and selenides, and degradation of melter components are the most critical. Problems associated with excessively oxidizing conditions, such as glass foaming and potential ruthenium volatility, are controlled when sufficient formic acid is added to adjust melter feed rheology
A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry
Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi
2017-02-25
We present an embedded ghost-fluid method for numerical solution of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are solved numerically by a second-order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high solution gradients. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing solution accuracy against implementation difficulty, are briefly discussed as well.
Food deserts in Winnipeg, Canada: a novel method for measuring a complex and contested construct
Directory of Open Access Journals (Sweden)
Joyce Slater
2017-10-01
Introduction: "Food deserts" have emerged over the past 20 years as spaces of concern for communities, public health authorities and researchers because of their potential negative impact on dietary quality and subsequent health outcomes. Food deserts are residential geographic spaces, typically in urban settings, where low-income residents have limited or no access to retail food establishments with sufficient variety at affordable cost. Research on food deserts presents methodological challenges including retail food store identification and classification, identification of low-income populations, and transportation and proximity metrics. Furthermore, the complex methods often used in food desert research can be difficult to reproduce and communicate to key stakeholders. To address these challenges, this study sought to demonstrate the feasibility of implementing a simple and reproducible method of identifying food deserts using data easily available in the Canadian context. Methods: This study was conducted in Winnipeg, Canada in 2014. Food retail establishments were identified from Yellow Pages and verified by public health dietitians. We calculated two scenarios of food deserts based on location of the lowest-income quintile population: (a) living ≥ 500 m from a national chain grocery store, or (b) living ≥ 500 m from a national chain grocery store or a full-service grocery store. Results: The number of low-income residents living in a food desert ranged from 64 574 to 104 335, depending on the scenario used. Conclusion: This study shows that food deserts affect a significant proportion of the Winnipeg population, and while concentrated in the urban core, also exist in suburban neighbourhoods. The methods utilized represent an accessible, transparent and reproducible process for identifying food deserts. These methods can be used for cost-effective, periodic surveillance and meaningful engagement with communities, retailers and policy
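The distance criterion in scenario (a) reduces to a point-to-store distance test. A minimal sketch with hypothetical coordinates, not the study's data:

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_food_desert(home, stores, threshold_m=500.0):
    """Scenario (a): residence >= 500 m from every chain grocery store."""
    return all(haversine_m(*home, *store) >= threshold_m
               for store in stores)

# Hypothetical Winnipeg-area coordinates: one store roughly 300 m east
# of the residence, so the residence is not in a food desert.
home = (49.8951, -97.1384)
stores = [(49.8951, -97.1342)]
is_desert = in_food_desert(home, stores)  # False: a store lies ~300 m away
```

Studies of this kind typically measure network (walking) distance rather than straight-line distance; the straight-line version above is the simplest reproducible variant.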
A New Efficient Analytical Method for Picolinate Ion Measurements in Complex Aqueous Solutions
Energy Technology Data Exchange (ETDEWEB)
Parazols, M.; Dodi, A. [CEA Cadarache, Lab Anal Radiochim and Chim, DEN, F-13108 St Paul Les Durance (France)
2010-07-01
This study focuses on the development of a new simple but sensitive, fast and quantitative liquid chromatography method for picolinate ion measurement in high ionic strength aqueous solutions. It involves cation separation over a chromatographic CS16 column using methane sulfonic acid as a mobile phase and detection by UV absorbance (254 nm). The CS16 column is a high-capacity stationary phase exhibiting both cation exchange and RP properties. It allows interaction with picolinate ions which are in their zwitterionic form at the pH of the mobile phase (1.3-1.7). Analysis is performed in 30 min with a detection limit of about 0.05 μM and a quantification limit of about 0.15 μM. Moreover, this analytical technique has been tested efficiently on complex aqueous samples from an effluent treatment facility. (authors)
Simple method for determining binding energies of fullerene and complex atomic negative ions
Felfli, Zineb; Msezane, Alfred
2017-04-01
A robust potential which fully embeds the vital core-polarization interaction has been used in the Regge pole method to explore low-energy electron scattering from C60, Eu and Nb through total cross section (TCS) calculations. From the characteristic, dramatically sharp resonances in the TCSs manifesting negative ion formation in these systems, we extracted the binding energies of the C60⁻, Eu⁻ and Nb⁻ anions; they are found to be in outstanding agreement with the measured electron affinities of C60, Eu and Nb. Common to these systems, as well as to the benchmark atom Au, is the formation of their ground state negative ions at the second Ramsauer-Townsend (R-T) minima of their TCSs; indeed, this is a signature of all the fullerenes and complex atoms considered thus far. Shape resonances, R-T minima and binding energies of the resultant anions are presented. This work was supported by U.S. DOE, Basic Energy Sciences, Office of Energy Research.
Method of investigation of nuclear reactions in charge-nonsymmetrical muonic complexes
Bystritsky, V M; Penkov, F M
1999-01-01
A method for experimental determination of the nuclear fusion rates in dμHe molecules in the states with J=0 and J=1 (J is the orbital moment of the system), and of the effective rate of the transition between these states (rotational transition 1-0), is proposed. It is shown that information on the desired characteristics can be found from a joint analysis of the time distributions and yields of the products of nuclear fusion reactions in deuterium-helium muonic molecules, and of muonic X-rays, obtained in experiments with the D₂+He mixture at three (or more) appreciably different densities. The planned experiments with the D₂+He mixture at the meson facility PSI (Switzerland) are optimized to gain more accurate information about the desired parameters under the assumption that different mechanisms for the 1-0 transition of the dμHe complex are realized. (author)
Dojnov, Biljana; Grujić, Marica; Vujčić, Zoran
2015-08-01
A method for zymographic detection of specific cellulases in a complex (endocellulase, exocellulase, and cellobiase) from crude fermentation extracts, after a single electrophoretic separation, is described in this paper. Cellulases were printed onto a membrane and, subsequently, substrate gel. Cellobiase isoforms were detected on the membrane using esculine as substrate, endocellulase isoforms on substrate gel with copolymerized carboxymethyl cellulose (CMC), while exocellulase isoforms were detected in electrophoresis gel with 4-methylumbelliferyl-β-d-cellobioside (MUC). This can be a useful additional tool for monitoring and control of fungal cellulase production in industrial processes and fundamental research, screening for particular cellulase producers, or testing of new lignocellulose substrates. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Three-body Coulomb breakup of 11Li in the complex scaling method
International Nuclear Information System (INIS)
Myo, Takayuki; Aoyama, Shigeyoshi; Kato, Kiyoshi; Ikeda, Kiyomi
2003-01-01
Coulomb breakup strengths of ¹¹Li into the three-body ⁹Li+n+n system are studied in the complex scaling method. We decompose the transition strengths into the contributions from three-body resonances, and from two-body "¹⁰Li+n" and three-body "⁹Li+n+n" continuum states. In the calculated results, we cannot find dipole resonances with a sharp decay width in ¹¹Li. There is a low-energy enhancement in the breakup strength, which is produced by both the two- and three-body continuum states. The enhancement given by the three-body continuum states is found to have a strong connection to the halo structure of ¹¹Li. The calculated breakup strength distribution is compared with the experimental data from MSU, RIKEN and GSI.
The application of method supplier’s complex evaluation. Case study
Directory of Open Access Journals (Sweden)
Ekaterina Chytilová
2012-01-01
The main goal of this article is to illustrate the evaluation of bidders using the Method of Complex Evaluation of suppliers (MCE). Supplier evaluation is increasingly important in supply chain management. For SMEs with discontinuous custom manufacturing, supplier evaluation at the first stage becomes a priority for maintaining and enhancing the competitiveness of their output and their overall competitiveness. This article presents the results of applying MCE: the conditions for, and limitations of, its application are examined on the basis of real enterprise data. MCE is oriented to small and medium-sized enterprises with discontinuous make-to-order manufacturing. The research focuses on the selection procedure for existing suppliers at the first stage of the supply chain. Nationality and geographic location do not affect the applicability of MCE. An illustrative case study presents the evaluation process under specific conditions and demonstrates the viability of MCE.
An efficient fringe integral equation method for optimizing the antenna location on complex bodies
DEFF Research Database (Denmark)
Jørgensen, Erik; Meincke, Peter; Breinbjerg, Olav
2001-01-01
The radiation pattern of an antenna mounted nearby, or directly on, a complex three-dimensional (3D) structure can be significantly influenced by this structure. Integral equations combined with the method of moments (MoM) provide an accurate means for calculating the scattering from the structures in such applications. The structure is then modelled by triangular or rectangular surface patches with corresponding surface current expansion functions. A MoM matrix which is independent of the antenna location can be obtained by modelling the antenna as an impressed electric or magnetic source; e.g., a slot antenna can be modelled by a magnetic Hertzian dipole. For flush-mounted antennas, or antennas mounted in close vicinity of the scattering structure, the nearby impressed source induces a highly peaked surface current on the scattering structure. For the low-order basis functions usually applied
Monitoring Freeze Thaw Transitions in Arctic Soils using Complex Resistivity Method
Wu, Y.; Hubbard, S. S.; Ulrich, C.; Dafflon, B.; Wullschleger, S. D.
2012-12-01
The Arctic region, which is a sensitive system that has emerged as a focal point for climate change studies, is characterized by a large amount of stored carbon and a rapidly changing landscape. Seasonal freeze-thaw transitions in the Arctic alter subsurface biogeochemical processes that control greenhouse gas fluxes from the subsurface. Our ability to monitor freeze-thaw cycles and associated biogeochemical transformations is critical to the development of process-rich ecosystem models, which are in turn important for gaining a predictive understanding of Arctic terrestrial system evolution and feedbacks with climate. In this study, we conducted both laboratory and field investigations to explore the use of the complex resistivity method to monitor freeze-thaw transitions of arctic soil in Barrow, AK. In the lab studies, freeze-thaw transitions were induced on soil samples having different average carbon content through exposing the arctic soil to temperature controlled environments at +4 °C and -20 °C. Complex resistivity and temperature measurements were collected using electrical and temperature sensors installed along the soil columns. During the laboratory experiments, resistivity gradually changed over two orders of magnitude as the temperature was increased or decreased between -20 °C and 0 °C. Electrical phase responses at 1 Hz showed a dramatic and immediate response to the onset of freeze and thaw. Unlike the resistivity response, the phase response was found to be exclusively related to unfrozen water in the soil matrix, suggesting that this geophysical attribute can be used as a proxy for the monitoring of the onset and progression of the freeze-thaw transitions. Spectral electrical responses contained additional information about the controls of soil grain size distribution on the freeze-thaw dynamics. Based on the demonstrated sensitivity of complex resistivity signals to the freeze-thaw transitions, field complex resistivity data were collected over
A systematic method for identifying vital areas at complex nuclear facilities.
Energy Technology Data Exchange (ETDEWEB)
Beck, David Franklin; Hockert, John
2005-05-01
Identifying the areas to be protected is an important part of the development of measures for physical protection against sabotage at complex nuclear facilities. In June 1999, the International Atomic Energy Agency published INFCIRC/225/Rev.4, 'The Physical Protection of Nuclear Material and Nuclear Facilities.' This guidance recommends that 'Safety specialists, in close cooperation with physical protection specialists, should evaluate the consequences of malevolent acts, considered in the context of the State's design basis threat, to identify nuclear material, or the minimum complement of equipment, systems or devices to be protected against sabotage.' This report presents a structured, transparent approach for identifying the areas that contain this minimum complement of equipment, systems, and devices to be protected against sabotage that is applicable to complex nuclear facilities. The method builds upon safety analyses to develop sabotage fault trees that reflect sabotage scenarios that could cause unacceptable radiological consequences. The sabotage actions represented in the fault trees are linked to the areas from which they can be accomplished. The fault tree is then transformed (by negation) into its dual, the protection location tree, which reflects the sabotage actions that must be prevented in order to prevent unacceptable radiological consequences. The minimum path sets of this fault tree dual yield, through the area linkage, sets of areas, each of which contains nuclear material, or a minimum complement of equipment, systems or devices that, if protected, will prevent sabotage. This method also provides guidance for the selection of the minimum path set that permits optimization of the trade-offs among physical protection effectiveness, safety impact, cost and operational impact.
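The fault-tree negation step described above can be sketched concretely: the cut sets of the dual tree (AND and OR gates swapped, per De Morgan's laws) are the path sets, i.e. sets of areas whose protection prevents the top event. The tiny example tree below is hypothetical, not from the report.

```python
from itertools import product

def cut_sets(node):
    """Cut sets of a fault tree given as nested tuples."""
    if node[0] == "event":
        return [{node[1]}]
    children = [cut_sets(c) for c in node[1:]]
    if node[0] == "OR":                      # any child causes the event
        return [s for sets in children for s in sets]
    # AND: one cut set from every child, unioned
    return [set().union(*combo) for combo in product(*children)]

def dualize(node):
    """Swap AND and OR gates (negation via De Morgan's laws)."""
    if node[0] == "event":
        return node
    gate = "AND" if node[0] == "OR" else "OR"
    return (gate,) + tuple(dualize(c) for c in node[1:])

def minimal(sets):
    """Keep only sets with no proper subset in the collection."""
    return [s for s in sets if not any(t < s for t in sets)]

# Toy scenario: sabotage succeeds from area3 alone, or from area1 and
# area2 together.
tree = ("OR",
        ("AND", ("event", "area1"), ("event", "area2")),
        ("event", "area3"))
path_sets = minimal(cut_sets(dualize(tree)))
print(path_sets)  # each set of areas, if protected, prevents sabotage
```

For this toy tree the minimal path sets are {area1, area3} and {area2, area3}; the method then selects among such sets by trading off protection effectiveness, safety, cost and operational impact.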
A novel method of complex evaluation of meibomian glands morphological and functional state
Directory of Open Access Journals (Sweden)
V. N. Trubilin
2014-01-01
Full Text Available A novel method that provides complex assessment of the morphological and functional state of the meibomian glands — biometry of meibomian glands — was developed. The results of complex examination (including meibomian glands biometry), correlation analysis data and clinical findings demonstrate a direct association between the objective signs of meibomian glands dysfunction (i.e., meibomian glands dysfunction by biomicroscopy, tear film break-up time / TBUT, symptomatic TBUT, compression testing), the subjective signs (patient's complaints) and the parameters of meibomian glands biometry. A high direct correlation between the biometrical index and the compression test result was revealed (p = 0.002, Spearman's rank correlation coefficient = 0.6644). Meibomian glands dysfunction is characterized by biometric parameter abnormalities, i.e., dilatation of meibomian glands orifices, decrease of distance between meibomian glands orifices, and partial or total atrophy of meibomian glands (even up to gland collapse with its visual reduction and increase of distance between the glands). The suppression of the inflammatory process and the recovery of meibomian glands secretion improve biometric parameters and result in the opening of meibomian glands orifices, liquefaction of clogs, evacuation of meibomian glands secretion, narrowing of meibomian glands orifices and increase of distance between them. The proposed method expands the armamentarium for diagnosing meibomian glands dysfunction and lipid-deficient dry eye. Meibomian glands biometry can be applied in specialized ophthalmological hospitals and outpatient departments. It is a simple procedure of short duration that does not require any special equipment or professional skills. Meibomian glands biometry makes it possible to prescribe pathogenetically targeted therapy and to improve quality of life.
A numerical calculation method for flow discretisation in complex geometry with body-fitted grids
International Nuclear Information System (INIS)
Jin, X.
2001-04-01
A numerical calculation method based on body-fitted grids is developed in this work for computational fluid dynamics in complex geometry. The method solves the conservation equations in a general nonorthogonal coordinate system which matches the curvilinear boundary. The nonorthogonal, patched grid is generated by a grid generator which solves algebraic equations; its geometrical data are made available to the method through an interface. The conservation equations are transformed from the Cartesian system to a general curvilinear system, keeping the physical Cartesian velocity components as dependent variables. Using a staggered arrangement of variables, the three Cartesian velocity components are defined on every cell surface. Thus the coupling between pressure and velocity is ensured, and numerical oscillations are avoided. The contravariant velocity used for calculating the mass flux on a cell surface results from the dependent Cartesian velocity components. After discretisation and linear interpolation, a three-dimensional 19-point pressure equation is obtained. Using explicit treatment of the cross-derivative terms, it reduces to the usual 7-point equation. Under the same data and process structure, this method is compatible with the code FLUTAN using Cartesian coordinates. In order to verify this method, several laminar flows are simulated in orthogonal grids at tilted space directions and in nonorthogonal grids with variations of cell angles. The simulated cases include various duct flows, transient heat conduction, natural convection in a chimney and natural convection in cavities. The results achieve very good agreement with analytical solutions or empirical data. Convergence for highly nonorthogonal grids is obtained. After the successful validation of this method, it is applied to a reactor safety case: a transient natural convection flow for an optional sump cooling concept, SUCO, is simulated. The numerical result is comparable with the
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it treats the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in
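The process-level idea can be sketched as perturbing each process's contribution by a fixed factor and recording the normalized change in model output. The two-process toy model below is an invented stand-in for a real ecosystem model.

```python
# Hedged sketch of process-level sensitivity analysis: scale each process
# by (1 + delta) and measure the normalized output response. The toy model
# (a growth and a decay process) is hypothetical.

def toy_model(scale):
    """Toy two-process model: output = growth process - decay process."""
    growth = scale["growth"] * 10.0
    decay = scale["decay"] * 2.0
    return growth - decay

def process_sensitivity(model, processes, delta=0.1):
    """Normalized output change per unit relative process perturbation."""
    base = model({p: 1.0 for p in processes})
    sens = {}
    for p in processes:
        scale = {q: 1.0 for q in processes}
        scale[p] = 1.0 + delta
        sens[p] = (model(scale) - base) / (base * delta)
    return sens

print(process_sensitivity(toy_model, ["growth", "decay"]))
```

Here the growth process dominates the output (sensitivity 1.25 vs -0.25), which is the kind of ranking the proposed method would use to focus attention on the most influential processes.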
A FEM-based method to determine the complex material properties of piezoelectric disks.
Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C
2014-08-01
Numerical simulations allow the modeling of piezoelectric devices and ultrasonic transducers. However, the accuracy of the results is limited by precise knowledge of the elastic, dielectric and piezoelectric properties of the piezoelectric material. To introduce the energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists of obtaining the material properties that minimize the error between experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter over a set of resonant modes. Sensitivity results are used to implement a preliminary algorithm approaching the solution in order to avoid the search being trapped in a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both the electrical impedance curve and the displacement profile over the disk surface. The agreement between numerical and experimental displacement profiles shows that, although only the electrical impedance curve is
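The identification principle — choose the lossy parameter that minimizes the misfit between measured and modelled impedance curves — can be sketched with a one-mode toy resonator in place of the FEM model. The model, the "measured" data and the search range are all synthetic assumptions.

```python
import math

# Hedged sketch of impedance-curve fitting: a toy one-mode resonator whose
# quality factor q plays the role of the imaginary (loss) part of the
# material properties. "Measured" data are synthesized with q = 80.

def impedance(f, f0=1.0e6, q=50.0):
    """|Z| of a toy resonator around resonance frequency f0."""
    x = f / f0
    return abs(1.0 / complex(1.0 - x * x, x / q))

freqs = [0.9e6 + i * 1e4 for i in range(21)]         # sweep across f0
measured = [impedance(f, q=80.0) for f in freqs]     # stands in for experiment

def misfit(q):
    """Sum of squared errors between model and 'measured' curves."""
    return sum((impedance(f, q=q) - m) ** 2 for f, m in zip(freqs, measured))

best_q = min(range(10, 151), key=misfit)             # brute-force search
print(best_q)  # → 80, the loss parameter used to synthesize the data
```

The real method replaces the toy resonator with a FEM model, the brute-force scan with a sensitivity-guided optimization, and fits several complex constants simultaneously.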
Directory of Open Access Journals (Sweden)
Chiara Biscarini
2013-01-01
Full Text Available The numerical simulation of fast-moving fronts originating from dam or levee breaches is a challenging task for small scale engineering projects. In this work, the use of fully three-dimensional Navier-Stokes (NS) equations and the lattice Boltzmann method (LBM) is proposed for testing the validity of, respectively, macroscopic and mesoscopic mathematical models. Macroscopic simulations are performed employing an open-source computational fluid dynamics (CFD) code that solves the NS combined with the volume of fluid (VOF) multiphase method to represent free-surface flows. The mesoscopic model is a front-tracking experimental variant of the LBM. In the proposed LBM the air-gas interface is represented as a surface with zero thickness that handles the passage of the density field from the light to the dense phase and vice versa. A single set of LBM equations represents the liquid phase, while the free surface is characterized by an additional variable, the liquid volume fraction. Case studies show advantages and disadvantages of the proposed LBM and NS with specific regard to the computational efficiency and accuracy in dealing with the simulation of flows through complex geometries. In particular, the validation of the model application is developed by simulating the flow propagating through a synthetic urban setting and comparing results with analytical and experimental laboratory measurements.
A method for the determination of ascorbic acid using the iron(II)-pyridine-dimethylglyoxime complex
Energy Technology Data Exchange (ETDEWEB)
Arya, S. P.; Mahajan, M. [Haryana, Kurukshetra Univ. (India). Dept. of Chemistry
1998-05-01
A simple and rapid spectrophotometric method for the determination of ascorbic acid is proposed. Ascorbic acid reduces iron(III) to iron(II), which forms a red colored complex with dimethylglyoxime in the presence of pyridine. The absorbance of the resulting solution is measured at 514 nm, and a linear relationship between absorbance and concentration of ascorbic acid is observed up to 14 μg ml⁻¹. Studies on the interference of substances usually associated with ascorbic acid have been carried out, and the applicability of the method has been tested by analysing pharmaceutical preparations of vitamin C.
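The quantitation step behind such a method is an ordinary least-squares calibration of absorbance against concentration within the stated linear range. The calibration points below are invented, idealized Beer-Lambert data, not values from the paper.

```python
# Sketch of the linear calibration underlying the spectrophotometric method:
# absorbance at 514 nm vs ascorbic acid concentration (ug/ml), fitted by
# ordinary least squares. Data are hypothetical.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [2, 4, 6, 8, 10, 12, 14]          # ug/ml, inside the linear range
absorb = [0.05 * c for c in conc]        # idealized absorbance readings
slope, intercept = linfit(conc, absorb)

unknown = (0.35 - intercept) / slope     # back-calculate a sample's conc.
print(round(unknown, 3))                 # ~7.0 ug/ml for A = 0.35
```

A real validation would additionally report the correlation coefficient of the fit and check recovery against interfering substances.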
HS-GC-MS method for the analysis of fragrance allergens in complex cosmetic matrices.
Desmedt, B; Canfyn, M; Pype, M; Baudewyns, S; Hanot, V; Courselle, P; De Beer, J O; Rogiers, V; De Paepe, K; Deconinck, E
2015-01-01
Potential allergenic fragrances are part of the Cosmetic Regulation with labelling and concentration restrictions. This means that they have to be declared on the ingredients list, when their concentration exceeds the labelling limit of 10 ppm or 100 ppm for leave-on or rinse-off cosmetics, respectively. Labelling is important regarding consumer safety. In this way, sensitised people towards fragrances might select their products based on the ingredients list to prevent elicitation of an allergic reaction. It is therefore important to quantify potential allergenic ingredients in cosmetic products. An easy to perform liquid extraction was developed, combined with a new headspace GC-MS method. The latter was capable of analysing 24 volatile allergenic fragrances in complex cosmetic formulations, such as hydrophilic (O/W) and lipophilic (W/O) creams, lotions and gels. This method was successfully validated using the total error approach. The trueness deviations for all components were smaller than 8%, and the expectation tolerance limits did not exceed the acceptance limits of ± 20% at the labelling limit. The current methodology was used to analyse 18 cosmetic samples that were already identified as being illegal on the EU market for containing forbidden skin whitening substances. Our results showed that these cosmetic products also contained undeclared fragrances above the limit value for labelling, which imposes an additional health risk for the consumer. Copyright © 2014 Elsevier B.V. All rights reserved.
A novel pre-oxidation method for elemental mercury removal utilizing a complex vaporized absorbent
Energy Technology Data Exchange (ETDEWEB)
Zhao, Yi, E-mail: zhaoyi9515@163.com; Hao, Runlong; Guo, Qing
2014-09-15
Graphical abstract: - Highlights: • An innovative liquid-phase complex absorbent (LCA) for Hg⁰ removal was prepared. • A novel integrative process for Hg⁰ removal was proposed. • The simultaneous removal efficiencies of SO₂, NO and Hg⁰ were 100%, 79.5% and 80.4%, respectively. • The reaction mechanism of simultaneous removal of SO₂, NO and Hg⁰ was proposed. - Abstract: A novel semi-dry integrative method for elemental mercury (Hg⁰) removal has been proposed in this paper, in which Hg⁰ was initially pre-oxidized by a vaporized liquid-phase complex absorbent (LCA) composed of a Fenton reagent, peracetic acid (CH₃COOOH) and sodium chloride (NaCl), after which Hg²⁺ was absorbed by the resultant Ca(OH)₂. The experimental results indicated that CH₃COOOH and NaCl were the best additives for Hg⁰ oxidation. Among the influencing factors, the pH of the LCA and the adding rate of the LCA significantly affected the Hg⁰ removal. The coexisting gases, SO₂ and NO, either enhanced or inhibited the removal process, depending on their concentrations. Under optimal reaction conditions, the efficiency for the single removal of Hg⁰ was 91%. Under identical conditions, the efficiencies of the simultaneous removal of SO₂, NO and Hg⁰ were 100%, 79.5% and 80.4%, respectively. Finally, the reaction mechanism for the simultaneous removal of SO₂, NO and Hg⁰ was proposed based on the characteristics of the removal products as determined by X-ray diffraction (XRD), atomic fluorescence spectrometry (AFS), the analysis of the electrode potentials, and data from related research references.
Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis
2008-08-01
The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A numerical method for solving the 3D unsteady incompressible Navier-Stokes equations in curvilinear domains with complex immersed boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions the FSI algorithm is unconditionally unstable even when strong coupling FSI is employed. For such cases, however, combining the strong coupling iteration with under-relaxation in conjunction with the Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the FSI
A method for developing standardised interactive education for complex clinical guidelines
Directory of Open Access Journals (Sweden)
Vaughan Janet I
2012-11-01
Full Text Available Abstract Background Although systematic use of the Perinatal Society of Australia and New Zealand internationally endorsed Clinical Practice Guideline for Perinatal Mortality (PSANZ-CPG) improves health outcomes, implementation is inadequate. Its complexity is a feature known to be associated with non-compliance. Interactive education is effective as a guideline implementation strategy, but lacks an agreed definition. SCORPIO is an educational framework containing interactive and didactic teaching, but has not previously been used to implement guidelines. Our aim was to transform the PSANZ-CPG into an education workshop to develop quality standardised interactive education acceptable to participants for learning skills in collaborative interprofessional care. Methods The workshop was developed using the construct of an educational framework (SCORPIO), the PSANZ-CPG, a transformation process and tutor training. After a pilot workshop with key target and stakeholder groups, modifications were made to this and subsequent workshops based on multisource written observations from interprofessional participants, tutors and an independent educator. This participatory action research process was used to monitor acceptability and educational standards. Standardised interactive education was defined as the attainment of content and teaching standards. Quantitative analysis of positive feedback, expressed as a percentage of total feedback, was used to derive a total quality score. Results Eight workshops were held with 181 participants and 15 different tutors. Five versions resulted from the action research methodology. Thematic analysis of multisource observations identified eight recurring education themes or quality domains used for standardisation. The two content domains were curriculum and alignment with the guideline; the six teaching domains were overload, timing, didacticism, relevance, reproducibility and participant engagement. Engagement was the most
Nicholl, Jon; Jacques, Richard M; Campbell, Michael J
2013-10-29
Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
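The proposed direct risk standardisation (DRS) reduces to three steps: assign each admission to a risk category from the casemix model, compute category-specific event rates per centre, and take a weighted sum using reference weights. A minimal sketch with invented data (two risk categories, one centre):

```python
# Hedged sketch of direct risk standardisation (DRS). Risk categories,
# weights and admissions are invented toy data.

def drs_rate(admissions, weights):
    """Weighted sum of risk-category-specific event rates.

    admissions: list of (risk_category, event) tuples for one centre,
    where event is 1 if the outcome (e.g. death) occurred, else 0.
    weights: reference-population weight for each risk category (sums to 1).
    """
    rate = 0.0
    for cat, w in weights.items():
        events = [e for c, e in admissions if c == cat]
        rate += w * sum(events) / len(events)   # category-specific rate
    return rate

weights = {"low": 0.7, "high": 0.3}             # reference population mix
centre_a = ([("low", 0)] * 90 + [("low", 1)] * 10 +
            [("high", 0)] * 6 + [("high", 1)] * 4)
print(drs_rate(centre_a, weights))  # 0.7*0.10 + 0.3*0.40 = 0.19
```

With real data the categories would come from the large casemix model (thousands of combinations binned into risk strata), and the same weights would be applied to every centre so the comparison is fair.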
Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes
Piatek, Marek J.
2013-07-12
Background: Initiation of transcription is essential for most of the cellular responses to environmental conditions and for cell and tissue specificity. This process is regulated through numerous proteins, their ligands and mutual interactions, as well as interactions with DNA. The key such regulatory proteins are transcription factors (TFs) and transcription co-factors (TcoFs). TcoFs are important since they modulate the transcription initiation process through interaction with TFs. In eukaryotes, transcription requires that TFs form different protein complexes with various nuclear proteins. To better understand transcription regulation, it is important to know the functional class of proteins interacting with TFs during transcription initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only sequence composition of the interacting proteins, the functional class of human TF binding partners to be (i) TF, (ii) TcoF, or (iii) other nuclear protein. This allows for complementing the annotation of the currently known pool of nuclear proteins. Since only the knowledge of protein sequences is required in addition to protein interaction, the method should be easily applicable to many species. Results: Based on experimentally validated interactions between human TFs with different TFs, TcoFs and other nuclear proteins, our two classification systems (implemented as a web-based application) achieve high accuracies in distinguishing TFs and TcoFs from other nuclear proteins, and TFs from TcoFs respectively. Conclusion: As demonstrated, given the fact that two proteins are capable of forming direct physical interactions and using only information about their sequence composition, we have developed a completely new method for predicting a functional class of TF interacting protein partners
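The sequence-composition representation such classifiers rely on can be sketched simply: each protein becomes a fixed-length vector of amino-acid fractions. The sequence below is a toy example; a real system would feed such vectors (for both interaction partners) into a trained classifier.

```python
# Hedged sketch of a sequence-composition feature vector: the fraction of
# each of the 20 standard amino acids in a protein sequence. The sequence
# is invented for illustration.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino-acid composition vector (fractions sum to 1)."""
    n = len(seq)
    return [seq.count(a) / n for a in AMINO_ACIDS]

vec = composition("MKKLLPTAG")
print(len(vec), round(sum(vec), 6))  # → 20 1.0
```

Because only sequence (plus the fact of interaction) is needed, the same featurization transfers directly to other species, as the abstract notes.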
Energy Technology Data Exchange (ETDEWEB)
Giffard, F.X
2000-05-19
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
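The variance-reduction principle behind such deterministic-importance biasing can be illustrated with a toy problem: sample particle path lengths from a biased (stretched) distribution so that many more histories reach the rare region, and correct each score with a statistical weight. The slab-transmission problem below merely stands in for the real coupled ERANOS/TRIPOLI-4 scheme.

```python
import math
import random

# Illustrative importance-sampling sketch: estimate P(x > THICKNESS) for
# x ~ Exp(1) (transmission through a thick slab, in mean free paths) by
# sampling from the biased density b*exp(-b*x) and weighting back.

random.seed(1)
THICKNESS = 10.0            # exact answer: exp(-10) ~ 4.5e-5

def biased_estimate(n, bias=0.2):
    """Unbiased estimate of P(x > THICKNESS) using biased sampling."""
    total = 0.0
    for _ in range(n):
        x = random.expovariate(bias)          # stretched path length
        if x > THICKNESS:
            # weight = true density / biased density
            total += math.exp(-x) / (bias * math.exp(-bias * x))
    return total / n

estimate = biased_estimate(50_000)
print(estimate, math.exp(-THICKNESS))  # estimate vs exact answer
```

An analogue (unbiased) simulation would need on the order of exp(10) ≈ 22,000 histories per scoring event; here a large fraction of histories score, which is the source of the speed-up factors reported above. In the real method, the bias is not a single constant but is driven by a space- and energy-dependent importance map from the deterministic code.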
Directory of Open Access Journals (Sweden)
GEORGE DARLOS A. AQUINO
2011-06-01
Full Text Available Triamcinolone (TRI), a drug widely used in the treatment of ocular inflammatory diseases, is practically insoluble in water, which limits its use in eye drops. Cyclodextrins (CDs) have been used to increase the solubility or dissolution rate of drugs. The purpose of the present study was to validate a UV-Vis spectrophotometric method for quantitative analysis of TRI in inclusion complexes with beta-cyclodextrin (B-CD) associated with triethanolamine (TEA) (ternary complex). The proposed analytical method was validated with respect to the parameters established by the Brazilian regulatory National Agency of Sanitary Monitoring (ANVISA). The analytical measurements of absorbance were made at 242 nm, at room temperature, in a 1-cm path-length cuvette. The precision and accuracy studies were performed at five concentration levels (4, 8, 12, 18 and 20 μg.mL-1). The B-CD associated with TEA did not provoke any alteration in the photochemical behavior of TRI. The results for the measured analytical parameters showed the success of the method. The standard curve was linear (r2 > 0.999) in the concentration range from 2 to 24 μg.mL-1. The method achieved good precision levels in the inter-day (relative standard deviation, RSD <3.4%) and reproducibility (RSD <3.8%) tests. The accuracy was about 80%, and the pH changes introduced in the robustness study did not reveal any relevant interference at any of the studied concentrations. The experimental results demonstrate a simple, rapid and affordable UV-Vis spectrophotometric method that could be applied to the quantitation of TRI in this ternary complex. Keywords: Validation. Triamcinolone. Beta-cyclodextrin. UV-Vis spectrophotometry. Ternary complexes.
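The linearity (r²) and precision (RSD) figures reported above can be computed for any calibration data set; a minimal sketch with hypothetical absorbance readings (not the study's data):

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. absorbance at 242 nm.
# Values are illustrative only, not taken from the study.
conc = np.array([2.0, 4.0, 8.0, 12.0, 18.0, 24.0])
absorbance = np.array([0.085, 0.168, 0.340, 0.512, 0.765, 1.021])

# Least-squares fit of the standard curve and its coefficient of determination.
slope, intercept = np.polyfit(conc, absorbance, 1)
pred = slope * conc + intercept
ss_res = np.sum((absorbance - pred) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Relative standard deviation (RSD, %) of replicate readings at one level.
replicates = np.array([0.510, 0.515, 0.508, 0.512, 0.511])
rsd = 100.0 * replicates.std(ddof=1) / replicates.mean()

print(f"r2 = {r2:.4f}, RSD = {rsd:.2f}%")
```

A calibration would be accepted under the criteria above when r² exceeds 0.999 and the RSD stays below the stated limits.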
Stiffeners in variational-difference method for calculating shells with complex geometry
Directory of Open Access Journals (Sweden)
Ivanov Vyacheslav Nikolaevich
2014-05-01
Full Text Available We have already considered the introduction of reinforcements into the variational-difference method (VDM) of analysis of shells with complex shape. At the moment, only ribbed shells of revolution and shallow shells can be calculated with the help of the developed analytical and finite-difference methods. Ribbed shells of arbitrary shape can be calculated only using the finite element method (FEM). However, there are problems when using FEM which are absent in finite- and variational-difference methods: rigid-body motion; conforming trial functions; parameterization of a surface; independent stress-strain state. In this regard, stiffeners are introduced into VDM. VDM is based on the Lagrange principle, the principle of minimum total potential energy. The stress-strain state of the ribs is described by the Kirchhoff-Clebsch theory of curvilinear bars: tension, bending and torsion of the ribs are taken into account. The stress-strain state of the shell is described by the Kirchhoff-Love theory of thin elastic shells. The position of points of the middle surface is defined by curvilinear orthogonal coordinates α, β. Curved ribs are situated along coordinate lines. The strain energy of the ribs is added to the strain energy of the shell to account for the ribs. A matrix form of the strain energy of the ribs is formed similarly to the matrix form of the strain energy of the shell. A matrix of geometrical characteristics of a rib is formed from components of the matrices of geometric characteristics of the shell. A matrix of mechanical characteristics of a rib contains the rib's eccentricity and the geometrical characteristics of the rib's section. Derivatives of displacements in the strain vector are replaced with finite-difference relations after the middle surface of the shell is covered with a grid (grid lines coincide with the coordinate lines of principal curvatures). In this case the total potential energy functional becomes a function of nodal strain displacements. Partial derivatives of unknown nodal displacements are
Simulations of Turbulent Flow Over Complex Terrain Using an Immersed-Boundary Method
DeLeon, Rey; Sandusky, Micah; Senocak, Inanc
2018-02-01
We present an immersed-boundary method to simulate high-Reynolds-number turbulent flow over the complex terrain of Askervein and Bolund Hills under neutrally-stratified conditions. We reconstruct both the velocity and the eddy-viscosity fields in the terrain-normal direction to produce turbulent stresses as would be expected from the application of a surface-parametrization scheme based on Monin-Obukhov similarity theory. We find that it is essential to be consistent in the underlying assumptions for the velocity reconstruction and the eddy-viscosity relation to produce good results. To this end, we reconstruct the tangential component of the velocity field using a logarithmic velocity profile and adopt the mixing-length model in the near-surface turbulence model. We use a linear interpolation to reconstruct the normal component of the velocity to enforce the impermeability condition. Our approach works well for both the Askervein and Bolund Hills when the flow is attached to the surface, but shows slight disagreement in regions of flow recirculation, despite capturing the flow reversal.
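The terrain-normal reconstruction of the tangential velocity from a logarithmic profile can be sketched as follows; the friction velocity, roughness length and node heights are illustrative assumptions, not values from the paper:

```python
import math

def log_law_velocity(u_tau, z, z0, kappa=0.41):
    """Tangential velocity at height z above a rough surface from the
    logarithmic law of the wall. All parameter values used below are
    illustrative; the paper's exact reconstruction procedure may differ."""
    return (u_tau / kappa) * math.log(z / z0)

def reconstruct_tangential(u_known, z_known, z_ib, z0, kappa=0.41):
    """Infer the friction velocity from one resolved node at height z_known,
    then evaluate the log law at the immersed-boundary node height z_ib."""
    u_tau = u_known * kappa / math.log(z_known / z0)
    return log_law_velocity(u_tau, z_ib, z0, kappa)

# Example: resolved velocity 8 m/s at 10 m above terrain, roughness length
# 0.03 m, reconstructed tangential velocity at a near-surface node at 1 m.
u_ib = reconstruct_tangential(8.0, 10.0, 1.0, 0.03)
```

The normal velocity component, by contrast, is reconstructed with plain linear interpolation so that impermeability is enforced at the surface.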
Photoluminescent BaMoO4 nanopowders prepared by complex polymerization method (CPM)
International Nuclear Information System (INIS)
Azevedo Marques, Ana Paula de; Melo, Dulce M.A. de; Paskocimas, Carlos A.; Pizani, Paulo S.; Joya, Miryam R.; Leite, Edson R.; Longo, Elson
2006-01-01
The BaMoO4 nanopowders were prepared by the Complex Polymerization Method (CPM). The structural properties of the BaMoO4 powders were characterized by FTIR transmittance spectra, X-ray diffraction (XRD), Raman spectra, photoluminescence (PL) spectra and high-resolution scanning electron microscopy (HR-SEM). The XRD, FTIR and Raman data showed that BaMoO4 at 300 deg. C was disordered. At 400 deg. C and higher temperatures, crystalline scheelite-type BaMoO4 phases could be identified, without the presence of additional phases, according to the XRD, FTIR and Raman data. The average crystallite sizes calculated from XRD, around 40 nm, showed a tendency to increase with temperature. The crystallite sizes obtained by HR-SEM were around 40-50 nm. The sample that presented the highest intensity of the red emission band was the one heat treated at 400 deg. C for 2 h, and the sample that displayed the highest intensity of the green emission band was the one heat treated at 700 deg. C for 2 h. The CPM was shown to be a low-cost route for the production of BaMoO4 nanopowders, with the advantages of lower temperature, shorter time and reduced cost. The optical properties observed for BaMoO4 nanopowders suggest that this material is a highly promising candidate for photoluminescent applications
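Crystallite sizes of this kind are conventionally estimated from XRD line broadening with the Scherrer equation; a sketch with hypothetical peak parameters (the paper does not report its peak widths):

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size (nm) from XRD peak broadening via the Scherrer
    equation D = K*lambda / (beta * cos(theta)). The numbers used below
    are illustrative, not measured values from the study."""
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle theta = 2theta/2
    return k * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation (0.15406 nm), a hypothetical peak of 0.2 deg FWHM
# at 2theta = 26.5 deg, yielding a size of roughly 40 nm.
d = scherrer_size(0.15406, 0.2, 26.5)
```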
THE COMPLEX ANALYSIS METHOD OF SEMANTIC ASSOCIATIONS IN STUDYING THE STUDENTS’ CREATIVE ETHOS
Directory of Open Access Journals (Sweden)
P. A. Starikov
2013-01-01
Full Text Available The paper demonstrates the sociological research findings concerning the students' ideas of creativity, based on questionnaires and testing of students of natural science, humanities and technical profiles at Siberian Federal University over the period 2007-2011. The author suggests a new method of semantic association analysis in order to identify the latent groups of notions related to the concept of creativity. The range of students' common opinions demonstrates an obvious trend towards humanizing the idea of creativity, considering it as the perfect mode of human existence, which coincides with the ideas of K. Rogers, A. Maslow and other scholars. Today's students associate creativity primarily with pleasure, self-development, self-expression, inspiration, improvisation and spontaneity; the resulting semantic complex incorporates such characteristics of creative work as goodness, abundance of energy, integrity, health, freedom and independence, self-development and spirituality. The obtained data prove the importance of the inspiration experience in creative pedagogy; the research outcomes, along with continuing monitoring of students' attitudes to creativity development, can optimize the learning process. The author emphasizes the necessity of introducing special courses, based on an integral approach (including social, philosophical, psychological, psycho-social and technical aspects) and aimed at developing students' creative competence.
Özen, Hamit; Turan, Selahattin
2017-01-01
This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…
A method for the preparation of lipophilic macrocyclic technetium-99m complexes
International Nuclear Information System (INIS)
Troutner, D.E.; Volkert, W.A.
1991-01-01
A procedure for the preparation of technetium complexes applicable as diagnostic radiopharmaceuticals is presented and documented with 27 examples. Technetium-99m is reacted with a suitable complexant selected from the class of alkylene amine oximes containing 2 or 3 carbon atoms in the alkylene group. The lipophilic macrocyclic complexes possess an amine, amide, carboxy, carboxy ester, hydroxy or alkoxy group or a suitable electron acceptor group. (M.D.). 7 tabs
International Nuclear Information System (INIS)
Kostromina, N.A.; Kholodnaya, G.S.; Tananaeva, N.N.; Beloshitskij, N.V.; Kirillov, A.I.
1979-01-01
Absorption spectra in the Ce-EDDA (E4-) and Ce-EDTA (B4-) systems are studied; here E2- and B4- denote the EDDA and EDTA anions, respectively. The spectra are decomposed into individual Gaussian bands related to the different complexes. Formation of normal complexes of composition 1:1 and 1:2 with EDDA and EDTA is established, and their stability constants are determined: lgK(CeE) = 7.66 ± 0.03; lgK(CeE2) = 4.75 ± 0.06; lgK(CeB) = 16.66 ± 0.07. It is established that band additivity is observed in the absorption spectra upon formation of complexes of more complex composition
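Given the stepwise stability constants reported above, the speciation of cerium among the free ion, CeE and CeE2 at a chosen free-ligand concentration follows directly; a sketch (the ligand concentration is illustrative):

```python
def speciation(lg_k1, lg_k2, free_ligand):
    """Mole fractions of Ce, CeE and CeE2 at a given free-ligand
    concentration (mol/L), from stepwise stability constants lgK1, lgK2.
    Relative concentrations scale as 1 : K1*L : K1*K2*L^2."""
    k1 = 10.0 ** lg_k1
    k2 = 10.0 ** lg_k2
    terms = [1.0, k1 * free_ligand, k1 * k2 * free_ligand ** 2]
    total = sum(terms)
    return [t / total for t in terms]

# Using the reported Ce-EDDA constants (lgK1 = 7.66, lgK2 = 4.75) at an
# illustrative free-ligand concentration of 1e-6 mol/L.
free_ce, ce_e, ce_e2 = speciation(7.66, 4.75, 1e-6)
```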
Energy Technology Data Exchange (ETDEWEB)
Bonnet, C
2006-07-15
New cyclic ligands derived from sugars and amino acids form a scaffold carrying a coordination sphere of oxygen atoms suitable to complex Ln(III) ions. In spite of their rather low molecular weights, the complexes display surprisingly high relaxivity values, especially at high field. The ACX and BCX ligands, which are acidic derivatives of modified α- and β-cyclodextrins, form mono- and bimetallic complexes with Ln(III). The LnACX and LnBCX complexes show affinities towards Ln(III) similar to those of tri-acidic ligands. In the bimetallic Lu2ACX complex, the cations are deeply embedded in the cavity of the ligand, as shown by the X-ray structure. In aqueous solution, the number of water molecules coordinated to the cation in the LnACX complex depends on the nature and concentration of the alkali ions of the supporting electrolyte, as shown by luminescence and relaxometric measurements. There is only one water molecule coordinated in the LnBCX complex, which enables us to highlight an important second-sphere contribution to relaxivity. The NMR study of the RAFT peptidic ligand shows the complexation of Ln(III), with an affinity similar to those of natural ligands derived from calmodulin. The relaxometric study also shows an important second-sphere contribution to relaxivity. To better understand the intricate molecular factors affecting relaxivity, we developed new relaxometric methods based on probe solutes. These methods allow us to determine the charge of the complex, weak affinity constants, trans-metallation constants, and the electronic relaxation rate. (author)
Tracing the development of complex problems and the methods of its information support
International Nuclear Information System (INIS)
Belenki, A.; Ryjov, A.
1999-01-01
This article is dedicated to the development of a technology for information monitoring of complex problems such as IAEA safeguards tasks. The main purpose of this technology is to create human-machine systems for monitoring problems with complex subject areas such as political science, social science, business, ecology, etc. (author)
Study of substitution reactions of ligands in VO2+ complexes in toluene solutions by ESR method
International Nuclear Information System (INIS)
Lundkvist, R.; Panfilov, A.T.; Kalinichenko, N.B.; Marov, I.N. (AN SSSR, Moscow. Inst. Geokhimii i Analiticheskoj Khimii)
1976-01-01
The kinetics and equilibrium of stepwise substitution of ligands have been investigated at different temperatures for the complexes of oxovanadium(IV) with salicylaldoxime, 8-hydroxyquinoline, acetylacetone, benzoylacetone and thenoyltrifluoroacetone. The relative complexing ability of these ligands in toluene has been studied. The spin-Hamiltonian parameters of the EPR spectra of the VO2+ complexes have been determined. The equilibrium constants, rate constants and activation energies have been found for the substitution reactions of ligands in the complexes VOA2: VOA2 + HB = VOAB + HA; VOAB + HB = VOB2 + HA, where HA and HB are ligands with different donor atoms. Mixed complexes of the general formula VOAB have been detected, where HA is salicylaldoxime or 8-hydroxyquinoline and HB is a β-diketone
Li, Biao; Zhang, Lei; Sun, Hao; Shen, Steve G F; Wang, Xudong
2014-03-01
In bimaxillary orthognathic surgery, the positioning of the maxilla and the mandible is typically accomplished via the 2-splint technique, which may be a source of several types of inaccuracy. To overcome the limitations of the 2-splint technique, we developed a new navigation method which guides the surgeon to reposition the maxillomandibular complex free-hand as a whole intraoperatively, without the intermediate splint. In this preliminary study, its feasibility was demonstrated. Five patients with dental maxillofacial deformities were enrolled. Before the surgery, 3-dimensional planning was conducted and imported into a navigation system. During the operation, a tracker was connected to the osteotomized maxillomandibular complex via a splint. The navigation system tracked the movement of the complex and displayed it on the screen in real time to guide the surgeon in repositioning the complex. The postoperative result was compared with the plan by analyzing the measured distances between the maxillary landmarks and reference planes, as determined from computed tomography data. The mean absolute errors of the maxillary position were clinically acceptable (<1.0 mm). Preoperative preparation time was reduced to 100 minutes on average. All patients were satisfied with the aesthetic results. This navigation method without intraoperative image registration provided a feasible means of transferring virtual planning to the real orthognathic surgery. The real-time position of the maxillomandibular complex was displayed on a monitor to visually guide the surgeon in repositioning the complex. In this method, the traditional model surgery and the intermediate splint were discarded, and the preoperative preparation was simplified.
Corrosion behaviours of the dental magnetic keeper complexes made by different alloys and methods.
Wu, Min-Ke; Song, Ning; Liu, Fei; Kou, Liang; Lu, Xiao-Wen; Wang, Min; Wang, Hang; Shen, Jie-Fei
2016-09-29
The keeper and cast dowel-coping, as primary components of a magnetic attachment, are easily subjected to corrosion in a wet environment such as the oral cavity, with its electrolyte-rich saliva, complex microflora, chewing behaviour and so on. The objective of this in vitro study was to examine the corrosion resistance of dowel-and-coping-keeper complexes fabricated from a finished keeper and three alloys (cobalt-chromium, CoCr; silver-palladium-gold, PdAu; gold-platinum, AuPt) using a laser-welding process and a casting technique. The surface morphology characteristics and microstructures of the samples were examined by means of a metallographic microscope and a scanning electron microscope (SEM). Energy-dispersive spectroscopy (EDS) with SEM provided element-analysis information for the test samples after an etching test in 10% oxalic acid solution. Tafel polarization curve recordings demonstrated parameter values indicating corrosion of the samples when subjected to electrochemical testing. This study suggests that massive oxides are attached to the surface of the CoCr-keeper complex but not to the AuPt-keeper complex. Only the keeper area of the cast CoCr-keeper complex displayed obvious intergranular corrosion and changes in the Fe and Co elements. Both the cast and laser-welded AuPt-keeper complexes had the highest free corrosion potential, followed by the PdAu-keeper complex. We conclude that although the corrosion resistance of the CoCr-keeper complex was the worst, the passive film on the keeper surface was preserved to its maximum extent. The laser-welded CoCr- and PdAu-keeper complexes possessed superior corrosion resistance as compared with their cast counterparts, but no significant difference was found between the cast and laser-welded AuPt-keeper complexes. The Fe-poor, Cr-rich band appearing on the edge of the keeper during casting has been proven to be a corrosion-prone area.
Zagroba, Marek; Gawryluk, Dorota
2017-12-01
which they were reconstructed afterwards. In consequence, some elements of the original town master plans have been lost. Revitalisation is an approach whose aim is to improve the quality of space and the ability of inner town areas to function. Revitalisation goes beyond the purely spatial factors, and involves broadly understood economic and social considerations. The conclusions drawn from this research pertain to benefits of using the revitalisation method in planning a sustainable development of urban structures. The development and implementation of revitalisation programmes is a very complex process that takes many years and requires an integrated and interdisciplinary team effort. This method allows us to preserve the identity of historic town areas while enabling them to play functions in the contemporary life of a town.
Erdi, Peter
2008-01-01
This book explains why complex systems research is important in understanding the structure, function and dynamics of complex natural and social phenomena. Readers will learn the basic concepts and methods of complex system research.
Directory of Open Access Journals (Sweden)
Wei Chen
2017-01-01
Full Text Available Automated tool trajectory planning for spray-painting robots is still a challenging problem, especially for large complex curved surfaces. This paper presents a new method of trajectory optimization for spray-painting robots based on the exponential mean Bézier method. The definition and three theorems of exponential mean Bézier curves are discussed. Then a spatial painting path generation method based on exponential mean Bézier curves is developed. A new simple algorithm for trajectory optimization on complex curved surfaces is introduced, with a golden-section search adopted to compute the optimal values. The experimental results illustrate that the exponential mean Bézier curves enhanced the flexibility of path planning, and the trajectory optimization algorithm achieved satisfactory performance. This method can also be extended to other applications.
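Golden-section search, the 1-D optimization named above, can be sketched as follows; the example objective is illustrative, not the paper's painting-trajectory cost:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal function f on [a, b] by
    golden-section search: each iteration shrinks the bracket by the
    factor 1/phi ~ 0.618, reusing one interior evaluation point."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                      # keep [a, d]; old c becomes new d
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                      # keep [c, b]; old d becomes new c
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

# Example objective: (x - 2)^2 + 1 on [0, 5] has its minimum at x = 2.
x_star = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```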
Energy Technology Data Exchange (ETDEWEB)
Koval', L.A.; Dolgov, S.V.; Liokumovich, G.B.; Ovcharenko, A.V.; Priyezzhev, I.I.
1984-01-01
The ASOM-AGS/YeS system for automated processing of aerogeophysical data is equipped with complex interpretation of multichannel measurements. Algorithms of factor analysis and automatic classification, together with an apparatus of a priori specified (selected) decision rules, are used. The areas of effect of these procedures can be initially limited to the specified geological information. The possibilities of the method are demonstrated by the results of automated processing of airborne gamma-spectrometric measurements in the region of a known porphyry-copper occurrence in Kazakhstan. This ore occurrence was clearly marked after processing by the principal-component method by a complex halo of independent factors: U (sharp increase), Th (noticeable increase), K (decrease).
LEGO-NMR spectroscopy: a method to visualize individual subunits in large heteromeric complexes.
Mund, Markus; Overbeck, Jan H; Ullmann, Janina; Sprangers, Remco
2013-10-18
Seeing the big picture: Asymmetric macromolecular complexes that are NMR-active in only a subset of their subunits can be prepared, thus decreasing NMR spectral complexity. For the heteroheptameric LSm1-7 and LSm2-8 rings, NMR spectra of the individual subunits of the complete complex are obtained, showing a conserved RNA binding site. This LEGO-NMR technique makes large asymmetric complexes accessible to detailed NMR spectroscopic studies. © 2013 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA. This is an open access article under the terms of the Creative Commons Attribution Non-Commercial NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial, and no modifications or adaptations are made.
Study of the 111In-DTPA complex by the electromigration method
International Nuclear Information System (INIS)
Ivanov, P.I.; Bozhikov, G.A.; Filossofov, D.V.; Maslov, O.D.; Milanov, M.V.; Dmitriev, S.N.; Bontchev, G.D.
2002-01-01
The electrophoretic behavior of the 111 In-DTPA radiopharmaceutical has been investigated. The stability constant, diffusion coefficient and effective charge of the complex as well as the temperature dependence of the electrophoretic mobility were determined
Directory of Open Access Journals (Sweden)
Mohammad Mehdi Mozaffari
2012-09-01
Full Text Available Complexity is one of the most important issues influencing the success of any construction project, and various studies have been devoted to detecting the factors that increase project complexity. During the past few years, there has been growing interest in developing mass construction projects in Iran. The study reported in this paper uses the Delphi technique to identify important factors acting as barriers to construction projects in Iran. The results show that, among 47 project complexity factors, 19 factors are more important than the others. The study groups the factors into seven categories, including environmental, organizational, objectives, tasks, stakeholders, technological and information-system factors, and determines the relative importance of each. In each group, sub-group activities are determined and carefully investigated. The study provides detailed suggestions in each category for reducing the complexity of construction projects.
International Nuclear Information System (INIS)
Ochsenfeld, W.; Schmieder, H.
1976-01-01
Fast breeder fuel elements which have been highly burnt-up are reprocessed by extracting uranium and plutonium into an organic solution containing tributyl phosphate. The tributyl phosphate degenerates at least partially into dibutyl phosphate and monobutyl phosphate, which form stable complexes with tetravalent plutonium in the organic solution. This tetravalent plutonium is released from its complexed state and stripped into aqueous phase by contacting the organic solution with an aqueous phase containing tetravalent uranium. 6 claims, 1 drawing figure
Moore, Jason H; Shestov, Maksim; Schmitt, Peter; Olson, Randal S
2018-01-01
A central challenge of developing and evaluating artificial intelligence and machine learning methods for regression and classification is access to data that illuminates the strengths and weaknesses of different methods. Open data plays an important role in this process by making it easy for computational researchers to access real data for this purpose. Genomics has in some respects taken a leading role in the open-data effort, starting with DNA microarrays. While real data from experimental and observational studies are necessary for developing computational methods, they are not sufficient, because it is not possible to know what the ground truth is in real data. Real data must therefore be accompanied by simulated data in which the balance between signal and noise is known and can be directly evaluated. Unfortunately, there is a lack of methods and software for simulating data with the kind of complexity found in real biological and biomedical systems. We present here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating complex biological and biomedical data. Further, we introduce new methods for developing simulation models that generate data that specifically allows discrimination between different machine learning methods.
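The core idea, simulated data whose ground truth is known, can be illustrated with a toy generator whose signal is a pure two-feature interaction; this is only a stand-in for HIBACHI, which evolves far richer models:

```python
import random

def simulate_interaction_data(n, noise=0.1, seed=42):
    """Simulate a binary classification dataset whose ground truth is a
    pure two-feature interaction (XOR): neither feature alone carries any
    signal, so only interaction-aware methods can learn the relationship.
    The noise level is known by construction."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        x1, x2 = rng.randint(0, 1), rng.randint(0, 1)
        label = x1 ^ x2                      # true model: XOR of the features
        if rng.random() < noise:             # flip labels at a known rate
            label = 1 - label
        rows.append((x1, x2, label))
    return rows

data = simulate_interaction_data(1000)
# Marginal association with either single feature is absent by design:
p_label_given_x1 = [
    sum(r[2] for r in data if r[0] == v) / sum(1 for r in data if r[0] == v)
    for v in (0, 1)
]
```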
Investigation of anticancer properties of caffeinated complexes via computational chemistry methods
Sayin, Koray; Üngördü, Ayhan
2018-03-01
Computational investigations were performed for 1,3,7-trimethylpurine-2,6-dione, 3,7-dimethylpurine-2,6-dione, and their Ru(II) and Os(III) complexes. The B3LYP/6-311++G(d,p) (LANL2DZ) level was used in the numerical calculations. Geometric parameters and IR, 1H-, 13C- and 15N-NMR spectra were examined in detail. Additionally, contour diagrams of the frontier molecular orbitals (FMOs), molecular electrostatic potential (MEP) maps, MEP contours and some quantum chemical descriptors were used in the determination of reactivity rankings and active sites. The electron density on the surface was similar in the studied complexes. The quantum chemical descriptors indicated that the anticancer activity of the complexes was higher than that of cisplatin and of their ligands. Additionally, molecular docking calculations were performed in water between the complexes and a protein (ID: 3WZE). The most strongly interacting complex was found to be the Os complex, with an interaction energy of 342.9 kJ/mol.
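The quantum chemical descriptors mentioned above are conventionally derived from frontier orbital energies; a sketch using the standard Koopmans-type formulas with hypothetical HOMO/LUMO values (not the paper's results):

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Global reactivity descriptors (eV) from frontier orbital energies,
    using the standard Koopmans-type approximations I = -E_HOMO and
    A = -E_LUMO. The orbital energies used below are hypothetical."""
    ionization = -e_homo
    affinity = -e_lumo
    electronegativity = (ionization + affinity) / 2.0   # chi
    hardness = (ionization - affinity) / 2.0            # eta
    softness = 1.0 / (2.0 * hardness)                   # S
    electrophilicity = electronegativity ** 2 / (2.0 * hardness)  # omega
    return {
        "I": ionization, "A": affinity,
        "chi": electronegativity, "eta": hardness,
        "S": softness, "omega": electrophilicity,
    }

# Hypothetical orbital energies: E_HOMO = -6.1 eV, E_LUMO = -2.3 eV.
desc = reactivity_descriptors(-6.1, -2.3)
```

A lower hardness (higher softness and electrophilicity) is commonly read as higher chemical reactivity in such rankings.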
Ge, Liang; Sotiropoulos, Fotis
2007-08-01
A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As the sample sizes available to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used and too complex to support inferences about the underlying process.
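The NOIS idea, gauging overfitting by fitting artificially generated data that contains no signal, can be caricatured in a few lines; the polynomial model family and the sizes here are illustrative, not the seven techniques or real spectra of the paper:

```python
import numpy as np

def overfitting_index(n_samples, degrees, seed=0):
    """Toy version of the reasoning behind NOIS: fit models of increasing
    complexity to an artificially generated, purely random response, so
    any apparent training fit is overfitting by construction. The index
    is the training R^2 achieved on this noise-only data."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_samples)        # synthetic predictor ("spectrum")
    y = rng.normal(size=n_samples)        # random response: no real signal
    index = {}
    for deg in degrees:
        coef = np.polyfit(x, y, deg)
        resid = y - np.polyval(coef, x)
        index[deg] = 1.0 - resid.var() / y.var()  # apparent R^2 on noise
    return index

# More complex (higher-degree) models fit the noise better, so the
# overfitting index grows with model complexity.
idx = overfitting_index(50, degrees=[1, 5, 10])
```

A NOIS-style selection would then cap model complexity where this noise-driven fit starts to dominate the fit obtained on the real data.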
A primary method for the complex calibration of a hydrophone from 1 Hz to 2 kHz
Slater, W. H.; E Crocker, S.; Baker, S. R.
2018-02-01
A primary calibration method is demonstrated to obtain the magnitude and phase of the complex sensitivity of a hydrophone at frequencies between 1 Hz and 2 kHz. The measurement is performed in a coupler reciprocity chamber (‘coupler’), a closed test chamber where time-harmonic oscillations in pressure can be achieved and the reciprocity conditions required for a primary calibration can be realized. Relevant theory is reviewed and the reciprocity parameter updated for the complex measurement. Systematic errors and corrections for magnitude are reviewed and additional ones introduced for phase. The combined expanded uncertainties of the magnitude and phase of the complex sensitivity at 1 Hz were 0.1 dB re 1 V µPa⁻¹ and ±1°, respectively. Complex sensitivity, sensitivity magnitude, and phase measurements are presented for an example primary reference hydrophone.
Energy Technology Data Exchange (ETDEWEB)
Kuhn, Tilmann E.; Herkel, Sebastian [Fraunhofer Institute for Solar Energy Systems ISE, Heidenhofstr. 2, 79110 Freiburg (Germany); Frontini, Francesco [Fraunhofer Institute for Solar Energy Systems ISE, Heidenhofstr. 2, 79110 Freiburg (Germany); Politecnico di Milano, Dipartimento BEST, Via Bonardi 9, 20133 Milano (Italy); Strachan, Paul; Kokogiannakis, Georgios [ESRU, Dept. of Mechanical Eng., University of Strathclyde, Glasgow G1 1XJ (United Kingdom)
2011-01-15
This paper describes a new general method for building simulation programs, intended for the modelling of complex facades. The term 'complex facades' designates facades with venetian blinds, prismatic layers, light-redirecting surfaces, etc. In all these cases, the facade properties have a complex angular dependence. In addition, such facades very often have non-airtight layers and/or imperfect components (e.g. non-ideal sharp edges, non-flat surfaces, etc.). Building planners therefore often had to neglect some of the innovative features and use 'work-arounds' to approximate the properties of complex facades in building simulation programs. A well-defined methodology for these cases was missing. This paper presents such a general methodology. The main advantage of the new method is that it uses only measurable quantities of the transparent or translucent part of the facade as a whole. This is the main difference from state-of-the-art modelling based on the characteristics of the individual subcomponents, which is often impossible due to non-existing heat- and/or light-transfer models within the complex facade. It is shown that the new method can significantly increase the accuracy of computed heating/cooling loads and room temperatures. (author)
Niu, Shuqiang; Huang, Dao-Ling; Dau, Phuong D; Liu, Hong-Tao; Wang, Lai-Sheng; Ichiye, Toshiko
2014-03-11
Broken-symmetry density functional theory (BS-DFT) calculations of the redox energetics of [Cu(SCH3)2]1-/0, [Cu(NCS)2]1-/0, [FeCl4]1-/0, and [Fe(SCH3)4]1-/0 are assessed against vertical detachment energies (VDEs) from valence photoelectron spectroscopy (PES), as a prelude to studies of metalloprotein analogs. The M06 and B3LYP hybrid functionals give VDEs that agree with the PES VDEs for the Fe complexes, but both underestimate them by ∼400 meV for the Cu complexes; other hybrid functionals give VDEs that increase with the amount of Hartree-Fock (HF) exchange and so cannot show good agreement for both Cu and Fe complexes. Range-separated (RS) functionals appear to give a better distribution of HF exchange, since the negative HOMO energy is approximately equal to the VDE, but they also give VDEs dependent on the amount of HF exchange, sometimes leading to ground states with incorrect electron configurations; the LRC-ωPBEh functional reduced to 10% HF exchange at short range gives somewhat better values for both, although still ∼150 meV too low for the Cu complexes and ∼50 meV too high for the Fe complexes. Overall, the results indicate that while HF exchange compensates for self-interaction error in DFT calculations of both Cu and Fe complexes, too much may lead to greater sensitivity to nondynamical correlation in the spin-polarized Fe complexes.
International Nuclear Information System (INIS)
Kim, H; Ryue, J; Thompson, D J; Müller, A D
2016-01-01
Recently, complex-shaped aluminium panels have been adopted in many structures to make them lighter and stronger. The vibro-acoustic behaviour of these complex panels has been of interest for many years, but conventional finite element and boundary element methods are not efficient for predicting their performance at higher frequencies. Where the cross-sectional properties of the panels are constant in one direction, wavenumber domain numerical analysis can be applied, and this is better suited to panels with complex cross-sectional geometries. In this paper, a coupled wavenumber domain finite element and boundary element method is applied to predict the sound radiation from, and sound transmission through, a double-layered aluminium extruded panel having a typical shape used in railway carriages. The predicted results are compared with measurements carried out on a finite-length panel and good agreement is found. (paper)
Directory of Open Access Journals (Sweden)
Roberto Viau
2017-02-01
Molecular typing using repetitive sequence-based PCR (rep-PCR) and hsp60 sequencing was applied to a collection of diverse Enterobacter cloacae complex isolates. To determine the most practical method for reference laboratories, we analyzed 71 E. cloacae complex isolates from sporadic and outbreak occurrences originating from 4 geographic areas. While rep-PCR was more discriminating, hsp60 sequencing provided a broader and more objective geographical tracking method similar to multilocus sequence typing (MLST). We suggest that MLST may have higher discriminative power than hsp60 sequencing, although rep-PCR remains the most discriminative, as well as an effective and inexpensive, method for local outbreak investigations.
Energy conserving numerical methods for the computation of complex vortical flows
Allaneau, Yves
One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. As their size becomes smaller and smaller, it becomes increasingly difficult to retain all the classical control surfaces such as rudders and ailerons, and the usual propellers. Over the years, scientists have taken inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, biomimetic design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study the motion of its wings precisely. Our approach was to use numerical methods to tackle this challenging problem. In order to evaluate precisely the lift and drag generated by the wings, one needs to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0 when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to an adequate numerical scheme, a high-fidelity solution requires many degrees of freedom to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogorov scale. Capturing these eddies requires a mesh with on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our
Energy Technology Data Exchange (ETDEWEB)
Lesch, David A; Adriaan Sachtler, J.W. J.; Low, John J; Jensen, Craig M; Ozolins, Vidvuds; Siegel, Don; Harmon, Laurel
2011-02-14
UOP LLC, a Honeywell Company, Ford Motor Company, and Striatus, Inc., collaborated with Professor Craig Jensen of the University of Hawaii and Professor Vidvuds Ozolins of the University of California, Los Angeles on a multi-year cost-shared program to discover novel complex metal hydrides for hydrogen storage. This innovative program combined sophisticated molecular modeling with high-throughput combinatorial experiments to maximize the probability of identifying commercially relevant, economical hydrogen storage materials with broad application. A set of tools was developed to pursue the medium-throughput (MT) and high-throughput (HT) combinatorial exploratory investigation of novel complex metal hydrides for hydrogen storage. The assay programs consisted of monitoring hydrogen evolution as a function of temperature. This project also incorporated theoretical methods to help select candidate material families for testing. The Virtual High Throughput Screening (VHTS) served as a virtual laboratory, calculating structures and their properties. First-principles calculations were applied to various systems to examine hydrogen storage reaction pathways and the associated thermodynamics. The experimental program began with the validation of the MT assay tool with NaAlH4/0.02 mole Ti, the state-of-the-art hydrogen storage system given by decomposition of sodium alanate to sodium hydride, aluminum metal, and hydrogen. Once certified, a combinatorial 21-point study of the NaAlH4-LiAlH4-Mg(AlH4)2 phase diagram was investigated with the MT assay. Stability proved to be a problem, as many of the materials decomposed during synthesis, altering the expected assay results. This resulted in repeating the entire experiment with a mild milling approach, which only temporarily increased capacity. NaAlH4 was the best performer in both studies and no new mixed alanates were observed, a result consistent with the VHTS. Powder XRD suggested that the reverse reaction, the regeneration of the
A method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix
International Nuclear Information System (INIS)
Godfrin, Elena
1990-01-01
This paper presents a method to compute the inverse of a complex n-block tridiagonal quasi-Hermitian matrix using adequate partitions of the complete matrix. This type of matrix is very common in quantum mechanics and, more specifically, in solid state physics (e.g., interfaces and superlattices) when the tight-binding approximation is used. The efficiency of the method is analyzed by comparing the required CPU time and work area with those of other usual techniques. (Author)
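The abstract does not spell out its partition scheme, but the general idea of inverting a block tridiagonal matrix through partitions can be illustrated with a standard forward/backward block recursion (often called the recursive Green's function method in the solid-state context the abstract mentions). The function name and NumPy implementation below are mine, not the paper's; the sketch computes only the diagonal blocks of the inverse, the quantities usually needed in tight-binding Green's function calculations:

```python
import numpy as np

def block_tridiag_inverse_diag(D, U, L):
    """Diagonal blocks of the inverse of a block tridiagonal matrix.

    D: list of n diagonal blocks (k x k), block (i, i)
    U: list of n-1 super-diagonal blocks, block (i, i+1)
    L: list of n-1 sub-diagonal blocks, block (i+1, i)
    """
    n = len(D)
    inv = np.linalg.inv

    # left-connected inverses: gL[i] inverts the leading (i+1)-block corner
    gL = [None] * n
    gL[0] = inv(D[0])
    for i in range(1, n):
        gL[i] = inv(D[i] - L[i - 1] @ gL[i - 1] @ U[i - 1])

    # right-connected inverses, swept from the last block backwards
    gR = [None] * n
    gR[n - 1] = inv(D[n - 1])
    for i in range(n - 2, -1, -1):
        gR[i] = inv(D[i] - U[i] @ gR[i + 1] @ L[i])

    # full diagonal blocks: subtract both "self-energies" before inverting
    G = [None] * n
    for i in range(n):
        s = D[i].astype(complex)
        if i > 0:
            s = s - L[i - 1] @ gL[i - 1] @ U[i - 1]
        if i < n - 1:
            s = s - U[i] @ gR[i + 1] @ L[i]
        G[i] = inv(s)
    return G
```

Each step inverts only k-by-k blocks, so the cost scales as O(n·k³) rather than the O((nk)³) of a dense inverse of the assembled matrix.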
International Nuclear Information System (INIS)
Khan, M.N.; Hussain, R.; Kalsoom, S.; Saadiq, M.
2016-01-01
A simple, accurate and indirect spectrophotometric method was developed for the quantification of cephalexin in pure form and in pharmaceutical products using a complexation reaction. The developed method is based on the oxidation of cephalexin with Fe3+ in acidic medium; 1,10-phenanthroline then reacts with the resulting Fe2+ to form a red-colored complex. The absorbance of the complex was measured at 510 nm by spectrophotometer. Different experimental parameters affecting the complexation reactions were studied and optimized. Beer's law was obeyed in the concentration range 0.4-10 µg/mL with a good correlation coefficient of 0.992. The limit of detection and limit of quantification were found to be 0.065 µg/mL and 0.218 µg/mL, respectively. The method has good reproducibility, with a relative standard deviation of 6.26 percent (n = 6). The method was successfully applied to the determination of cephalexin in bulk powder and commercial formulations. Percent recoveries were found to range from 95.47 to 103.87 percent for the pure form and from 98.62 to 103.35 percent for commercial formulations. (author)
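As an illustration of how such figures of merit relate to the calibration line, the sketch below fits a hypothetical calibration (the absorbance values are invented; only the 0.4-10 µg/mL range and the 510 nm wavelength come from the abstract) and applies the common ICH-style estimates LOD = 3.3σ/slope and LOQ = 10σ/slope, where σ is the residual standard deviation of the fit:

```python
import numpy as np

# Hypothetical calibration points: concentration (µg/mL) vs absorbance at 510 nm
conc = np.array([0.4, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
absb = np.array([0.031, 0.074, 0.149, 0.296, 0.450, 0.597, 0.741])

# ordinary least-squares line: A = slope * c + intercept
slope, intercept = np.polyfit(conc, absb, 1)
pred = slope * conc + intercept

# residual standard deviation (n - 2 degrees of freedom for a 2-parameter fit)
residual_sd = np.sqrt(np.sum((absb - pred) ** 2) / (len(conc) - 2))

r = np.corrcoef(conc, absb)[0, 1]    # correlation coefficient of the line
lod = 3.3 * residual_sd / slope      # ICH-style limit of detection
loq = 10.0 * residual_sd / slope     # ICH-style limit of quantification
```

With real absorbance data, the same three lines of arithmetic yield the LOD/LOQ values of the kind reported in the abstract.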
Dekker, B.G.; Sluyters-Rehbach, M.; Sluyters, J.H.
1969-01-01
The applicability of the complex plane method for the evaluation of the impedance parameters in the case of two simultaneously proceeding electrode reactions is discussed. It is shown that the possibility of the evaluation depends strongly on the values of the irreversibility quotients of both
Methods to assess secondary volatile lipid oxidation products in complex food matrices
DEFF Research Database (Denmark)
Jacobsen, Charlotte; Yesiltas, Betül
A range of different methods are available to determine secondary volatile lipid oxidation products. These methods include e.g. spectrophotometric determination of anisidine values and TBARS as well as GC based methods for determination of specific volatile oxidation products such as pentanal...... headspace methods on the same food matrices will be presented....
Wang, Lin; Lv, Xiangguo; Jin, Chongrui; Guo, Hailin; Shu, Huiquan; Fu, Qiang; Sa, Yinglong
2018-02-01
To develop a standardized PU-score (posterior urethral stenosis score), with the goal of using this scoring system as a preliminary predictor of surgical complexity and prognosis of posterior urethral stenosis. We retrospectively reviewed records of all patients who underwent posterior urethral surgery at our institution from 2013 to 2015. The PU-score is based on 5 components, namely etiology (1 or 2 points), location (1-3 points), length (1-3 points), urethral fistula (1 or 2 points), and posterior urethral false passage (1 point). We calculated the score of all patients and analyzed its association with surgical complexity, stenosis recurrence, intraoperative blood loss, erectile dysfunction, and urinary incontinence. There were 144 patients who underwent low-complexity urethral surgery (direct vision internal urethrotomy, anastomosis with or without crural separation) with a mean score of 5.1 points, whereas 143 underwent high-complexity urethroplasty (anastomosis with inferior pubectomy or urethrorectal fistula repair, perineal or scrotal skin flap urethroplasty, bladder flap urethroplasty) with a mean score of 6.9 points. An increase in the PU-score was predictive of higher surgical complexity (P = .000), higher recurrence (P = .002), more intraoperative blood loss (P = .000), and a decrease in preoperative (P = .037) or postoperative erectile function (P = .047). However, no association was observed between the PU-score and urinary incontinence (P = .213). The PU-score is a novel and meaningful scoring system that describes the essential factors in determining the complexity and prognosis of posterior urethral stenosis. Copyright © 2017. Published by Elsevier Inc.
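The scoring arithmetic can be sketched directly from the component ranges given in the abstract. The abstract does not give the clinical criteria for assigning points within each component, so the function below simply validates and sums points that have already been assigned; treating an absent false passage as 0 points is my assumption, not the paper's:

```python
def pu_score(etiology, location, length, fistula, false_passage):
    """Sum the five PU-score components.

    Point ranges per the abstract: etiology 1-2, location 1-3, length 1-3,
    urethral fistula 1-2, false passage 1 if present (0 when absent is assumed).
    """
    ranges = {"etiology": (1, 2), "location": (1, 3), "length": (1, 3),
              "fistula": (1, 2), "false_passage": (0, 1)}
    points = {"etiology": etiology, "location": location, "length": length,
              "fistula": fistula, "false_passage": false_passage}
    for name, value in points.items():
        low, high = ranges[name]
        if not low <= value <= high:
            raise ValueError(f"{name} must be between {low} and {high} points")
    return sum(points.values())
```

Under these assumptions the score ranges from 4 to 11, bracketing the mean scores of 5.1 (low-complexity) and 6.9 (high-complexity) reported above.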
The maintenance management framework models and methods for complex systems maintenance
Crespo Márquez, Adolfo
2010-01-01
“The Maintenance Management Framework” describes and reviews the concept, process and framework of modern maintenance management of complex systems, concentrating specifically on modern modelling tools (deterministic and empirical) for maintenance planning and scheduling. It will be of use to engineers and professionals involved in maintenance management, maintenance engineering, operations management and quality, as well as to graduate students and researchers in this field.
A method for the realization of complex concrete gridshell structures in pre-cast concrete
DEFF Research Database (Denmark)
Larsen, Niels Martin; Egholm Pedersen, Ole; Pigram, Dave
2012-01-01
concrete casting techniques, complex funicular structures can be constructed using prefabricated elements in a practical, affordable and materially efficient manner. A recent case study is examined, in which the methodology has been used to construct a pavilion. Custom written dynamic relaxation software...
The LOCI-method : Collaboration building in complex endeavors based on analysis of interdependencies
Kamphuis, W.; Essens, P.J.M.D.
2011-01-01
In complex endeavors, characterized by multiple interdependent participants with different functions and objectives, it is difficult for an entity to determine how to cooperate with other entities. Simply striving to cooperate at the highest level possible comes at high costs. But how should an
Human practice in the life cycle of complex systems. Challenges and methods
Energy Technology Data Exchange (ETDEWEB)
Nuutinen, M. (ed.) [VTT Building and Transport, Espoo (Finland); Luoma, J. (ed.) [VTT Industrial Systems, Espoo (Finland)
2005-12-15
This book describes the current and near future challenges in work and traffic environments in light of the rapid technology development. It focuses on the following domains: road and vessel traffic, nuclear power production, automatic mining, steel factory and the pulp and paper industry. Each example concerns complex technical systems where human practice and behaviour has an important role for the safety, efficiency and productivity of the system. The articles illustrate the enormous field of human-related research when considering the design, validation, implementation, operation and maintenance of complex sociotechnical systems. Nevertheless, these 14 chapters are only examples of the range of questions related to the issue. The authors of the book are VTT experts in work or traffic psychology and research, system usability, risk and safety analysis, virtual environments and they have experience in studying different domains. This book is an attempt to open up the complex world of human-technology interaction for readers facing practical problems with complex systems. It is aimed to help a technical or organisational designer, a policy-maker, an expert or a user, the one who works or lives within the technology. (orig.)
Kim, Jeong-eun
2012-01-01
This dissertation investigates optimal conditions for form-focused instruction (FFI) by considering effects of internal (i.e., timing and types of FFI) and external (i.e., complexity and familiarity) variables of FFI when it is offered within a primarily meaning-focused context of adult second language (L2) learning. Ninety-two Korean-speaking…
The Complex Neutrosophic Soft Expert Relation and Its Multiple Attribute Decision-Making Method
Directory of Open Access Journals (Sweden)
Ashraf Al-Quran
2018-01-01
This paper introduces a novel soft computing technique, called the complex neutrosophic soft expert relation (CNSER), to evaluate the degree of interaction between two hybrid models called complex neutrosophic soft expert sets (CNSESs). CNSESs are used to represent two-dimensional data that are imprecise, uncertain, incomplete and indeterminate. Moreover, the model has a mechanism to incorporate the parameter set and the opinions of all experts in one model, thus making it highly suitable for use in decision-making problems where the time factor plays a key role in determining the final decision. The complex neutrosophic soft expert set and complex neutrosophic soft expert relation are both defined. Utilizing the properties of the introduced CNSER, an empirical study is conducted on the relationship between the variability of the currency exchange rate, Malaysian exports, and the time frame (phase) of the interaction between these two variables. This study is further supported by an algorithm to determine the type and degree of this relationship. A comparison between different existing relations and the CNSER is provided to show the advantages of the proposed CNSER. Then, the notions of the inverse, complement and composition of CNSERs, along with some related theorems and properties, are introduced. Finally, we define the symmetry, transitivity and reflexivity of CNSERs, as well as the equivalence relation and equivalence classes on CNSESs. Some interesting properties are also obtained.
Kingston, Greer B.; Rajabali Nejad, Mohammadreza; Gouldby, Ben P.; van Gelder, Pieter H.A.J.M.
2011-01-01
With the continual rise of sea levels and deterioration of flood defence structures over time, it is no longer appropriate to define a design level of flood protection, but rather, it is necessary to estimate the reliability of flood defences under varying and uncertain conditions. For complex
Human practice in the life cycle of complex systems. Challenges and methods
International Nuclear Information System (INIS)
Nuutinen, M.; Luoma, J.
2005-12-01
This book describes the current and near future challenges in work and traffic environments in light of the rapid technology development. It focuses on the following domains: road and vessel traffic, nuclear power production, automatic mining, steel factory and the pulp and paper industry. Each example concerns complex technical systems where human practice and behaviour has an important role for the safety, efficiency and productivity of the system. The articles illustrate the enormous field of human-related research when considering the design, validation, implementation, operation and maintenance of complex sociotechnical systems. Nevertheless, these 14 chapters are only examples of the range of questions related to the issue. The authors of the book are VTT experts in work or traffic psychology and research, system usability, risk and safety analysis, virtual environments and they have experience in studying different domains. This book is an attempt to open up the complex world of human-technology interaction for readers facing practical problems with complex systems. It is aimed to help a technical or organisational designer, a policy-maker, an expert or 'a user', the one who works or lives within the technology. (orig.)
International Nuclear Information System (INIS)
Kim, J.I.; Rhee, D.S.; Wimmer, H.; Buckau, G.; Klenze, R.
1993-01-01
The complexation of trivalent metal ions with humic acid has been studied at pH 4 and 5 in 0.1 M NaClO4 by three different experimental methods: UV spectroscopy, time-resolved laser fluorescence spectroscopy (TRLFS) and ultrafiltration. The direct speciation of the metal ion and its humate complex in the reaction process has been made by UV spectroscopy for Am(III) in the micromolar concentration range and by TRLFS for Cm(III) in the nanomolar concentration range. Ultrafiltration is used with the lowest pore size of filter (ca. 1 nm) to separate the uncomplexed metal ion from its complexed species. The concentrations of both metal ion and humic acid are varied in such a manner that the effective functional groups of the humic acid become loaded with metal ions from 1% to nearly 100%. The loading capacity of the humic acid for the trivalent metal ion, determined separately at each pH, is introduced into the evaluation of the complexation constants. Variation of the metal ion concentration from 6 × 10⁻⁸ mol/l to 4 × 10⁻⁵ mol/l does not show any effect on the complexation reaction. The three different methods give constants comparable with one another. The average value of the constants thus determined is log β = 6.24 ± 0.28 for the trivalent actinide ions. (orig.)
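A small sketch of how a conditional complexation constant can be evaluated from such speciation data. The function and the concentrations in the comments are illustrative, not the paper's; the loading-capacity correction follows the abstract's description of counting only the effective functional groups of the humic acid:

```python
import math

def log_beta(m_total, m_free, ligand_total, loading_capacity):
    """log10 of the conditional constant for M + HA(free sites) <-> M-HA.

    m_total, m_free : total and uncomplexed metal-ion concentration (mol/L),
                      e.g. from speciation by UV, TRLFS or ultrafiltration
    ligand_total    : concentration of humic-acid functional groups (mol/L)
    loading_capacity: fraction of those groups effective for the metal at
                      the given pH (determined separately, per the abstract)
    """
    m_bound = m_total - m_free
    ligand_free = loading_capacity * ligand_total - m_bound
    beta = m_bound / (m_free * ligand_free)
    return math.log10(beta)

# e.g. log_beta(m_total=1e-6, m_free=3e-7, ligand_total=1e-5, loading_capacity=0.5)
```

Repeating this evaluation across metal loadings from 1% to nearly 100% is what lets the study check that the constant is independent of metal-ion concentration.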
Lehtinen, Julia; Hyvönen, Zanna; Subrizi, Astrid; Bunjes, Heike; Urtti, Arto
2008-10-21
Cationic polymers are efficient gene delivery vectors under in vitro conditions, but these carriers can fail in vivo due to interactions with extracellular polyanions, i.e. glycosaminoglycans (GAGs). The aim of this study was to develop a stable gene delivery vector that is activated at acidic endosomal pH. Cationic DNA/PEI complexes were coated with 1,2-dioleylphosphatidylethanolamine (DOPE) and cholesteryl hemisuccinate (CHEMS) (3:2 mol/mol) using two coating methods: detergent removal and mixing with liposomes prepared by ethanol injection. Only detergent removal produced lipid-coated DNA complexes that were stable against GAGs but membrane-active at low pH towards endosome-mimicking liposomes. Given the low cellular uptake of the coated complexes, their transfection efficacy was relatively high. PEGylation of the coated complexes increased their cellular uptake but reduced the pH-sensitivity. Detergent removal was thus the superior method for the production of stable, but acid-activatable, lipid-coated DNA complexes.
Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.
2018-03-01
Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R² = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R² = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
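The internal-standard strategy the abstract describes reduces, numerically, to a linear fit of the analyte/internal-standard peak-area ratio against spiked concentration, followed by back-calculation for unknowns. The sketch below uses invented area ratios; only the 300-2500 ng/mL calibration range comes from the abstract:

```python
import numpy as np

# Hypothetical calibration: analyte spiked into the matrix at known levels,
# peak areas normalized by the (constant) internal-standard peak area.
conc = np.array([300.0, 600.0, 1200.0, 1800.0, 2500.0])  # ng/mL
ratio = np.array([0.45, 0.92, 1.80, 2.75, 3.70])         # analyte/IS area ratio

slope, intercept = np.polyfit(conc, ratio, 1)
r2 = np.corrcoef(conc, ratio)[0, 1] ** 2  # linearity of the calibration

def quantify(sample_ratio):
    """Back-calculate a concentration from a measured analyte/IS area ratio."""
    return (sample_ratio - intercept) / slope
```

Because the internal standard experiences the same matrix and ionization conditions as the analyte, the ratio (rather than the raw area) is what stays linear in a complex matrix such as diesel.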
DEFF Research Database (Denmark)
Jacobsen, Charlotte; Horn, Anna Frisenfeldt; Lu, Henna Fung Sieng
Volatile secondary lipid oxidation products can be identified and quantified by GC-FID or GC-MS. An extraction step is, however, needed before GC analysis. A range of different extraction methods are available such as static headspace, dynamic headspace and SPME. Each of these methods has its...... advantages and drawbacks. Among the advantages of the SPME method are its high sensitivity compared to static headspace and that it is less laborious than the dynamic headspace method. For these reasons, the use of SPME has increased in both academia and industry during the last decade. The extraction...... for analysis of lipid oxidation during storage of complex food matrices. Examples on how uncontrollable factors have affected results obtained with the SPME method in the authors’ lab will be given and the appropriateness of the SPME method for the analysis of volatile oxidation products in selected food...
Riad, Safaa M.; Salem, Hesham; Elbalkiny, Heba T.; Khattab, Fatma I.
2015-04-01
Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing trimethoprim (TMP), sulphamethoxazole (SMZ) and oxytetracycline (OTC) in wastewater samples collected from different sites, either production wastewater or livestock wastewater, after solid phase extraction using OASIS HLB cartridges. In the univariate methods, OTC was determined at its λmax of 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT); the technique starts with the ratio subtraction method followed by the ratio difference method for the determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results are statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05.
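Of the chemometric techniques listed, principal component regression is the most straightforward to sketch: regress the concentrations on the leading principal components of the mean-centered spectra. The implementation below is a generic PCR sketch in NumPy, not the authors' code; matrix `X` holds one spectrum per row and `y` the corresponding concentrations:

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: regress y on the leading PCs of X."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    # principal directions from the SVD of the centered spectra matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T           # loadings (wavelengths x components)
    T = Xc @ V                        # scores (samples x components)
    b = np.linalg.lstsq(T, yc, rcond=None)[0]
    coef = V @ b                      # map back to wavelength space
    return coef, x_mean, y_mean

def pcr_predict(X, coef, x_mean, y_mean):
    """Predict concentrations for new (rows of) spectra."""
    return (X - x_mean) @ coef + y_mean
```

Truncating to a few components is what stabilizes the regression when neighboring wavelengths are strongly correlated, the usual situation with overlapping absorption spectra.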
CFD and Experimental Studies on Wind Turbines in Complex Terrain by Improved Actuator Disk Method
Liu, Xin; Yan, Shu; Mu, Yanfei; Chen, Xinming; Shi, Shaoping
2017-05-01
In this paper, an onshore wind farm in a mountainous area of southwest China was investigated through numerical and experimental methods. An improved actuator disk (AD) method, taking rotor data (i.e. blade geometry, attack angle, blade pitch angle) into account, was used to investigate the flow characteristics of the wind farm, especially the wakes developing behind the wind turbines. Compared to the classic AD method, the improved AD shows better agreement with the in situ measurements. The turbine power was predicted automatically in the CFD by the blade element method, and agreed well with the measurement results. The study showed that steady CFD simulation with the improved actuator disk method is able to evaluate the wind resource well and strikes a good balance between computing efficiency and accuracy, compared with much more expensive approaches such as transient actuator-line/actuator-surface models, or less accurate ones such as linear velocity-reduction wake models.
An argumentation-based method for managing complex issues in design of infrastructural systems
International Nuclear Information System (INIS)
Marashi, Emad; Davis, John P.
2006-01-01
The many interacting and conflicting requirements of a wide range of stakeholders are the main sources of complexity in infrastructure and utility systems. We propose a systemic methodology based on negotiation and argumentation to help in the resolution of complex issues and to facilitate options appraisal during the design of such systems. A process-based approach is used to assemble and propagate the evidence on performance and reliability of the system and its components, providing a success measure for different scenarios or design alternatives. The reliability of information sources and experts' opinions is dealt with through an extension of the mathematical theory of evidence. This framework not only helps in capturing the reasoning behind design decisions, but also enables decision-makers to assess and compare the evidential support for each design option
Energy Technology Data Exchange (ETDEWEB)
Badiei, Alireza, E-mail: abadiei@khayam.ut.ac.ir [School of Chemistry, College of Science, University of Tehran, P.O. Box 14155-6455, Tehran (Iran, Islamic Republic of); Goldooz, Hassan [Department of Chemistry, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Ziarani, Ghodsi Mohammadi [Department of Chemistry, Faculty of Science, Alzahra University, Tehran (Iran, Islamic Republic of)
2011-03-15
8-Hydroxyquinoline (8-HQ) was attached to mesoporous silica by sulfonamide bond formation between 8-hydroxyquinoline-5-sulfonyl chloride (8-HQ-SO2Cl) and aminopropyl-functionalized SBA-15 (designated SBA-SPS-Q), and aluminum complexes of 8-HQ were then covalently bonded to SBA-SPS-Q using the coordinating ability of the grafted 8-HQ. The prepared materials were characterized by powder X-ray diffraction (XRD), nitrogen adsorption-desorption, Fourier transform infrared (FT-IR) spectroscopy, thermal analysis (TGA-DTA), scanning electron microscopy (SEM), transmission electron microscopy (TEM), elemental analysis and fluorescence spectra. The environmental effects on the emission spectra of the grafted 8-HQ and its complexes were studied and discussed in detail.
Computation of resonances by two methods involving the use of complex coordinates
International Nuclear Information System (INIS)
Bylicki, M.; Nicolaides, C.A.
1993-01-01
We have studied two different systems producing resonances, a highly excited multielectron Coulombic negative ion (the He⁻ 2s2p² ⁴P state) and a hydrogen atom in a magnetic field, via the complex-coordinate rotation (CCR) and the state-specific complex-eigenvalue Schroedinger equation (CESE) approaches. For the He⁻ 2s2p² ⁴P resonance, a series of large CCR calculations, up to 353 basis functions with explicit rᵢⱼ dependence, were carried out to serve as benchmarks. For the magnetic-field problem, the CCR results were taken from the literature. Comparison shows that the state-specific CESE theory allows the physics of the problem to be incorporated systematically while keeping the overall size of the computation tractable regardless of the number of electrons.
Shiman, A G; Klocheva, E G; Kaiumov, S F; Shoferova, S D; Zhukova, M V
2012-01-01
This article reports the results of applying basic pharmacotherapy (enalapril, cytoflavin) and its combination with physical factors (transcranial electrostimulation, and the combined application of transcranial electrostimulation and low-frequency magnetic therapy) in the complex treatment of patients with stage I-II dyscirculatory encephalopathy. The study demonstrated that the combined treatment with cytoflavin, enalapril, transcranial electrostimulation and low-frequency magnetic therapy produced the most pronounced therapeutic effect (82.5%), as confirmed by the positive dynamics of clinical and functional parameters.
A comparison of iterative methods to solve complex valued linear algebraic systems
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Neytcheva, M.; Ahmad, B.
2013-01-01
Roč. 66, č. 4 (2013), s. 811-841 ISSN 1017-1398 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : linear systems * complex symmetric * real valued form * preconditioning Subject RIV: BA - General Mathematics Impact factor: 1.005, year: 2013 http://www.it.uu.se/research/publications/reports/2013-005/2013-005-nc.pdf
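The "real valued form" named in the keywords refers to recasting a complex symmetric system (T + iS)z = b as a twice-larger real block system. A minimal sketch (the toy matrices, right-hand side and the naive Gaussian-elimination solver are invented for illustration; the paper compares far more sophisticated preconditioned iterative solvers):

```python
def solve_real(A, b):
    """Gaussian elimination with partial pivoting, real arithmetic only."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Complex symmetric system (T + iS) z = b recast as the real block system
#   [ T  -S ] [Re z]   [Re b]
#   [ S   T ] [Im z] = [Im b]
# (toy matrices invented for the illustration).
n = 2
T = [[4.0, 1.0], [1.0, 3.0]]
S = [[2.0, 0.0], [0.0, 1.0]]
b = [1.0 + 2.0j, 3.0 - 1.0j]

K = [[T[i][j] for j in range(n)] + [-S[i][j] for j in range(n)] for i in range(n)]
K += [[S[i][j] for j in range(n)] + [T[i][j] for j in range(n)] for i in range(n)]
rhs = [v.real for v in b] + [v.imag for v in b]
xy = solve_real(K, rhs)
z = [xy[i] + 1j * xy[n + i] for i in range(n)]  # recover the complex solution
```

The real formulation doubles the system size but avoids complex arithmetic, which is what makes real-valued preconditioners applicable.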
S-curve networks and an approximate method for estimating degree distributions of complex networks
Guo, Jin-Li
2010-01-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S-curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. The results have some reference value for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based o...
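The forecasting idea can be sketched with a logistic fit. The data below are invented (not the paper's IPv4 statistics), and the saturation level K is assumed known so that the fit linearizes:

```python
import math

def logistic(t, K, r, t0):
    """Logistic (S-curve) growth: saturates at the carrying capacity K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Illustrative yearly counts only (invented, NOT the paper's IPv4 data),
# generated from a known curve so the fit can be checked.
K = 330.0  # assumed saturation level
data = [(t, logistic(t, K, 0.8, 5.0)) for t in range(10)]

# With K fixed, the model linearizes:  ln(K/y - 1) = -r*t + r*t0,
# so an ordinary least-squares line through (t, ln(K/y - 1)) gives r and t0.
xs = [float(t) for t, _ in data]
zs = [math.log(K / y - 1.0) for _, y in data]
n = len(xs)
xbar, zbar = sum(xs) / n, sum(zs) / n
slope = sum((x - xbar) * (z - zbar) for x, z in zip(xs, zs)) \
        / sum((x - xbar) ** 2 for x in xs)
r_hat = -slope
t0_hat = (zbar - slope * xbar) / r_hat       # intercept of the line is r*t0
forecast = logistic(12.0, K, r_hat, t0_hat)  # extrapolate beyond the data
```

When K is itself unknown, it becomes a third fit parameter and a nonlinear least-squares routine is needed instead of this linearization.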
Inverse method for temperature and stress monitoring in complex-shaped bodies
International Nuclear Information System (INIS)
Duda, Piotr; Taler, Jan, E-mail: aler@ss5.mech.pk.edu.pl; Roos, Eberhard
2004-01-01
The purpose of this work is to formulate a space marching method which can be used to solve inverse multidimensional heat conduction problems. The method is designed to reconstruct the transient temperature distribution in the whole construction element based on measured temperatures taken at selected points on the outer surface of the construction element. Next, the Finite Element Method is used to calculate thermal stresses and stresses caused by other loads such as, for instance, internal pressure. The developed method for solving temperature and total stress distribution is tested using measured temperatures generated from a direct solution. Transient temperature and total stress distributions obtained from the method presented below are compared with the values obtained from the direct solution. Finally, the presented method is experimentally verified during the cooling of a thick-walled cylindrical element. The model of a pressure vessel was preheated to 300 deg. C and then cooled by cold water injection. The comparison of results obtained from the inverse method with experimental data shows the high accuracy of the developed method. The presented method allows one to optimize the power block's start-up and shut-down operations, contributes to the reduction of heat loss during these operations and to the extension of the power block's life. The fatigue and creep usage factor can be computed in an on-line mode. The method presented herein can be applied to monitoring systems that work in conventional as well as in nuclear power plants
McMahon, Michelle A; Christopher, Kimberly A
2011-08-19
As the complexity of health care delivery continues to increase, educators are challenged to determine educational best practices to prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster student ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the utilization of case studies and problem-based learning activities, with the intention to transition nursing students into autonomous learners and well-prepared practitioners at the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate student development of a transferable problem solving skill set and a flexible knowledge base to better prepare students for practice in future novel clinical experiences, which is a mutual goal for both educators and students.
Designed-walk replica-exchange method for simulations of complex systems
Urano, Ryo; Okamoto, Yuko
2015-01-01
We propose a new implementation of the replica-exchange method (REM) in which replicas follow a pre-planned route in temperature space instead of a random walk. Our method satisfies the detailed balance condition in the proposed route. The method forces tunneling events between the highest and lowest temperatures to happen with an almost constant period. The number of tunneling counts is proportional to that of the random-walk REM multiplied by the square root of moving distance in temperatur...
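The building block of any such scheme is the pairwise swap test, which must satisfy detailed balance. A sketch of the standard Metropolis exchange criterion (a generic REM ingredient, not the authors' designed-walk scheduling itself; the example values are invented):

```python
import math
import random

def exchange_accept(beta_i, beta_j, energy_i, energy_j, rng=random):
    """Metropolis acceptance test for swapping the configurations of two
    replicas at inverse temperatures beta_i and beta_j.  Detailed balance
    for the swap gives  P = min(1, exp(delta))  with
    delta = (beta_i - beta_j) * (energy_i - energy_j)."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

# A swap that moves the higher energy to the hotter replica is always accepted:
accepted = exchange_accept(beta_i=1.0, beta_j=0.5, energy_i=2.0, energy_j=1.0)
```

A designed-walk variant changes which pairs are proposed and when, but each accepted swap must still pass a criterion of this form to preserve the canonical distributions.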
Bhatia, A. K.; Temkin, A.; Fisher, Richard R. (Technical Monitor)
2001-01-01
We report on the first part of a study of electron-hydrogen scattering, using a method which allows for the ab initio calculation of total and elastic cross sections at higher energies. In its general form the method uses complex 'radial' correlation functions in a (Kohn) T-matrix formalism. The titled method, abbreviated the Complex Correlation Kohn T (CCKT) method, is reviewed in the context of electron-hydrogen scattering, including the derivation of the equation for the (complex) scattering function and the extraction of the scattering information from the latter. The calculation reported here is restricted to S-waves in the elastic region, where the correlation functions can be taken, without loss of generality, to be real. Phase shifts are calculated using Hylleraas-type correlation functions with up to 95 terms. Results are rigorous lower bounds; they are in general agreement with those of Schwartz, but they are more accurate and outside his error bounds at a couple of energies.
Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf
2017-08-01
A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions but only a few on how to transfer planning data generated in external software. Patients' computerized tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported to Amira software, reconstructed to 3D images and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or surgical procedures such as trajectories, implants or surgical instruments were either digitally constructed or computerized tomography scanned and virtually positioned within the 3D model as required. As a crucial step of this method, these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system. The presented method not only allows virtual planning of complex surgical procedures, but also the export of objects and surgical plans to any navigation or
Huber, Evelyn; Kleinknecht-Dolf, Michael; Müller, Marianne; Kugler, Christiane; Spirig, Rebecca
2017-06-01
To define the concept of patient-related complexity of nursing care in acute care hospitals and to operationalize it in a questionnaire. The concept of patient-related complexity of nursing care in acute care hospitals has not been conclusively defined in the literature. Its operationalization in a corresponding questionnaire is necessary, given the increased significance of the topic due to shortened lengths of stay and increased patient morbidity. Hybrid model of concept development and embedded mixed-methods design. The theoretical phase of the hybrid model involved a literature review and the development of a working definition. In the fieldwork phase of 2015 and 2016, an embedded mixed-methods design was applied, with complexity assessments of all patients at five Swiss hospitals using our newly operationalized questionnaire 'Complexity of Nursing Care' over 1 month. These data will be analysed with structural equation modelling. Twelve qualitative case studies will be embedded; they will be analysed using a structured process of constructing case studies and content analysis. In the final analytic phase, the quantitative and qualitative data will be merged and added to the results of the theoretical phase for a common interpretation. The Cantonal Ethics Committee Zurich judged the research programme to be unproblematic in December 2014 and May 2015. Following the phases of the hybrid model and using an embedded mixed-methods design can lead to an in-depth understanding of patient-related complexity of nursing care in acute care hospitals, a final version of the questionnaire and an acknowledged definition of the concept. © 2016 John Wiley & Sons Ltd.
A theoretical study of the complexes of N₂O with H⁺, Li⁺, and HF using various correlation methods
International Nuclear Information System (INIS)
Del Bene, J.E.; Stahlberg, E.A.; Shavitt, I.
1990-01-01
Binding energies for complexes of N₂O with the acids H⁺, Li⁺, and HF have been computed using the following correlation methods: many-body (Moller-Plesset) perturbation theory at second (MP2), third (MP3), and fourth (MP4) order; the quadratic CI method with single and double excitations (QCISD) and with noniterative inclusion of triple excitations (QCISD(T)); the linearized coupled-cluster method (LCCM); the averaged coupled-pair functional (ACPF); configuration interaction with all single and double excitations (CISD); and CISD with the Davidson and Pople corrections. The convergence of the Moller-Plesset expansion is erratic, predicting that the terminal nitrogen is the preferred binding site for the complexes at the MP2 and MP4 levels, in disagreement with Hartree-Fock and MP3 and all other models (including the infinite-order QCI). The effect of triple excitations at MP4 and QCI is to destabilize complexes bound at O and stabilize those bound at N, but this effect is greatly overestimated at MP4 relative to QCI. Except for the LCCM result for N-protonated N₂O, ACPF and LCCM binding energies are similar to the QCISD values. The size-consistency error in the ACPF binding energies of the complexes of N₂O with HF is about 0.5 kcal/mol. The CISD size-consistency error for these complexes is 23 kcal/mol, leading to negative binding energies when computed relative to isolated N₂O and HF
A new method of stabilizing zygomatic complex and arch fractures. a ...
African Journals Online (AJOL)
Background: Antral packing cannot support fractures of the zygomatic arch properly because of the position, therefore the aim of this report was to document a new method by which both the zygomatic bone and arch can be stabilized. Method and Materials: Iodine soaked gauze was placed in the subzygomatic space ...
Ishino, Ryota; Iehata, Shunpei; Nakano, Miyo; Tanaka, Reiji; Yoshimatsu, Takao; Maeda, Hiroto
2012-03-01
The bacterial communities associated with rotifers (Brachionus plicatilis sp. complex) and their culture water were determined using culture-dependent and -independent methods (16S rRNA gene clone library). The bacterial communities determined by the culture-independent method were more diverse than those determined by the culture-dependent method. Although the culture-dependent method indicated the bacterial community of rotifers was relatively similar to that of the culture water, 16S rRNA gene clone library analyses revealed a great difference between the two microbiotas. Our results suggest that most bacteria associated with rotifers are not easily cultured using conventional methods, and that the microbiota of rotifers do not correspond with that of the culture water completely.
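Diversity comparisons of this kind are commonly summarized with an index such as Shannon's H'. A minimal sketch with invented OTU counts (not the study's data): the culture-independent library spreads clones across more taxa and therefore scores a higher H'.

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical OTU clone counts for the two library types.
culture_dependent = [40, 5, 3, 2]            # dominated by one easily cultured taxon
culture_independent = [12, 10, 8, 7, 6, 4, 2, 1]  # more even spread of clones
```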
International Nuclear Information System (INIS)
Dung, Le Thi Kim; Imai, Tomoki; Tomioka, Osamu; Nakashima, Mikio; Takahashi, Kuniaki; Meguro, Yoshihiro
2006-01-01
The supercritical fluid extraction (SFE) method using CO₂ as a medium with an extractant of HNO₃-tri-n-butyl phosphate (TBP) complex was applied to extract uranium from several uranyl phosphate compounds and simulated uranium ores. An extraction method consisting of a static extraction process and a dynamic one was established, and the effects of the experimental conditions, such as pressure, temperature, and extraction time, on the extraction of uranium were ascertained. It was found that uranium could be efficiently extracted from both the uranyl phosphates and simulated ores by the SFE method using CO₂. It was thus demonstrated that the SFE method using CO₂ is useful as a pretreatment method for the analysis of uranium in ores. (author)
Entropic algorithms and the lid method as exploration tools for complex landscapes
DEFF Research Database (Denmark)
Barettin, Daniele; Sibani, Paolo
2011-01-01
to a single valley, are key to understanding the dynamical properties of such systems. In this paper we combine the lid algorithm, a tool for landscape exploration previously applied to a range of models, with the Wang-Swendsen algorithm. To test this improved exploration tool, we consider a paradigmatic complex system, the Edwards-Anderson model in two and three spatial dimensions. We find a striking difference between the energy dependence of the local density of states in the two cases: nearly flat in the first case, and nearly exponential in the second. The lid dependence of the data is analyzed to estimate...
A Novel Method for Assessing Task Complexity in Outpatient Clinical-Performance Measures.
Hysong, Sylvia J; Amspoker, Amber B; Petersen, Laura A
2016-04-01
Clinical-performance measurement has helped improve the quality of health-care; yet success in attaining high levels of quality across multiple domains simultaneously still varies considerably. Although many sources of variability in care quality have been studied, the difficulty required to complete the clinical work itself has received little attention. We present a task-based methodology for evaluating the difficulty of clinical-performance measures (CPMs) by assessing the complexity of their component requisite tasks. Using Functional Job Analysis (FJA), subject-matter experts (SMEs) generated task lists for 17 CPMs; task lists were rated on ten dimensions of complexity, and then aggregated into difficulty composites. Eleven outpatient work SMEs; 133 VA Medical Centers nationwide. Clinical Performance: 17 outpatient CPMs (2000-2008) at 133 VA Medical Centers nationwide. Measure Difficulty: for each CPM, the number of component requisite tasks and the average rating across ten FJA complexity scales for the set of tasks comprising the measure. Measures varied considerably in the number of component tasks (M = 10.56, SD = 6.25, min = 5, max = 25). Measures of chronic care following acute myocardial infarction exhibited significantly higher measure difficulty ratings compared to diabetes or screening measures, but not to immunization measures ([Formula: see text] = 0.45, -0.04, -0.05, and -0.06 respectively; F (3, 186) = 3.57, p = 0.015). Measure difficulty ratings were not significantly correlated with the number of component tasks (r = -0.30, p = 0.23). Evaluating the difficulty of achieving recommended CPM performance levels requires more than simply counting the tasks involved; using FJA to assess the complexity of CPMs' component tasks presents an alternate means of assessing the difficulty of primary-care CPMs and accounting for performance variation among measures and performers. This in turn could be used in designing
Energy Technology Data Exchange (ETDEWEB)
Lundquist, K A [Univ. of California, Berkeley, CA (United States)
2010-05-12
Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. Specified diurnal heating in a valley, producing anabatic winds, is used to validate the
International Nuclear Information System (INIS)
Trebotich, David
2007-01-01
We have developed a simulation capability to model multiscale flow and transport in complex biological systems based on algorithms and software infrastructure developed under the SciDAC APDEC CET. The foundation of this work is a new hybrid fluid-particle method for modeling polymer fluids in irregular microscale geometries that enables long-time simulation of validation experiments. Both continuum viscoelastic and discrete particle representations have been used to model the constitutive behavior of polymer fluids. Complex flow environment geometries are represented on Cartesian grids using an implicit function. Direct simulation of flow in the irregular geometry is then possible using embedded boundary/volume-of-fluid methods without loss of geometric detail. This capability has been used to simulate biological flows in a variety of application geometries including biomedical microdevices, anatomical structures and porous media
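The implicit-function representation of geometry on a Cartesian grid can be sketched by classifying cells as fluid, solid, or cut according to the sign of the implicit function at their corners. The circular obstacle below is invented for illustration and is not one of the application geometries from the work:

```python
def phi(x, y):
    """Implicit function: negative inside a circular obstacle of radius 0.2
    centered at (0.5, 0.5); positive in the fluid (illustrative geometry)."""
    return (x - 0.5) ** 2 + (y - 0.5) ** 2 - 0.2 ** 2

N = 20          # cells per side on the unit square
h = 1.0 / N
cells = {"fluid": 0, "solid": 0, "cut": 0}
for i in range(N):
    for j in range(N):
        corners = [phi(i * h, j * h), phi((i + 1) * h, j * h),
                   phi(i * h, (j + 1) * h), phi((i + 1) * h, (j + 1) * h)]
        if all(c > 0 for c in corners):
            cells["fluid"] += 1   # regular cell, standard stencil applies
        elif all(c <= 0 for c in corners):
            cells["solid"] += 1   # fully covered, excluded from the flow
        else:
            cells["cut"] += 1     # boundary crosses the cell: cut cell
```

Embedded boundary/volume-of-fluid methods then compute face apertures and volume fractions for the cut cells, so no geometric detail is lost to a body-fitted mesh.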
DEFF Research Database (Denmark)
Abildskov, Jens; O'Connell, J.P.
2005-01-01
This paper extends our previous simplified approach to using group contribution methods and limited data to determine differences in solubility of sparingly soluble complex chemicals as the solvent is changed. New applications include estimating temperature dependence and the effect of adding cosolvents. Though we present no new solution theory, the paper shows an especially efficient use of thermodynamic models for solvent and cosolvent selection for product formulations. Examples and discussion of applications are given. (c) 2004 Elsevier B.V. All rights reserved.
Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques
2012-01-01
Work in organizations requires a minimum level of consensus on the understanding of the practices performed. To adopt technological devices to support activities in environments where work is complex, characterized by interdependence among a large number of variables, understanding how work is done not only takes on even greater importance, but also becomes a more difficult task. Therefore, this study presents a method for modeling work in complex systems, which allows improving knowledge about the way activities are performed where these activities do not simply happen by executing procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to provide a method capable of giving a detailed and accurate view of how people perform their tasks, in order to apply information systems for supporting work in organizations.
Freund, Roland
1988-01-01
Conjugate gradient type methods are considered for the solution of large linear systems Ax = b with complex coefficient matrices of the type A = T + iσI, where T is Hermitian and σ a real scalar. Three different conjugate gradient type approaches, with iterates defined by a minimal residual property, a Galerkin type condition, and a Euclidean error minimization, respectively, are investigated. In particular, numerically stable implementations based on the ideas behind Paige and Saunders' SYMMLQ and MINRES for real symmetric matrices are proposed. Error bounds for all three methods are derived. It is shown how the special shift structure of A can be preserved by using polynomial preconditioning, and results on the optimal choice of the polynomial preconditioner are given. Also, some numerical experiments for matrices arising from finite difference approximations to the complex Helmholtz equation are reported.
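The simplest member of this family, the one-dimensional minimal-residual iteration, illustrates the minimal residual property on a toy shifted system (values invented; the SYMMLQ/MINRES-style implementations discussed in the abstract are considerably more refined):

```python
def matvec(A, v):
    """Dense matrix-vector product over complex entries."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def mr_solve(A, b, iters=300):
    """Minimal-residual iteration, the simplest conjugate-gradient-type
    method: each step minimizes ||b - A x|| along the current residual.
    It converges whenever the Hermitian part of A is positive definite,
    which holds for A = T + i*sigma*I with T Hermitian positive definite."""
    x = [0j] * len(b)
    r = list(b)
    for _ in range(iters):
        Ar = matvec(A, r)
        den = sum(abs(ar) ** 2 for ar in Ar)
        if den == 0.0:           # exact solution reached
            break
        alpha = sum(ar.conjugate() * ri for ar, ri in zip(Ar, r)) / den
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
        r = [ri - alpha * ari for ri, ari in zip(r, Ar)]
    return x

# Toy shifted system A = T + i*sigma*I (matrix and shift invented).
sigma = 2.0
T = [[4.0, 1.0], [1.0, 3.0]]
A = [[T[i][j] + (1j * sigma if i == j else 0.0) for j in range(2)]
     for i in range(2)]
b = [1.0 + 0j, 2.0 + 0j]
x = mr_solve(A, b)
```

Because A differs from T only by a scalar shift, the Krylov subspaces of A and T coincide, which is the structural fact the paper's shift-preserving preconditioning exploits.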
International Nuclear Information System (INIS)
Oliveira, M.A. de.
1986-01-01
This work comprises a theoretical introduction covering the crystal concept, the interaction between X-rays and the crystal medium, and methods for determining small molecular structures, applied to the solution of the crystal structures of praseodymium, neodymium and europium complexes with perrhenate and trans-1,4-dithiane-1,4-dioxide (TDTD), of general formula [Ln(H₂O)₄(η-TDTD)(η'-ReO₄)(μ-η²-TDTD)]ₙ(ReO₄)₂ₙ·nTDTD, where Ln = Eu, Pr, Nd, and of methyl-2,6-anhydro-3-azido-4-O-benzoyl-3-deoxy-α-D-idopyranoside. The structure of the C₁₄H₁₅N₃O₅ organic compound was determined using direct methods. (M.C.K.)
Study of complex amalgams containing alkali metals by method of broken thermometric titration
International Nuclear Information System (INIS)
Filippova, L.M.; Zebreva, A.I.; Espenbetov, A.A.
1977-01-01
Complex potassium-cadmium and sodium-cadmium amalgams containing different amounts of the alkali metal and cadmium have been studied by thermometric titration with mercury. The experiments were carried out in an argon atmosphere at 25 deg C. As evidenced by the titration of sodium-cadmium amalgams, in the range of concentrations studied (C(Na) = 0.71-2.95, C(Cd) = 4.38-6.45 g-at/l Hg) no solid phase is formed in them. Potassium-cadmium amalgams in which the metal content is no higher than the individual solubilities in mercury display, when titrated with mercury, negative heat effects due to solid phase formation. An estimation is made of the solid phase composition, its solubility in mercury and its heat of dissolution. The solid phase appearing in complex K-Cd amalgams is likely to contain K and Cd in a ratio of 1:1; its conventional solubility product is 5.4 g-at/l Hg, and its heat of dissolution in mercury at 25 deg C is -21 ± 4 kJ/g-at
Energy Technology Data Exchange (ETDEWEB)
Toh, K.C.; Trefethen, L.N. [Cornell Univ., Ithaca, NY (United States)
1994-12-31
What properties of a nonsymmetric matrix A determine the convergence rate of iterations such as GMRES, QMR, and Arnoldi? If A is far from normal, should one replace the usual Ritz values → eigenvalues notion of convergence of Arnoldi by alternative notions such as Arnoldi lemniscates → pseudospectra? Since Krylov subspace iterations can be interpreted as minimization processes involving polynomials of matrices, the answers to questions such as these depend upon mathematical problems of the following kind. Given a polynomial p(z), how can one bound the norm of p(A) in terms of (1) the size of p(z) on various sets in the complex plane, and (2) the locations of the spectrum and pseudospectra of A? This talk reports some progress towards solving these problems. In particular, the authors present theorems that generalize the Kreiss matrix theorem from the unit disk (for the monomial Aⁿ) to a class of general complex domains (for polynomials p(A)).
International Nuclear Information System (INIS)
Grigorescu, L.
1976-07-01
The efficiency extrapolation method was improved by establishing ''linearity conditions'' for the discrimination on the gamma channel of the coincidence equipment. These conditions were shown to eliminate the systematic error of the method. A control procedure for the fulfilment of the linearity conditions and for the estimation of the residual systematic error is given. For low-energy gamma transitions an ''equivalent scheme principle'' was established, which allows a correct application of the method. Solutions of Cs-134, Co-57, Ba-133 and Zn-65 were standardized with an ''effective standard deviation'' of 0.3-0.7 per cent. For Zn-65 ''special linearity conditions'' were applied. (author)
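In 4πβ-γ coincidence counting, the extrapolated quantity Nβ·Nγ/Nc is, under linearity conditions of this kind, a linear function of the inefficiency parameter (1 − ε)/ε, where ε = Nc/Nγ is the beta-channel efficiency, and the activity N₀ is the intercept at ε = 1. A sketch with noise-free synthetic data (N₀ and the slope are invented for the illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

# Synthetic measurements:  Nb*Ng/Nc = N0 * (1 + k*(1 - eff)/eff).
N0_true, k = 1000.0, 0.3            # invented activity and slope factor
effs = [0.60, 0.70, 0.80, 0.90]     # beta efficiencies varied by discrimination
u = [(1.0 - e) / e for e in effs]
y = [N0_true * (1.0 + k * ui) for ui in u]

intercept, slope = fit_line(u, y)   # the intercept estimates the activity N0
```

The linearity conditions in the abstract are exactly what justify fitting a straight line here; if they are violated, the extrapolated intercept carries a systematic error.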
A complex neutron activation method for the analysis of biological materials
International Nuclear Information System (INIS)
Ordogh, M.
1978-01-01
The aim of the present work was to deal primarily with a few essential trace elements and to obtain reliable results of adequate accuracy and precision for the analysis of biological samples. A few elements other than trace elements were determined by the nondestructive technique, as they can be well evaluated from the gamma spectra. In developing the method, Bowen's kale was chosen as the model material. To confirm the reliability of the method, two samples proposed by the IAEA in the framework of an international comparative analysis series were analysed. The comparative analysis shows the present method to be reliable; the precision and accuracy are good. (author)
Energy Technology Data Exchange (ETDEWEB)
Shi, Min [Anhui University, School of Physics and Materials Science, Hefei (China); RIKEN Nishina Center, Wako (Japan); Shi, Xin-Xing; Guo, Jian-You [Anhui University, School of Physics and Materials Science, Hefei (China); Niu, Zhong-Ming [Anhui University, School of Physics and Materials Science, Hefei (China); Interdisciplinary Theoretical Science Research Group, RIKEN, Wako (Japan); Sun, Ting-Ting [Zhengzhou University, School of Physics and Engineering, Zhengzhou (China)
2017-03-15
We have extended the complex scaled Green's function method to the relativistic framework describing deformed nuclei with the theoretical formalism presented in detail. We have checked the applicability and validity of the present formalism for exploration of the resonances in deformed nuclei. Furthermore, we have studied the dependences of resonances on nuclear deformations and the shape of potential, which are helpful to recognize the evolution of resonant levels from stable nuclei to exotic nuclei with axially quadruple deformations. (orig.)
Calculation of the reliability of large complex systems by the relevant path method
International Nuclear Information System (INIS)
Richter, G.
1975-03-01
In this paper, analytical methods are presented and tested with which the probabilistic reliability data of technical systems can be determined for given fault trees and block diagrams and known reliability data of the components. (orig./AK) [de
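One classical path-based calculation derives system reliability from minimal path sets by inclusion-exclusion, assuming independent components; the report's "relevant path method" can be read as a refinement of this idea, though its details are not given in the abstract. A minimal sketch with an invented two-path system:

```python
from itertools import combinations

def path_set_reliability(paths, p):
    """System reliability from minimal path sets by inclusion-exclusion,
    assuming independent components.  paths: list of sets of component
    names; p: dict mapping component -> survival probability."""
    total = 0.0
    for r in range(1, len(paths) + 1):
        for combo in combinations(paths, r):
            union = set().union(*combo)      # components needed jointly
            prob = 1.0
            for c in union:
                prob *= p[c]
            total += (-1) ** (r + 1) * prob  # inclusion-exclusion sign
    return total

# Invented toy system: components a-b in series, in parallel with c.
paths = [{"a", "b"}, {"c"}]
p = {"a": 0.9, "b": 0.9, "c": 0.8}
R = path_set_reliability(paths, p)  # 1 - (1 - 0.81)*(1 - 0.8) = 0.962
```

Inclusion-exclusion grows exponentially in the number of path sets, which is precisely why approximate methods are needed for the large systems the report addresses.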
Minenkov, Yury; Sharapa, Dmitry I.; Cavallo, Luigi
2018-01-01
Single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between semiempirical and DFT methods, indicating a fundamental difference in the potential energy surfaces (PES). To identify the origin of the deviation
LENUS (Irish Health Repository)
O'Sullivan, S T
2012-02-03
With the introduction of low-profile mini-plating systems, a trend has developed towards open reduction and rigid internal fixation (ORIF) of fractures of the cranio-facial skeleton. The current policy for management of zygomatic fractures in our unit is to attempt primary reduction by traditional methods and to proceed to ORIF in the event of unsatisfactory fracture stability or alignment. Over a one-year period, 109 patients underwent surgical correction of fractures of the zygomatic complex. Standard Gillies' elevation was performed in 71 cases, percutaneous elevation in three cases, and ORIF in 35 cases. Mean follow-up was 190 days. One case of persistent infraorbital step and three cases of residual malar flattening were documented in patients who underwent Gillies or percutaneous elevation. Morbidity associated with ORIF was minimal. We conclude that while ORIF of zygomatic fractures may offer better results than traditional methods in the management of complex fractures, traditional methods still have a role to play in less complex fractures.
Developments based on stochastic and determinist methods for studying complex nuclear systems
International Nuclear Information System (INIS)
Giffard, F.X.
2000-01-01
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
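The speed-up described above rests on importance biasing of the Monte Carlo sampling. As a toy illustration of the principle only (not the TRIPOLI-4/ERANOS scheme itself), the sketch below estimates a rare Gaussian tail probability by sampling from a biased density concentrated in the region of interest and reweighting each sample by the ratio of target to sampling density:

```python
import math
import random

def normal_tail_is(threshold=4.0, n=50_000, seed=1):
    """Estimate P(Z > threshold) for Z ~ N(0,1) by importance sampling:
    draw from a shifted exponential living entirely in the tail, and
    weight each sample by target density / sampling density."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = threshold + rng.expovariate(1.0)      # sample from g(y) = exp(-(y - t)), y > t
        phi = math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)  # target N(0,1) density
        g = math.exp(-(y - threshold))            # biased sampling density
        total += phi / g                          # importance weight
    return total / n
```

Analog (unbiased) sampling would need on the order of 10^5 draws to score a single tail event here; the biased estimator scores on every draw, which is the same variance-reduction idea applied to deep-penetration shielding problems.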
Methods of Optimization and Systems Analysis for Problems of Transcomputational Complexity
Sergienko, Ivan V
2012-01-01
This work presents lines of investigation and scientific achievements of the Ukrainian school of optimization theory and adjacent disciplines. These include the development of approaches to mathematical theories, methodologies, methods, and application systems for the solution of applied problems in economy, finances, energy saving, agriculture, biology, genetics, environmental protection, hardware and software engineering, information protection, decision making, pattern recognition, self-adapting control of complicated objects, personnel training, etc. The methods developed include sequentia
METHODS OF ESTIMATION OF COMPLEX FREQUENCY DESCRIPTIONS OF LTM AND BRM BEARING KNOTS
A. Pohrebnyak
2015-01-01
A method of studying bearing units (ball bearings) of handling, construction and road machines in the low-frequency range of the vibro-acoustic signal is presented. It includes: the choice of the method and locations for vibration sensor installation; vibration signal registration modes and instruments; algorithms for processing and forming the diagnostic features of the signal; and determination of the threshold values of the diagnostic parameter. The characteristic spectra of vibration velocities...
Asan, Onur; Montague, Enid
2014-01-01
The purpose of this paper is to describe the use of video-based observation research methods in the primary care environment, highlight important methodological considerations, and provide practical guidance for primary care and human factors researchers conducting video studies to understand patient-clinician interaction in primary care settings. We reviewed studies in the literature which used video methods in health care research, and we also drew on our own experience from the video studies we conducted in primary care settings. This paper highlights the benefits of using video techniques, such as multi-channel recording and video coding, and compares "unmanned" video recording with the traditional observation method in primary care research. We propose a list that can be followed step by step to conduct an effective video study in a primary care setting for a given problem. This paper also describes obstacles researchers should anticipate when using video recording methods in future studies. With new technological improvements, video-based observation research is becoming a promising method in primary care and HFE research. Video recording has been under-utilised as a data collection tool because of confidentiality and privacy issues. However, it has many benefits compared with traditional observation, and recent studies using video recording methods have introduced new research areas and approaches.
International Nuclear Information System (INIS)
Solodukhin, V.; Silachyov, I.; Poznyak, V.; Gorlachev, I.
2016-01-01
The paper describes the development of nuclear-physical methods of analysis and their applications in Kazakhstan for geological tasks and technology. The basic methods of this complex include instrumental neutron-activation analysis, X-ray fluorescence analysis and instrumental γ-spectrometry. The following aspects are discussed: applications of the developed and adopted analytical techniques for the assessment and calculation of rare-earth metal reserves at various deposits in Kazakhstan, for the development of technologies for mining and extraction from uranium-phosphorus ore and wastes, for radioactive coal gasification technology, and for studies of rare metal contents in chromite, bauxites, black shales and their processing products. (author)
DEFF Research Database (Denmark)
Politis, E.S.; Prospathopoulos, J.; Cabezon, D.
2012-01-01
Computational fluid dynamic (CFD) methods are used in this paper to predict the power production from entire wind farms in complex terrain and to shed some light into the wake flow patterns. Two full three-dimensional Navier–Stokes solvers for incompressible fluid flow, employing k-ε and k-ω turbulence closures, are used. The wind turbines are modeled as momentum absorbers by means of their thrust coefficient through the actuator disk approach. Alternative methods for estimating the reference wind speed in the calculation of the thrust are tested. The work presented in this paper is part...
Complex of radionuclide methods in diagnosis of diffuse and focal hepatic lesions
International Nuclear Information System (INIS)
Yakovleva, L.A.
1984-01-01
For demonstrating focal affections of the liver, a complex of dynamic and static examinations may be recommended with the application of hepatotropic and tumorotropic radiopharmaceuticals. The results indicate that patients with abscesses, echinococcus or a cystic affection of the liver show a several-fold decrease of blood flow in the 'cold' foci, or no flow at all. In a primary or secondary tumorous process there is no blood flow in the foci; there is, however, a prolonged half-life of accumulation of the preparation and a 50% decrease in the velocity of its uptake in the liver. Changes of the quantitative indices characterizing an increase of blood flow to 150-200% of normal values, with a prevailing arterial component of the blood flow through the liver in all segments, may be a sign of a cirrhotic process. (author)
International Nuclear Information System (INIS)
Silva, Ronaldo Lins da
2017-01-01
This study presents a new methodology for the absolute standardization of 133Ba, a radionuclide with a complex decay scheme, using the peak-sum coincidence method associated with gamma spectrometry with a high-resolution germanium detector. The use of direct matrix multiplication allowed all sum-coincidence energies to be identified, together with their probabilities of detection, which made it possible to calculate the detection probabilities of the interfering energies. In addition, with deconvolution software it was possible to obtain the peak areas free of interference from other sums, and by means of the equation deduced for the peak-sum method, 133Ba could be standardized. The resulting activity was compared with those found by the absolute methods existing at the LNMRI, among which the peak-sum coincidence result stood out. The estimated uncertainties were below 0.30%, compatible with results reported in the literature for other absolute methods. It was thus verified that the methodology is able to standardize 133Ba with precision, accuracy, ease and speed. The relevance of this doctoral thesis is to provide the National Metrology Laboratory of Ionizing Radiation (LNMRI) with a new absolute standardization methodology for radionuclides with complex decay schemes. (author)
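The enumeration of sum-coincidence energies mentioned above can be sketched in a few lines. The 133Ba energies below are the principal gamma lines in keV; cascade feasibility and the detection probabilities, which the thesis obtains via matrix multiplication over the decay scheme, are not modeled here:

```python
def sum_peaks(lines_kev):
    """All pairwise coincidence-sum energies (keV), sorted and de-duplicated."""
    sums = set()
    for i, a in enumerate(lines_kev):
        for b in lines_kev[i + 1:]:
            sums.add(round(a + b, 1))
    return sorted(sums)

# Principal 133Ba gamma lines (keV)
ba133 = [81.0, 276.4, 302.9, 356.0, 383.8]
peaks = sum_peaks(ba133)
```

For example, the 81.0 + 302.9 = 383.9 keV sum coincidence lands on top of the 383.8 keV line, which is exactly the kind of interference whose detection probability must be computed before the peak areas can be corrected.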
International Nuclear Information System (INIS)
Aoba, Tomoya; Bizen, Takeshi; Suzuki, Tsuneo; Nakayama, Tadachika; Suematsu, Hisayuki; Niihara, Koichi; Katsumata, Tetsuhiro; Inaguma, Yoshiyuki
2011-01-01
Samples of a CuBa2Ca3Cu4O10+δ superconductor were synthesized under a high pressure of 5 GPa at 1100-1200 °C for 30 min using precursors produced by solid-state reaction and polymerized complex methods. Compared with the precursors prepared by the solid-state reaction method, the precursors produced by the polymerized complex method have smaller grain sizes. The superconducting transition temperature of the samples prepared using precursors made by the polymerized complex method was found to be 113 K. The volume fractions of the superconducting phase in the samples prepared using precursors made by the solid-state reaction and polymerized complex methods were 49 and 36%, respectively. From these results, precursors made by the polymerized complex method can be used in the high-pressure synthesis of superconductors similarly to those made by the solid-state reaction method. (author)
Arulandhu, Alfred J; Staats, Martijn; Hagelaar, Rico; Voorhuijzen, Marleen M; Prins, Theo W; Scholtens, Ingrid; Costessi, Adalberto; Duijsings, Danny; Rechenmann, François; Gaspar, Frédéric B; Barreto Crespo, Maria Teresa; Holst-Jensen, Arne; Birck, Matthew; Burns, Malcolm; Haynes, Edward; Hochegger, Rupert; Klingl, Alexander; Lundberg, Lisa; Natale, Chiara; Niekamp, Hauke; Perri, Elena; Barbante, Alessandra; Rosec, Jean-Philippe; Seyfarth, Ralf; Sovová, Tereza; Van Moorleghem, Christoff; van Ruth, Saskia; Peelen, Tamara; Kok, Esther
2017-10-01
DNA metabarcoding provides great potential for species identification in complex samples such as food supplements and traditional medicines. Such a method would aid Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) enforcement officers to combat wildlife crime by preventing illegal trade of endangered plant and animal species. The objective of this research was to develop a multi-locus DNA metabarcoding method for forensic wildlife species identification and to evaluate the applicability and reproducibility of this approach across different laboratories. A DNA metabarcoding method was developed that makes use of 12 DNA barcode markers that have demonstrated universal applicability across a wide range of plant and animal taxa and that facilitate the identification of species in samples containing degraded DNA. The DNA metabarcoding method was developed based on Illumina MiSeq amplicon sequencing of well-defined experimental mixtures, for which a bioinformatics pipeline with user-friendly web-interface was developed. The performance of the DNA metabarcoding method was assessed in an international validation trial by 16 laboratories, in which the method was found to be highly reproducible and sensitive enough to identify species present in a mixture at 1% dry weight content. The advanced multi-locus DNA metabarcoding method assessed in this study provides reliable and detailed data on the composition of complex food products, including information on the presence of CITES-listed species. The method can provide improved resolution for species identification, while verifying species with multiple DNA barcodes contributes to an enhanced quality assurance. © The Authors 2017. Published by Oxford University Press.
International Nuclear Information System (INIS)
Ikramov, A.I.
2003-01-01
This dissertation analyzes the results of diagnosis and surgical treatment of 1,741 patients with lung and liver hydatidosis. The investigations were carried out with a complex of radiological methods: X-ray, sonography, CT and MRI. The separate and cumulative information of the listed methods was analyzed on the basis of their sensitivity, specificity and overall accuracy. The classification of complicated forms of lung hydatidosis and the criteria for differential diagnosis of spherical lung formations and focal liver formations are discussed. A technique of transthoracic sonography was developed for revealing and differentially diagnosing subpleural hydatid localizations and fluid accumulations in the pleural cavity. The indications and contraindications for transcutaneous fine-needle biopsy in the diagnosis of hydatidosis were determined. On the basis of the results of complex X-ray diagnostics, original algorithms for the sequence of the various visualization methods were developed. The indications for pancreatocholangiography were substantiated for suspected rupture of hydatid cysts into the biliary tract with mechanical jaundice. The results of traditional surgical treatment of hydatidosis, low-invasive transcutaneous procedures, endovisual surgical operations and chemotherapy were analyzed. Indications for the listed methods were developed depending on the form and stage of the disease and the localization of the cysts. The results of transcutaneous aspiration and drainage of residual cavities after hydatidectomy were analyzed, and a comparative assessment of traditional surgical methods versus CT- and US-guided transcutaneous aspiration and drainage is given. The role and place of complex radiological diagnosis using sonography, X-ray and CT are determined for early and late postoperative complications after hydatidectomy of the lung and liver (pleural effusion, subdiaphragmatic abscess and
Minenkov, Yury; Chermak, Edrisse; Cavallo, Luigi
2015-01-01
The performance of the domain-based local pair natural orbital coupled-cluster (DLPNO-CCSD(T)) method has been tested for reproducing experimental gas-phase ligand dissociation enthalpies in a series of Cu+, Ag+ and Au+ complexes. For 33 Cu+ non-covalent ligand dissociation enthalpies, all-electron calculations with this method result in a MUE below 2.2 kcal/mol, although a MSE of 1.4 kcal/mol indicates systematic underestimation of the experimental values. Inclusion of scalar relativistic effects for Cu, either via an effective core potential (ECP) or the Douglas-Kroll-Hess Hamiltonian, reduces the MUE below 1.7 kcal/mol and the MSE to -1.0 kcal/mol. For 24 Ag+ non-covalent ligand dissociation enthalpies, the DLPNO-CCSD(T) method results in a mean unsigned error (MUE) below 2.1 kcal/mol and a vanishing mean signed error (MSE). For 15 Au+ non-covalent ligand dissociation enthalpies, the DLPNO-CCSD(T) method provides larger MUE and MSE, equal to 3.2 and 1.7 kcal/mol, which might be related to the poor precision of the experimental measurements. Overall, for the combined dataset of 72 coinage metal ion complexes, DLPNO-CCSD(T) results in a MUE below 2.2 kcal/mol and an almost vanishing MSE. As for a comparison with computationally cheaper density functional theory (DFT) methods, the routinely used M06 functional results in MUE and MSE equal to 3.6 and -1.7 kcal/mol. Results converge already at a cc-pVTZ quality basis set, making highly accurate DLPNO-CCSD(T) estimates affordable for routine single-point calculations on large transition metal complexes of >100 atoms.
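The MUE and MSE figures quoted above are simple averages over the signed deviations between computed and experimental enthalpies. A sketch with made-up values (not the paper's data) shows the two statistics and why a near-zero MSE can coexist with a nonzero MUE:

```python
def mue_mse(computed, experimental):
    """Mean unsigned error and mean signed error (same units as inputs)."""
    errors = [c - e for c, e in zip(computed, experimental)]
    mue = sum(abs(d) for d in errors) / len(errors)
    mse = sum(errors) / len(errors)
    return mue, mse

# Hypothetical dissociation enthalpies in kcal/mol (illustration only)
calc = [45.1, 38.7, 51.9]
expt = [44.0, 40.2, 52.0]
mue, mse = mue_mse(calc, expt)
```

A negative MSE with a larger MUE, as for M06 in the abstract, means errors of both signs that on balance overbind slightly but scatter more widely.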
The Reliasep method used for the functional modeling of complex systems
International Nuclear Information System (INIS)
Dubiez, P.; Gaufreteau, P.; Pitton, J.P.
1997-07-01
The RELIASEP(R) method and its support tool have been recommended for carrying out the functional analysis of large systems within the framework of the design of new power units. We first recall the principles of the method, based on the breakdown of functions into trees; these functions are characterised by their performance and constraints. The main modifications made at EDF's request, in particular the 'viewpoints' analyses, are then presented. The knowledge gained from the first studies carried out is discussed. (author)
The complexity of interior point methods for solving discounted turn-based stochastic games
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus
2013-01-01
for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run... For n states and discount factor γ we get κ = Θ(n/(1−γ)²), −δ = Θ(√n/(1−γ)), and 1/θ = Θ(n/(1−γ)²) in the worst case. The lower bounds for κ, −δ, and 1/θ are all obtained using the same family of deterministic games.
Directory of Open Access Journals (Sweden)
Aaron G Day-Williams
Full Text Available Pooled sequencing can be a cost-effective approach to disease variant discovery, but its applicability in association studies remains unclear. We compare sequence enrichment methods coupled to next-generation sequencing in non-indexed pools of 1, 2, 10, 20 and 50 individuals and assess their ability to discover variants and to estimate their allele frequencies. We find that pooled resequencing is most usefully applied as a variant discovery tool due to limitations in estimating allele frequency with high enough accuracy for association studies, and that in-solution hybrid-capture performs best among the enrichment methods examined regardless of pool size.
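The allele-frequency limitation discussed above can be made concrete: even the read-sampling component of the error is noticeable at typical pool depths. A sketch with hypothetical counts (the study's full error budget also includes unequal DNA contributions per individual and enrichment bias, not modeled here):

```python
def pooled_af_estimate(alt_reads, total_reads):
    """Naive allele-frequency estimate from pooled read counts."""
    return alt_reads / total_reads

def binomial_se(freq, total_reads):
    """Standard error from read sampling alone, treating reads as
    independent draws from the pool's true allele frequency."""
    return (freq * (1.0 - freq) / total_reads) ** 0.5

# Hypothetical: 30 alternate-allele reads out of 300 at one site
af = pooled_af_estimate(30, 300)
se = binomial_se(af, 300)
```

At 300x depth the read-sampling standard error on a 10% allele is already about 1.7 percentage points, before any per-individual contribution noise, which is why the authors favour pooling for discovery rather than for association testing.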
Directory of Open Access Journals (Sweden)
O. G. Smalyuh
2015-04-01
Full Text Available The aim of our study was to develop a method for the identification and assay of the essential oils of mint and turmeric in a complex medicinal product in capsule form. Materials and methods. The study used samples of turmeric and mint essential oils and of the complex drug, in the form of capsules containing peppermint oil, turmeric (Curcuma longa) oil, and a mixture of extracts of sandy everlasting (Helichrysum arenarium (L.) Moench), marigold (Calendula officinalis L.), wild carrot (Daucus carota) and turmeric (Curcuma longa). Results and discussion. The complex drug consists of dry extract of sandy everlasting flowers, thick extract of wild carrot, extract of marigold flowers and fruits, dry extract of Curcuma longa, and the essential oils of peppermint and turmeric. Based on the study of different samples of peppermint oil, and given the need for its identification and quantification in the finished medicinal product, we chose menthol as the analytical marker. To establish the identity of the complex drug, its main components (ar-turmerone and α- and β-turmerone) and their total content must meet the quantitative indicator 'turmerone content' in the specifications for turmeric oil. Studies of the sample preparation conditions led to the choice of 96% ethanol to extract the oil components from the sample, with ultrasonication and centrifugation to improve recovery from the capsule mass. Chromatographic characteristics of the substances were obtained on an Agilent HP-Innowax column. It was established that the other active pharmaceutical ingredients of the capsule (placebo) did not affect the quantification of the components of the essential oils of mint and turmeric. Conclusions. 1. The chromatographic conditions for the identification and assay of the essential oils of mint and turmeric in the complex drug, and the optimal conditions for sample preparation and analysis by gas chromatography, have been established. 2. Methods for the identification and assay of menthol and ar-, α- and β-turmerone in the complex drug based on
Murgha, Yusuf E; Rouillard, Jean-Marie; Gulari, Erdogan
2014-01-01
Custom-defined oligonucleotide collections have a broad range of applications in the fields of synthetic biology, targeted sequencing, and cytogenetics. They are also used to encode information for technologies like RNA interference, protein engineering and DNA-encoded libraries. High-throughput parallel DNA synthesis technologies developed for the manufacture of DNA microarrays can produce libraries of large numbers of different oligonucleotides, but in very limited amounts. Here, we compare three approaches to prepare large quantities of single-stranded oligonucleotide libraries derived from microarray-synthesized collections. The first approach, alkaline melting of double-stranded PCR-amplified libraries with a biotinylated strand captured on streptavidin-coated magnetic beads, yields little or no non-biotinylated ssDNA. The second method, in which the phosphorylated strand of PCR-amplified libraries is nucleolytically hydrolyzed, is recommended when small amounts of libraries are needed. The third method, combining in vitro transcription of PCR-amplified libraries with reverse transcription of the RNA product into single-stranded cDNA, is our recommended method for producing large amounts of oligonucleotide libraries. Finally, we propose a method to remove any primer-binding sequences introduced during library amplification.
Larese De Tetto, Antonia; Rossi, Riccardo; Idelsohn Barg, Sergio Rodolfo; Oñate Ibáñez de Navarra, Eugenio
2006-01-01
Several comparisons between experiments and computational models are presented in the following pages. The objective is to verify the ability of the Particle Finite Element Method (PFEM) [1] [2] to reproduce hydraulic phenomena involving large deformation of the fluid domain [4]. Peer Reviewed
Nuclear methods on service of mountain manufacture Navoi Mining-Metallurgical complex
International Nuclear Information System (INIS)
Kucherskiy, N.I.
2004-01-01
Full text: For a number of major minerals, such as gold, uranium, copper, tungsten, potash salts, phosphorites and kaolins, Uzbekistan occupies a leading place among the states of the world in confirmed reserves and predicted resources. The main deposits of gold and uranium are concentrated in the Central Kyzylkum region, which is the field of activity of the Navoi Mining-Metallurgical Combine. About 60,000 people are employed in the industrial divisions of the combine, located in five regions of the republic. At all stages of gold production (starting from exploration), analytical support plays an extremely important role. Radioanalytical methods are widely used in NMMC; in particular, at the Muruntau mine a unique gamma-activation analysis laboratory has been constructed and put into operation. Over the period of the laboratory's operation, i.e. since 1977, more than nine million analyses of geological samples have been performed with extremely high throughput (of the order of tens of seconds per sample). The X-ray-radiometric method is used for large-portion (truck-load) sorting and for lump separation of ores. With the help of high-sensitivity radiometric measurement means it is possible to process phosphorites for the production of phosphoric fertilizers. Nuclear-physical methods are also applied to other problems. Thus, through the application of nuclear-physical methods for the operational control of the technological processes of mining production, quality management of ores, and accounting of the quantity of extracted products and their preliminary enrichment, the pressing problem of increasing the profitability of the entire mining production of NMMC is being solved
Method Enabling Gene Expression Studies of Pathogens in a Complex Food Matrix
DEFF Research Database (Denmark)
Kjeldgaard, Jette; Henriksen, Sidsel; Cohn, Marianne Thorup
2011-01-01
We describe a simple method for stabilizing and extracting high-quality prokaryotic RNA from meat. Heat and salt stress of Escherichia coli and Salmonella spp. in minced meat reproducibly induced dnaK and otsB expression, respectively, as observed by quantitative reverse transcription-PCR (>5-fold...
Maccormack, R. W.
1978-01-01
The calculation of flow fields past aircraft configurations at flight Reynolds numbers is considered. Progress in devising accurate and efficient numerical methods, in understanding and modeling the physics of turbulence, and in developing reliable and powerful computer hardware is discussed. Emphasis is placed on efficient solutions to the Navier-Stokes equations.
Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes
Piatek, Marek J.; Schramm, Michael C.; Burra, Dharani Dhar; BinShbreen, Abdulaziz; Jankovic, Boris R.; Chowdhary, Rajesh; Archer, John A.C.; Bajic, Vladimir B.
2013-01-01
initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only
Lancellotti, V.; Hon, de B.P.; Tijhuis, A.G.
2009-01-01
Linear embedding via Green's operators (LEGO) is a computational method in which the multiple scattering between adjacent objects - forming a large composite structure - is determined through the interaction of simple-shaped building domains, whose electromagnetic (EM) behavior is accounted for by
Synthesis of Complex-Alloyed Nickel Aluminides from Oxide Compounds by Aluminothermic Method
Directory of Open Access Journals (Sweden)
Victor Gostishchev
2018-06-01
Full Text Available This paper deals with the investigation of complex-alloyed nickel aluminides obtained from oxide compounds by aluminothermic reduction. The aim of the work was to study and develop the physicochemical basis for obtaining complex-alloyed nickel aluminides and their application for enhancing the properties of coatings made by electrospark deposition (ESD) on steel castings, as well as their use as grain refiners for tin bronze. The peculiarities of microstructure formation of master alloys based on the Al-TM (transition metal) system were studied using optical and scanning electron microscopy and X-ray spectral microanalysis. Regularities were found in the formation of the structural components of the aluminum alloys (Ni-Al, Ni-Al-Cr, Ni-Al-Mo, Ni-Al-W, Ni-Al-Ti, Ni-Cr-Mo-W, Ni-Al-Cr-Mo-W-Ti, Ni-Al-Cr-V, Ni-Al-Cr-V-Mo) and in the changes in their microhardness, depending on the composition of the charge, which consisted of oxide compounds, and on the amount of reducing agent (aluminum powder). It is shown that all the alloys obtained are formed on the basis of the β phase (a solid solution of alloying elements in nickel aluminide) and a quasi-eutectic consisting of the β′ phase and intermetallics of the alloying elements. The most effective alloys in terms of increasing microhardness were Al-Ni-Cr-Mo-W (7007 MPa) and Al-Ni-Cr-V-Mo (7914 MPa). The prospect of applying the synthesized intermetallic master alloys as anode materials for producing coatings by electrospark deposition on C1030 grade steel is shown. The obtained coatings increase the heat resistance of the steel samples by 7.5 times, while the coating from the NiAl-Cr-Mo-W alloy remains practically unoxidized under the selected test conditions. The use of NiAl intermetallics as a modifying additive (0.15 wt.%) in tin bronze allows increasing the microhardness of the α-solid solution by 1.9 times and that of the eutectic (α + β phase) by 2.7 times.
Johnson, Ginger A; Vindrola-Padros, Cecilia
2017-09-01
The 2013-2016 Ebola outbreak in West Africa highlighted both the successes and limitations of social science contributions to emergency response operations. An important limitation was the rapid and effective communication of study findings. A systematic review was carried out to explore how rapid qualitative methods have been used during global health emergencies, to understand which methods are commonly used, how they are applied, and the difficulties faced by social science researchers in the field. We also assess their value and benefit for health emergencies. The review findings are used to propose recommendations for qualitative research in this context. Peer-reviewed articles and grey literature were identified through six online databases. An initial search was carried out in July 2016 and updated in February 2017. The PRISMA checklist was used to guide the reporting of methods and findings. The articles were assessed for quality using the MMAT and AACODS checklists. From an initial search yielding 1444 articles, 22 articles met the criteria for inclusion. Thirteen of the articles were qualitative studies and nine used a mixed-methods design. The purposes of the rapid studies included: identification of the causes of the outbreak, and assessment of infrastructure, control strategies, health needs, and health facility use. The studies varied in duration (from 4 days to 1 month). The main limitations identified by the authors were: the low quality of the collected data, small sample sizes, and little time for cross-checking facts with other data sources to reduce bias. Rapid qualitative methods were seen as beneficial in highlighting context-specific issues that need to be addressed locally, population-level behaviors influencing health service use, and organizational challenges in response planning and implementation. Recommendations for carrying out rapid qualitative research in this context included the early designation of community leaders as a point of
Takayama, Yukiya; Kusamori, Kosuke; Hayashi, Mika; Tanabe, Noriko; Matsuura, Satoru; Tsujimura, Mari; Katsumi, Hidemasa; Sakane, Toshiyasu; Nishikawa, Makiya; Yamamoto, Akira
2017-12-05
Mesenchymal stem cells (MSCs) have various functions, making a significant contribution to tissue repair. On the other hand, the viability and function of MSCs do not last after in vivo transplantation, and the therapeutic effects of MSCs are therefore limited. Although various chemical modification methods have been applied to MSCs to improve their viability and function, most conventional drug modification methods are short-lived, unstable, and cytotoxic. In this study, we developed a method for the long-term modification of drugs onto C3H10T1/2 cells, murine mesenchymal stem cells, without any damage, using the avidin-biotin complex method (ABC method). The modification of NanoLuc luciferase (Nluc), a reporter protein, onto C3H10T1/2 cells by the ABC method lasted for at least 14 days in vitro without major effects on cellular characteristics (cell viability, proliferation, migration ability, and differentiation ability). Moreover, in vivo, the surface Nluc modification of C3H10T1/2 cells by the ABC method lasted for at least 7 days. These results therefore indicate that the ABC method may be useful for the long-term surface modification of drugs and for effective MSC-based therapy.
Application of the Decomposition Method to the Design Complexity of Computer-based Display
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2012-05-15
The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in the process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since a poor implementation of HMIs can impair the operators' information searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a wider variety of formats than a conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater due to the reduced distinctiveness of each element. A greater understanding is emerging about the effectiveness of computer-based display designs, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color, or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements constituting a computer-based display
Development of complex electrokinetic decontamination method for soil contaminated with uranium
International Nuclear Information System (INIS)
Kim, Gye-Nam; Kim, Seung-Soo; Park, Hye-Min; Kim, Wan-Suk; Moon, Jei-Kwon; Hyeon, Jay-Hyeok
2012-01-01
A 520 L complex electrokinetic soil decontamination unit was manufactured to clean up uranium-contaminated soils from Korean nuclear facilities. Decontamination experiments were carried out to remove more than 95% of the uranium from the radioactive soil through soil washing and electrokinetic technology. To reduce the generation of large quantities of metal oxides at the cathode, a pH controller was used to keep the pH of the electrolyte waste solution between 0.5 and 1, favoring the formation of UO 2 2+ . More than 80% of the metal oxides were removed through pre-washing; the electrolyte waste solution was circulated by a pump, and a metal oxide separator filtered out the metal oxide particles. 80–85% of the uranium was removed from the soil by soil washing as part of the pre-treatment. When the initial uranium concentration of the soil was 21.7 Bq/g, the required electrokinetic decontamination time was 25 days. When the initial concentration of 238 U in the soil was higher, a longer decontamination time was needed, but the removal rate of 238 U from the soil was also higher.
Carbon-tuned bonding method significantly enhanced the hydrogen storage of BN-Li complexes.
Deng, Qing-ming; Zhao, Lina; Luo, You-hua; Zhang, Meng; Zhao, Li-xia; Zhao, Yuliang
2011-11-01
Through first-principles calculations, we found that doping carbon atoms onto BN monolayers (BNC) could significantly strengthen the Li bond on this material. Unlike the weak bond between Li atoms and the pristine BN layer, Li atoms are strongly hybridized with and donate their electrons to the doped substrate, which is responsible for the enhanced binding energy. Li adsorbed on the BNC layer can serve as a high-capacity hydrogen storage medium, without forming clusters, which can be recycled at room temperature. Eight polarized H 2 molecules are attached to two Li atoms with an optimal binding energy of 0.16-0.28 eV/H 2 , which results from the electrostatic interaction of the polarized charge of the hydrogen molecules with the electric field induced by the positive Li atoms. This practical carbon-tuned BN-Li complex can work as a very high-capacity hydrogen storage medium with a gravimetric density of hydrogen of 12.2 wt.%, much higher than the gravimetric goal of 5.5 wt.% hydrogen set by the U.S. Department of Energy for 2015.
Wu, Wei; Tang, Xiao-Ping; Ma, Xue-Qing; Liu, Hong-Bin
2016-08-01
Soil temperature variability data provide valuable information for understanding land-surface ecosystem processes and climate change. This study developed and analyzed a spatial dataset of monthly mean soil temperature at a depth of 10 cm over a complex topographical region in southwestern China. The records were measured at 83 stations during the period 1961-2000. Nine approaches were compared for interpolating soil temperature. The accuracy indicators were root mean square error (RMSE), modelling efficiency (ME), and coefficient of residual mass (CRM). The results indicated that thin plate spline with latitude, longitude, and elevation gave the best performance, with RMSE varying between 0.425 and 0.592 °C, ME between 0.895 and 0.947, and CRM between -0.007 and 0.001. A spatial database was developed based on the best model. The dataset showed that the largest seasonal changes of soil temperature over the region occurred from autumn to winter. The northern and eastern areas, with hilly and low-middle mountains, experienced the largest seasonal changes.
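The three accuracy indicators named above follow standard definitions; a minimal sketch (illustrative function name, not the authors' code) for computing them from observed and interpolated values:

```python
import numpy as np

def validation_metrics(observed, predicted):
    """Standard accuracy indicators for comparing spatial interpolators:
    RMSE (root mean square error), ME (modelling efficiency, in the
    Nash-Sutcliffe form), and CRM (coefficient of residual mass)."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((p - o) ** 2))                            # 0 is perfect
    me = 1.0 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2)    # 1 is perfect
    crm = (np.sum(o) - np.sum(p)) / np.sum(o)                        # 0 means no mass bias
    return rmse, me, crm
```

ME near 1 and CRM near 0, as reported for the thin plate spline model, indicate predictions that track both the variance and the overall mass of the observations.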
A Tractable Method for Describing Complex Couplings between Neurons and Population Rate.
Gardella, Christophe; Marre, Olivier; Mora, Thierry
2016-01-01
Neurons within a population are strongly correlated, but how to simply capture these correlations is still a matter of debate. Recent studies have shown that the activity of each cell is influenced by the population rate, defined as the summed activity of all neurons in the population. However, an explicit, tractable model for these interactions is still lacking. Here we build a probabilistic model of population activity that reproduces the firing rate of each cell, the distribution of the population rate, and the linear coupling between them. This model is tractable, meaning that its parameters can be learned in a few seconds on a standard computer even for large population recordings. We inferred our model for a population of 160 neurons in the salamander retina. In this population, single-cell firing rates depended in unexpected ways on the population rate. In particular, some cells had a preferred population rate at which they were most likely to fire. These complex dependencies could not be explained by a linear coupling between the cell and the population rate. We designed a more general, still tractable model that could fully account for these nonlinear dependencies. We thus provide a simple and computationally tractable way to learn models that reproduce the dependence of each neuron on the population rate.
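The coupling the authors describe can be estimated directly from a spike raster; the sketch below (synthetic Bernoulli data, not the salamander recordings, and an empirical estimate rather than the authors' probabilistic model) computes each cell's firing probability conditioned on the population rate:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic raster: T time bins x N neurons (a stand-in for real recordings)
spikes = rng.random((5000, 20)) < 0.1

def coupling_curve(spikes, cell):
    """Empirical P(cell fires | population rate = k) for each count k."""
    n = spikes.shape[1]
    pop_rate = spikes.sum(axis=1)        # summed activity of all neurons
    curve = np.full(n + 1, np.nan)       # NaN where a count k never occurs
    for k in range(n + 1):
        mask = pop_rate == k
        if mask.any():
            curve[k] = spikes[mask, cell].mean()
    return curve

curve = coupling_curve(spikes, cell=0)
```

For independent synthetic cells this curve is close to linear in k; the nonlinear dependencies reported in the retina (e.g. a preferred population rate) would appear here as a non-monotonic curve.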
Directory of Open Access Journals (Sweden)
Sarsangi V.* MSc,
2016-08-01
Full Text Available Aims The occurrence of fire in residential buildings, commercial complexes, and large and small industries causes physical, environmental, and financial damage to many communities. Fire safety in hospitals is particularly sensitive, since society entrusts them with the care of the sick. The goal of this study was to apply the Fire Risk Assessment Method for Engineering (FRAME) in a hospital complex and assess the level of fire risk. Materials & Methods This descriptive study was conducted in Kashan Shahid Beheshti hospital in 2013. FRAME is designed on the basis of empirical and scientific knowledge and experience and has acceptable reliability for assessing building fire risk. Excel software was used to calculate the risk level, and the fire risk (R) was calculated separately for different units. Findings The calculated R values were less than 1 for the health, autoclave, office of nursing, and infection control units. R1 values were greater than 1 for all units. R2 values were less than 1 for the office of nursing and infection control units. Conclusion FRAME is an acceptable tool for assessing the fire risk of buildings; the fire risk is high in the Shahid Beheshti Hospital Complex of Kashan, and damage could be intolerable in the case of fire.
Guise, Jeanne-Marie; Chang, Christine; Viswanathan, Meera; Glick, Susan; Treadwell, Jonathan; Umscheid, Craig A; Whitlock, Evelyn; Fu, Rongwei; Berliner, Elise; Paynter, Robin; Anderson, Johanna; Motu'apuaka, Pua; Trikalinos, Tom
2014-11-01
The purpose of this Agency for Healthcare Research and Quality Evidence-based Practice Center methods white paper was to outline approaches to conducting systematic reviews of complex multicomponent health care interventions. We performed a literature scan and conducted semistructured interviews with international experts who conduct research or systematic reviews of complex multicomponent interventions (CMCIs) or organizational leaders who implement CMCIs in health care. Challenges identified include: lack of consistent terminology for such interventions (e.g., complex, multicomponent, multidimensional, multifactorial); a wide range of approaches used to frame the review, from grouping interventions by common features to using more theoretical approaches; decisions regarding whether and how to quantitatively analyze the interventions, from holistic to individual-component analytic approaches; and incomplete and inconsistent reporting of elements critical to understanding the success and impact of multicomponent interventions, such as the methods used for implementation and the context in which interventions are implemented. We provide a framework for the spectrum of conceptual and analytic approaches to synthesizing studies of multicomponent interventions and an initial list of critical reporting elements for such studies. This information is intended to help systematic reviewers understand the options and tradeoffs available for such reviews. Copyright © 2014 Elsevier Inc. All rights reserved.
Khan, Naima A.; Johnson, Michael D.; Carroll, Kenneth C.
2018-03-01
Recalcitrant organic contaminants, such as 1,4-dioxane, typically require advanced oxidation process (AOP) oxidants, such as ozone (O3), for their complete mineralization during water treatment. Unfortunately, the use of AOPs can be limited by these oxidants' relatively high reactivities and short half-lives. These drawbacks can be minimized by partial encapsulation of the oxidants within a cyclodextrin cavity to form inclusion complexes. We determined the inclusion complexes of O3 and three common co-contaminants (trichloroethene, 1,1,1-trichloroethane, and 1,4-dioxane) as guest compounds within hydroxypropyl-β-cyclodextrin. Both direct (ultraviolet, UV) and competitive (fluorescence changes with 6-p-toluidine-2-naphthalenesulfonic acid as the probe) methods were used, giving comparable results for the inclusion constants of these species. The impacts of changing pH and NaCl concentration were also assessed. Binding constants increased with pH and with ionic strength, which was attributed to variations in guest compound solubility. The results illustrate the versatility of cyclodextrins for inclusion complexation with various types of compounds; the binding measurement methods are applicable to a wide range of applications and have implications for both the extraction of contaminants and the delivery of reagents for the treatment of contaminants in wastewater or contaminated groundwater.
Directory of Open Access Journals (Sweden)
T. A. Kravtsova
2016-01-01
Full Text Available The paper considers the task of generating the requirements and creating a calibration target for automated microscopy systems (AMS) of biomedical specimens, to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating which color correction method is most useful for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3; the regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and set of color fields included in the calibration target on the color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time, it was established that the use of color fields from the now widely used Kodak Q60-E3 target does not
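The idea of fitting color-correction coefficients by regularized least squares can be sketched as follows (a first-order RGB model with a ridge penalty; the paper's best-performing variant uses 3rd-order coordinate combinations, and the function names here are hypothetical):

```python
import numpy as np

def fit_color_correction(measured, reference, lam=1e-3):
    """Fit an RGB -> RGB correction (first-order terms plus offset) by
    ridge-regularized least squares: A = (X^T X + lam*I)^-1 X^T Y."""
    X = np.hstack([measured, np.ones((measured.shape[0], 1))])  # add bias column
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ reference)

def apply_correction(A, rgb):
    """Map measured RGB rows through the fitted correction matrix."""
    X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return X @ A
```

Higher-order variants simply augment X with products and powers of the color coordinates before solving; the regularization parameter lam plays the role of the experimentally chosen conditioning parameter.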
Energy-based method for near-real time modeling of sound field in complex urban environments.
Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A
2012-12-01
Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no real description of the phenomena at issue has been exposed. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustic (GA) solutions, showing a good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the possibility of accurately predicting the sound field in complex urban environments in near-real-time simulations.
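A diffusion model of this kind reduces to a very simple update rule. The sketch below is one explicit finite-difference step for an acoustic-diffusion-type equation dw/dt = D∇²w − σw + q on a periodic 3-D grid (illustrative only; the paper's scheme, boundary handling, and coefficients are not reproduced here):

```python
import numpy as np

def step(w, D, dt, dx, sigma=0.0, q=None):
    """One explicit step of dw/dt = D*lap(w) - sigma*w + q on a 3-D grid
    with periodic boundaries (7-point Laplacian stencil). Stability of
    the explicit scheme requires roughly dt <= dx**2 / (6*D)."""
    lap = (np.roll(w, 1, 0) + np.roll(w, -1, 0)
           + np.roll(w, 1, 1) + np.roll(w, -1, 1)
           + np.roll(w, 1, 2) + np.roll(w, -1, 2) - 6.0 * w) / dx ** 2
    w_new = w + dt * (D * lap - sigma * w)
    if q is not None:
        w_new = w_new + dt * q   # acoustic source term
    return w_new
```

With sigma = 0 and no source, the scheme conserves total acoustic energy on a periodic grid, which is a quick sanity check on any implementation.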
Comprehensive risk assessment method of catastrophic accident based on complex network properties
Cui, Zhen; Pang, Jun; Shen, Xiaohong
2017-09-01
On the macro level, the structural properties of the network and, on the micro level, the electrical characteristics of the components determine the risk of cascading failures. Since cascading failures are a dynamically developing process, not only the direct risk but also the potential risk should be considered. In this paper, the direct and potential risks of failures are comprehensively considered on the basis of uncertain risk analysis theory and connection number theory; the uncertain correlation is quantified by node degree and node clustering coefficient, and a comprehensive risk indicator of failure is established. The proposed method was validated by simulation on an actual power grid: a network was modeled according to the actual grid, verifying the rationality of the proposed method.
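The two structural quantities used to quantify the uncertain correlation, node degree and local clustering coefficient, can be computed as follows (a plain-graph sketch; how the paper combines them with connection numbers into the final indicator is not reproduced):

```python
def degree_and_clustering(adj):
    """Node degree and local clustering coefficient from an adjacency
    dict {node: set of neighbors}. Clustering of a node is the fraction
    of its neighbor pairs that are themselves connected."""
    out = {}
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            out[v] = (k, 0.0)   # clustering undefined; use 0 by convention
            continue
        links = sum(1 for u in nbrs for w in nbrs
                    if u < w and w in adj[u])   # connected neighbor pairs
        out[v] = (k, 2.0 * links / (k * (k - 1)))
    return out
```

For a power grid, a node with high degree and low clustering is a structurally exposed hub, which is exactly the kind of node such a risk indicator should flag.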
Segmentation of Moving Object Using Background Subtraction Method in Complex Environments
Directory of Open Access Journals (Sweden)
S. Kumar
2016-06-01
Full Text Available Background subtraction is an extensively used approach to localizing a moving object in a video sequence. However, detecting an object under spatiotemporal background behavior such as rippling water, a moving curtain, illumination change, or low resolution is not a straightforward task. To deal with this problem, we propose a background maintenance scheme based on updating background pixels by estimating the current spatial variance along the temporal line. The work focuses on making the method immune to local motion variation in the background. Finally, the most suitable label assignment to the motion field is estimated and optimized using iterated conditional modes (ICM) under a Markovian framework. Performance evaluation and comparisons with other well-known background subtraction methods show that the proposed method is unaffected by the problems of aperture distortion, ghost images, and high-frequency noise.
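A running per-pixel background model with variance-based updating can be sketched as below (a common Gaussian-background scheme standing in for the paper's spatial-variance estimator; the parameters alpha and k are illustrative):

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """Per-pixel background maintenance: a pixel is foreground when it
    deviates from the background mean by more than k standard deviations;
    mean and variance are updated only where the pixel looks like background."""
    diff = frame.astype(float) - mean
    foreground = diff ** 2 > (k ** 2) * var
    bg = ~foreground
    mean[bg] += alpha * diff[bg]                          # running mean
    var[bg] = (1 - alpha) * var[bg] + alpha * diff[bg] ** 2  # running variance
    return foreground, mean, var
```

Freezing the update at foreground pixels is what prevents a slowly absorbed object from leaving a ghost in the background model.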
Complex method of radiation purification of sewage of the pulp and paper industry
International Nuclear Information System (INIS)
Petryaev, E.P.; Kovalevskaya, A.M.; Shlyk, V.G.; Vaninskaya, Yu.M.; Zhalejko, G.A.; Stebunov, O.B.
1978-01-01
A radiation-chemical method for purifying the sewage of sulphate cellulose production from dyed substances and lignin-based compounds is described. The method consists of adding small quantities of a monomer (methyl methacrylate, methyl acrylate, butyl methacrylate, acrylonitrile, acrylamide, styrene) followed by gamma irradiation. The effluents of cellulose plants were purified by 60-90% in colour and lignin compound content in the dose range 0.02-0.12 Mrad (at a dose rate of 65 rad/s) with the addition of 0.4-0.8% methyl methacrylate. The remaining monomers are also effective as additives to sewage. Ways of using the polymer wastes from radiation purification of sewage for the production of structural materials are pointed out
General algebraic method applied to control analysis of complex engine types
Boksenbom, Aaron S; Hood, Richard
1950-01-01
A general algebraic method of attack on the problem of controlling gas-turbine engines having any number of independent variables was utilized employing operational functions to describe the assumed linear characteristics for the engine, the control, and the other units in the system. Matrices were used to describe the various units of the system, to form a combined system showing all effects, and to form a single condensed matrix showing the principal effects. This method directly led to the conditions on the control system for noninteraction so that any setting disturbance would affect only its corresponding controlled variable. The response-action characteristics were expressed in terms of the control system and the engine characteristics. The ideal control-system characteristics were explicitly determined in terms of any desired response action.
Le Jeune, L.; Robert, S.; Dumas, P.; Membre, A.; Prada, C.
2015-03-01
In this paper, we propose an ultrasonic adaptive imaging method based on phased-array technology and the synthetic focusing algorithm known as the Total Focusing Method (TFM). The general principle is to image the surface by applying the TFM algorithm in a semi-infinite water medium. The reconstructed surface is then taken into account to make a second TFM image inside the component. In the surface reconstruction step, the TFM algorithm has been optimized to decrease computation time and to limit noise in the water. In the second step, the ultrasonic paths through the reconstructed surface are calculated by Fermat's principle and an iterative algorithm, and the classical TFM is applied to obtain an image inside the component. This paper presents several results of TFM imaging in components of different geometries, and a result obtained with a new technology of probes equipped with a flexible wedge filled with water (manufactured by Imasonic).
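The TFM core is a delay-and-sum over every transmit-receive pair of the full matrix capture. The sketch below covers the contact (single-medium) case only; the paper's two-step method additionally traces refracted paths through the reconstructed surface via Fermat's principle:

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Delay-and-sum TFM for a full matrix capture fmc[tx, rx, sample]:
    for each image point, sum every tx-rx signal at the sample matching
    the round-trip travel time (linear array on z=0, sound speed c,
    sampling frequency fs)."""
    elem_x = np.asarray(elem_x, dtype=float)
    n = len(elem_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            t_el = np.sqrt((elem_x - x) ** 2 + z ** 2) / c  # element-to-point times
            for tx in range(n):
                for rx in range(n):
                    s = int((t_el[tx] + t_el[rx]) * fs + 0.5)  # nearest sample
                    if s < fmc.shape[2]:
                        img[iz, ix] += fmc[tx, rx, s]
    return np.abs(img)
```

A real implementation would vectorize the inner loops and interpolate between samples, but the focusing law is exactly this double sum.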
Use of simulation methods in the evaluation of reliability and availability of complex system
International Nuclear Information System (INIS)
Maigret, N.; Duchemin, B.; Robert, T.; Villeneuve, J.J. de; Lanore, J.M.
1982-04-01
After a short review of the available standard methods in the reliability field, such as Boolean algebra for fault trees and semi-regeneration theory for Markov models, this paper shows how the BIAF code, based on a state description of a system and simulation techniques, can solve many problems. It also shows how the use of importance sampling and biasing techniques allows us to deal with the rare event problem
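The rare-event difficulty mentioned above, failure probabilities far too small for naive Monte Carlo, is classically addressed by importance sampling with likelihood-ratio reweighting. A minimal Bernoulli sketch (an illustration of the biasing idea, not the BIAF code itself):

```python
import random

def rare_event_prob(p_fail, n_samples, bias):
    """Importance-sampling estimate of a small failure probability:
    sample from a biased Bernoulli(bias) so failures occur often, then
    reweight each hit by the likelihood ratio p_fail / bias. The
    estimator is unbiased: E = bias * (p_fail / bias) = p_fail."""
    total = 0.0
    for _ in range(n_samples):
        if random.random() < bias:        # biased 'failure' draw
            total += p_fail / bias        # likelihood-ratio weight
    return total / n_samples
```

With bias much larger than p_fail, the variance of the estimate is drastically lower than sampling the true distribution directly, which is the point of biasing in reliability simulation.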
Research on Statistical Flow of the Complex Background Based on Image Method
Directory of Open Access Journals (Sweden)
Yang Huanhai
2014-06-01
Full Text Available As urbanization in our country continues to accelerate, the pressure on urban road traffic systems keeps increasing. The importance of intelligent transportation systems based on computer vision technology is therefore becoming ever more significant, and the use of image processing for vehicle detection has become a hot research topic. Only a vehicle accurately segmented from the background can be recognized and tracked; applying video vehicle detection and image processing technology to identify the number, types, and motion characteristics of vehicles in the same scene can thus provide a real-time basis for intelligent traffic control. This paper first introduces the concept of intelligent transportation systems and the importance of image processing in vehicle recognition and traffic statistics, gives an overview of video vehicle detection methods, and, comparing video detection with other detection technologies, points out the advantages of the video approach. Finally, we design a real-time and reliable background subtraction method and a vehicle recognition method based on an information fusion algorithm, implemented with the MATLAB/GUI development tool on the Windows operating system. The algorithm is applied to frame-by-frame traffic flow images. The experimental results show that the algorithm produces very good vehicle flow statistics.
Coupled Finite Volume and Finite Element Method Analysis of a Complex Large-Span Roof Structure
Szafran, J.; Juszczyk, K.; Kamiński, M.
2017-12-01
The main goal of this paper is to present a coupled Computational Fluid Dynamics and structural analysis for the precise determination of wind impact on the internal forces and deformations of the structural elements of a long-span roof structure. The Finite Volume Method (FVM) serves for the solution of the fluid flow problem, modeling the air flow around the structure, whose results are applied in turn as boundary tractions in the Finite Element Method (FEM) structural solution of the linear elastostatics problem with small deformations. The first part is carried out with the use of the ANSYS 15.0 computer system, whereas the FEM system Robot supports stress analysis in the particular roof members. A comparison of the wind pressure distribution throughout the roof surface shows some differences with respect to that available in engineering design codes like Eurocode, which deserves separate further numerical studies. The coupling of these two separate numerical techniques appears promising in view of future computational models of a stochastic nature in large-scale structural systems based on the stochastic perturbation method.
Synthesis of nanostructured LiTi2(PO4)3 powder by a Pechini-type polymerizable complex method
International Nuclear Information System (INIS)
Mariappan, C.R.; Galven, C.; Crosnier-Lopez, M.-P.; Le Berre, F.; Bohnke, O.
2006-01-01
The nanostructured NASICON-type LiTi 2 (PO 4 ) 3 (LTP) material has been synthesized by a Pechini-type polymerizable complex method. The use of a water-soluble ammonium citratoperoxotitanate (IV) metal complex instead of alkoxides as the precursor allows the preparation of monophase material. Thermal analyses were carried out on the powder precursor to check the weight loss and the synthesis temperature. X-ray powder diffraction analysis (XRD) was performed on the LTP powder obtained after heating the powder precursor over a temperature range from 550 to 1050 °C for 2 h. By varying the molar ratios of citric acid to metal ion (CA/Ti) and citric acid to ethylene glycol (CA/EG), the grain size of the LTP powder could be modified. The formation of small, well-crystallized grains, on the order of 50-125 nm in size, was determined from the XRD patterns and confirmed by transmission electron microscopy
Streher, A. S.; Cordeiro, C. L. O.; Silva, T. S. F.
2017-12-01
in the past. Our work also highlights the environmental complexity of the Amazon region, often considered as "environmentally homogeneous", and shows how environmental mapping can contribute to better understanding of the processes explaining the current assembly and distribution of Amazon biodiversity.
High performance parallel computing of flows in complex geometries: I. Methods
International Nuclear Information System (INIS)
Gourdain, N; Gicquel, L; Montagnac, M; Vermorel, O; Staffelbach, G; Garcia, M; Boussuge, J-F; Gazaix, M; Poinsot, T
2009-01-01
Efficient numerical tools, coupled with high-performance computers, have become a key element of the design process in the fields of energy supply and transportation. However, flow phenomena that occur in complex systems such as gas turbines and aircraft are still not fully understood, mainly because of the limitations of the models that are needed. In fact, most computational fluid dynamics (CFD) predictions found in industry today focus on a reduced or simplified version of the real system (such as a periodic sector) and are usually solved with a steady-state assumption. This paper shows how to overcome such barriers and how this challenge can be addressed by developing flow solvers running on high-end computing platforms, using thousands of computing cores. Parallel strategies used by modern flow solvers are discussed with particular emphasis on mesh partitioning, load balancing, and communication. Two examples are used to illustrate these concepts: a multi-block structured code and an unstructured code. The parallel computing strategies used with both flow solvers are detailed and compared. This comparison indicates that mesh partitioning and load balancing are more straightforward with unstructured grids than with multi-block structured meshes. However, the mesh-partitioning stage can be challenging for unstructured grids, mainly due to the memory limitations of newly developed massively parallel architectures. Finally, detailed investigations show that the impact of mesh partitioning on the numerical CFD solutions, due to rounding errors and block splitting, may be of importance and should be accurately addressed before qualifying massively parallel CFD tools for routine industrial use.
International Nuclear Information System (INIS)
Abazi, F.
2011-01-01
The increased level of complexity in almost every discipline and operation today raises the demand for knowledge needed to successfully run an organization, whether to generate profit or to attain a non-profit mission. The traditional way of transferring knowledge into information systems rich in data structures and complex algorithms continues to hinder the ability to swiftly turn concepts into operations. Diagrammatic modelling, commonly applied in engineering to represent concepts or reality, remains an excellent way of capturing knowledge from domain experts. The nuclear verification domain is, more than ever, a matter of great importance to world safety and security. Demand for knowledge about nuclear processes and the verification activities used to offset potential misuse of nuclear technology will intensify as the technology grows. This doctoral thesis contributes a model-based approach for representing complex processes such as nuclear inspections. The work presented also contributes to other domains characterized by knowledge-intensive and complex processes. Based on the characteristics of a complex process, a conceptual framework was established as the theoretical basis for creating a number of modelling languages to represent the domain. The integrated Safeguards Modelling Method (iSMM) is formalized through an integrated meta-model. The diagrammatic modelling languages represent the verification domain and relevant nuclear verification aspects. Such a meta-model conceptualizes the relation between practices of process management, knowledge management and domain-specific verification principles; this fusion is considered necessary in order to create quality processes. The study also extends the formalization achieved through the meta-model by contributing a formalization language based on Pattern Theory. Through the use of the graphical and mathematical constructs of the theory, process structures are formalized, enhancing
Directory of Open Access Journals (Sweden)
Vladimir Andreyevich Tsybatov
2014-12-01
Full Text Available The fuel and energy complex (FEC) is one of the main elements of the economy of any territory, in which the interests of all economic entities intertwine. To ensure the economic growth of a region, an internal balance of energy resources must be maintained, developed with account of the regional specifics of economic growth and energy security. The study examined the state of this equilibrium as reflected in the region's fuel and energy balance (FEB). The aim of the research is the development of a fuel and energy balance that makes it possible to determine exactly how many and which resources are insufficient to support the regional development strategy, and which resources need to be brought in. All issues of regional development are brought into focus in the energy balances, so the FEB is necessary both as a mechanism for analyzing current issues of economic development and, in its forward-looking version, as a tool for a future vision of the fuel and energy complex, energy threats and ways of overcoming them. The variety of relationships between the energy sector and other sectors and aspects of society means that the development of the region's fuel and energy balance has to go beyond the energy sector itself, involving the analysis of other sectors of the economy as well as systems such as the banking, budgetary, legislative and tax systems. Given the complexity of the problems discussed, there is an obvious need to develop an appropriate forecasting and analytical system that allows regional authorities to make evidence-based predictions of the consequences of management decisions, to carry out multivariant scenario studies of the development of the fuel and energy complex and of individual industries, and to apply project-based management methods with a harmonized application of state regulation of strategic and market mechanisms in the operational directions of development of the fuel and energy complex in the regional economy.
Ramos-González, R.; García-Cerda, L. A.; Alshareef, Husam N.; Gnade, Bruce E.; Quevedo-López, Manuel Angel
2010-01-01
This work reports the preparation and characterization of hafnium (IV) oxide (HfO2) nanoparticles grown by sol-gel-derived routes that involve the formation of an organic polymeric network. A comparison between the polymerized complex (PC) and polymer precursor (PP) methods is presented. For the PC method, citric acid (CA) and ethylene glycol (EG) are used as the chelating and polymerizable reagents, respectively. In the case of the PP method, poly(acrylic acid) (PAA) is used as the chelating reagent. In both cases, different precursor gels were prepared and the hafnium (IV) chloride (HfCl4) molar ratio was varied from 0.1 to 1.0 for the PC method and from 0.05 to 0.5 for the PP method. In order to obtain the nanoparticles, the precursors were heat treated at 500 and 800 °C. The thermal characterization of the precursor gels was carried out by thermogravimetric analysis (TGA), and the structural and morphological characterization by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The XRD patterns of the samples obtained by both methods show the formation of HfO2 with a monoclinic crystalline phase at 500 °C; the PC method also yielded the cubic phase. Finally, the HfO2 nanoparticle size (4 to 11 nm) was determined from TEM and the XRD patterns. © (2010) Trans Tech Publications.
International Nuclear Information System (INIS)
Bannouf, S.
2013-01-01
The goal of this thesis was, initially, to evaluate phased-array methods for ultrasonic Non-Destructive Testing (NDT) in order to propose optimizations or to develop new alternative methods. In particular, this work deals with the detection of defects in parts with complex geometries and/or materials. The TFM (Total Focusing Method) algorithm provides high-resolution images and several representations of the same defect thanks to different reconstruction modes. These properties have been exploited judiciously in order to propose an adaptive imaging method in immersion configuration. We showed that TFM imaging can be used to characterize defects more precisely. However, this method presents two major drawbacks: the large amount of data to be processed and a low signal-to-noise ratio (SNR), especially in noisy materials. We developed solutions to both problems. To overcome the limitation caused by the large number of signals to be processed, we propose an algorithm that defines the sparse array to activate. As for the low SNR, it can now be improved by the use of virtual sources and a new filtering method based on the DORT method (Decomposition of the Time Reversal Operator). (author) [fr
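At its core, the TFM is a delay-and-sum reconstruction over all transmitter-receiver pairs of a full matrix capture (FMC). The following minimal sketch is not the author's implementation: the array geometry, wave speed, pulse shape and scatterer position are illustrative assumptions. It synthesizes FMC data for a single point scatterer and then focuses every pixel of an image grid, so the image peak lands at the scatterer.

```python
import numpy as np

# --- synthetic full matrix capture (FMC) for one point scatterer (assumed setup) ---
c = 1500.0                                 # wave speed (m/s), illustrative
fs = 25e6                                  # sampling frequency (Hz)
elems = np.linspace(-8e-3, 8e-3, 16)       # 16-element linear array, x positions (m)
scat = (2e-3, 15e-3)                       # scatterer at x = 2 mm, z = 15 mm

def dist(xe, x, z):
    """Distance from element at (xe, 0) to point (x, z)."""
    return np.hypot(x - xe, z)

t = np.arange(0, 40e-6, 1 / fs)
fmc = np.zeros((len(elems), len(elems), len(t)))
for i, xt in enumerate(elems):
    for j, xr in enumerate(elems):
        tof = (dist(xt, *scat) + dist(xr, *scat)) / c
        fmc[i, j] = np.exp(-((t - tof) * 5e6) ** 2)   # Gaussian echo at the round-trip time

# --- TFM: delay-and-sum every tx/rx pair onto an image grid ---
xs = np.linspace(-6e-3, 6e-3, 49)
zs = np.linspace(10e-3, 20e-3, 41)
img = np.zeros((len(zs), len(xs)))
for i, xt in enumerate(elems):
    for j, xr in enumerate(elems):
        tof = (dist(xt, xs[None, :], zs[:, None])
               + dist(xr, xs[None, :], zs[:, None])) / c
        idx = np.clip(np.rint(tof * fs).astype(int), 0, len(t) - 1)
        img += fmc[i, j][idx]                # sum the signal sampled at each pixel's delay

peak = np.unravel_index(np.argmax(img), img.shape)   # brightest pixel ≈ scatterer
```

The sparse-array and DORT-filtering refinements of the thesis would act on which `fmc[i, j]` signals enter this double loop and on their content, respectively.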
Energy Technology Data Exchange (ETDEWEB)
Geisler-Moroder, David [Bartenbach GmbH, Aldrans (Austria); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ward, Gregory J. [Anyhere Software, Albany, NY (United States)
2016-08-29
The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high-dynamic-range (HDR) images is investigated. To accurately represent the direct sun part of the daylight not only in sensor-point simulations but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within a 10% error of measured values for most situations.
Directory of Open Access Journals (Sweden)
И Н Куринин
2016-12-01
Full Text Available This article presents the basic materials of an educational and methodical complex in a modern format (cloud-based and interactive) used in the educational process of the course "Informatics", which significantly expands the share of students' independent work in line with the increased amount of practical work (laboratory work, educational projects, essays). The workshop focuses on students mastering methods of working with personal mobile and office computers, office programs and Internet technologies, and on students acquiring the competences to solve topical applied problems. The efficiency of students' independent work is further ensured by educational and methodical tutorials (lecture notes and compilations of test tasks, exercises, models and examples of performing all tasks) developed by the authors of the article.
Cheng, Jiujun; Nordeste, Ricardo; Trainer, Maria A; Charles, Trevor C
2017-01-01
Development of different PHAs as alternatives to petrochemically derived plastics can be facilitated by mining metagenomic libraries for diverse PHA cycle genes that might be useful for synthesis of bioplastics. The specific phenotypes associated with mutations of the PHA synthesis pathway genes in Sinorhizobium meliloti and Pseudomonas putida allow the use of powerful selection and screening tools to identify complementing novel PHA synthesis genes. Identification of novel genes through their function rather than sequence facilitates finding functional proteins that may otherwise have been excluded through sequence-only screening methodology. We present here methods that we have developed for the isolation of clones expressing novel PHA metabolism genes from metagenomic libraries.
Nordeste, Ricardo F; Trainer, Maria A; Charles, Trevor C
2010-01-01
Development of different PHAs as alternatives to petrochemically derived plastics can be facilitated by mining metagenomic libraries for diverse PHA cycle genes that might be useful for synthesis of bioplastics. The specific phenotypes associated with mutations of the PHA synthesis pathway genes in Sinorhizobium meliloti allows for the use of powerful selection and screening tools to identify complementing novel PHA synthesis genes. Identification of novel genes through their function rather than sequence facilitates finding functional proteins that may otherwise have been excluded through sequence-only screening methodology. We present here methods that we have developed for the isolation of clones expressing novel PHA metabolism genes from metagenomic libraries.
Urban GHG emissions and resource flows: Methods for understanding the complex functioning of cities
International Nuclear Information System (INIS)
Yetano Roche, María
2015-01-01
This paper sums up recent developments in the concepts and methods being used to measure the impacts of cities on environmental sustainability. It differentiates between a dominant trend in the research literature that concentrates on the accounting and allocation of greenhouse gas (GHG) emissions and energy use to cities, and a re-emergence of studies focusing on direct and indirect urban material and resource flows. The availability of reliable data and standard protocols is greater in the GHG accounting field and continues to grow rapidly.
ELECTROREDUCTION MECHANISM OF Ni(DMG)2 COMPLEX STUDIED WITH QUANTUM CHEMICAL METHOD
Institute of Scientific and Technical Information of China (English)
倪亚明; 任镜清; 黎健; 王德民; 梁伟根; 朱芝仙; 高小霞
1990-01-01
The electronic structures of the species Ni(DMG)2, (Ni(DMG)2)- and (Ni(DMG)2)2- have been studied by the INDO quantum chemical method. The results clearly show that in the first stage of the electroreduction of Ni(DMG)2, one electron interacts with the d orbitals of the nickel atom, while in the further stage the second electron interacts with the p orbitals of the nitrogen atoms. This conforms with our electrochemical experimental studies, which showed that not only is Ni(II) reduced but DMG is also catalytically reduced during the reduction of Ni(DMG)2.
Complex wavenumber Fourier analysis of the B-spline based finite element method
Czech Academy of Sciences Publication Activity Database
Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav
2014-01-01
Roč. 51, č. 2 (2014), s. 348-359 ISSN 0165-2125 R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315; GA ČR GPP101/10/P376; GA ČR GA101/09/1630 Institutional support: RVO:61388998 Keywords : elastic wave propagation * dispersion errors * B-spline * finite element method * isogeometric analysis Subject RIV: JR - Other Machinery Impact factor: 1.513, year: 2014 http://www.sciencedirect.com/science/article/pii/S0165212513001479
A numerical method for complex structural dynamics in nuclear plant facilities
International Nuclear Information System (INIS)
Zeitner, W.
1979-01-01
The solution of dynamic problems is often complicated by difficulties in setting up a system of equations of motion because of the constraint conditions of the system. Such constraint conditions may be of a geometric nature, for example gaps or slidelines; they may also be compatibility conditions or thermodynamic criteria for the energy balance of a system. The numerical method proposed in this paper for the treatment of a dynamic problem with constraint conditions requires only setting up the equations of motion without considering the constraints, which always leads to a relatively simple formulation. The constraint conditions themselves are included in the integration procedure by a numerical application of Gauss' principle. (orig.)
Tan, Zhi-Zhong
2015-05-01
We develop a general recursion-transform (R-T) method for a two-dimensional resistor network with a zero-resistor boundary. As an application of the R-T method, we consider a significant example to illuminate its usefulness: calculating the resistance of a rectangular m×n resistor network with a null resistor and three arbitrary boundaries, a problem never solved before, since Green's function techniques and Laplacian matrix approaches are invalid in this case. The exact calculation of the resistance of a binary resistor network is important but difficult in the case of an arbitrary boundary, since the boundary acts like a wall or trap that affects the behavior of the finite network. In this paper we obtain several general formulas for the resistance between any two nodes in a nonregular m×n resistor network in both the finite and infinite cases. In particular, 12 special cases are given by reducing one of the general formulas in order to illustrate its applications and meaning, and an integral identity is found when we compare the equivalent resistance of two different structures of the same problem in a resistor network.
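For regular networks without the pathological boundaries discussed above, the two-point resistance can be computed numerically from the graph Laplacian via R_ab = L⁺_aa + L⁺_bb − 2·L⁺_ab. The sketch below is the standard Laplacian-matrix approach that the paper contrasts with (not the R-T method itself), shown for a small grid of unit resistors as an assumed example:

```python
import numpy as np

def grid_laplacian(m, n):
    """Laplacian of an m x n grid of nodes joined by unit resistors."""
    N = m * n
    L = np.zeros((N, N))
    for r in range(m):
        for s in range(n):
            u = r * n + s
            for dr, ds in ((0, 1), (1, 0)):      # right and down neighbours
                rr, ss = r + dr, s + ds
                if rr < m and ss < n:
                    v = rr * n + ss
                    L[u, u] += 1; L[v, v] += 1
                    L[u, v] -= 1; L[v, u] -= 1
    return L

def resistance(L, a, b):
    """Two-point resistance from the Moore-Penrose pseudoinverse of L."""
    Lp = np.linalg.pinv(L)
    return Lp[a, a] + Lp[b, b] - 2 * Lp[a, b]

# square of four unit resistors: adjacent nodes see 1 ohm in parallel with 3 ohms
R = resistance(grid_laplacian(2, 2), 0, 1)   # -> 0.75
```

The R-T method of the paper replaces this O(N³) matrix computation with closed-form recursion formulas that remain valid when a null-resistor boundary makes L singular in a way the pseudoinverse approach cannot handle.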
An auxiliary optimization method for complex public transit route network based on link prediction
Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian
2018-02-01
Inspired by missing (new) link prediction and spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route networks (PTRN) based on link prediction. First, link prediction applied to PTRN is described and, based on a review of previous studies, a set of summary indices and their algorithms is collected for the link prediction experiment. Second, by analyzing the topological properties of Jinan's PTRN established by the Space R method, we found that it is a typical small-world network with a relatively large average clustering coefficient. This indicates that structural-similarity-based link prediction will perform well in this network. Then, based on the link prediction experiment over the summary indices set, the three indices with maximum accuracy are selected for auxiliary optimization of Jinan's PTRN. Furthermore, these link prediction results show that the overall layout of Jinan's PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. This pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on missing (new) link prediction and spurious existing link identification, we propose optimization schemes that can be used not only to optimize the current PTRN but also to evaluate PTRN planning.
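The structural-similarity scoring at the heart of such an experiment is compact. This sketch uses a toy five-node graph (not Jinan's PTRN) and the resource-allocation index, one common member of the summary indices set; high scores on non-adjacent pairs suggest missing links, while low scores on existing links would flag possibly spurious ones:

```python
# toy undirected network as adjacency sets (hypothetical, for illustration)
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def resource_allocation(x, y):
    """RA similarity: common neighbours weighted by inverse degree."""
    return sum(1 / len(adj[z]) for z in adj[x] & adj[y])

# rank all non-adjacent pairs; the top-ranked pair is the best missing-link candidate
pairs = [(x, y) for x in adj for y in adj if x < y and y not in adj[x]]
ranked = sorted(pairs, key=lambda p: resource_allocation(*p), reverse=True)
```

Here the pair ("A", "D") ranks first because it shares two well-connected common neighbours, mirroring how the paper identifies candidate route segments to add.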
Directory of Open Access Journals (Sweden)
Jiacheng Xie
2017-01-01
Full Text Available In a fully mechanized coal-mining face, the positioning and attitude measurements of the shearer and scraper conveyor are inaccurate. To overcome this problem, a joint positioning and attitude solving method that considers the effect of an uneven floor is proposed, and the real-time connection and coupling relationship between the two devices is analyzed. Two types of sensors, a tilt sensor and a strapdown inertial navigation system (SINS), are used to measure the shearer body pitch angle and the scraper conveyor shape, respectively. To improve accuracy, the two pieces of information are fused using an adaptive information fusion algorithm. Using a marking strategy, the shearer body pitch angle can be reversely mapped to the real-time shape of the scraper conveyor. Then, virtual-reality (VR) software that can visually simulate the entire operation process under different conditions is developed. Finally, experiments are conducted on a prototype experimental platform. The positioning error is found to be less than 0.38 times the middle trough length, and no accumulated error is detected. This method can monitor the operation of the shearer and scraper conveyor in a highly dynamic and precise manner and provides strong technical support for the safe and efficient operation of a fully mechanized coal-mining face.
Effective management models and methods of economic educations in regional industrial complexes
Directory of Open Access Journals (Sweden)
Andrey Gennadyevich Butrin
2014-06-01
Full Text Available In this article, the methodical bases of managing integrated industrial enterprises are developed according to indicators of sustainable economic development of the region. The scope of the research is the region as a complex mesosystem consisting of logistic clusters. The subject matter of the research is the organizational and economic relations developing in the course of interaction between participants of the regional economy as mesosystems. Models and methods of managing large economic systems in the economy of an industrially developed region are developed, and the organizational and economic essence of a logistic cluster as a subject of the regional economy is revealed. A mechanism for managing the integrated enterprises using the cluster approach, logistics technologies and supply chain management is offered. These allow the management of enterprises to make scientifically grounded, effective decisions, developing programs of supply, production and sales of finished goods in close connection with regional economic development programs.
Directory of Open Access Journals (Sweden)
R. Pirjola
1998-11-01
Full Text Available The electromagnetic field due to ionospheric currents has to be known when evaluating space weather effects at the earth's surface. Forecasting methods of these effects, which include geomagnetically induced currents in technological systems, are being developed. Such applications are time-critical, so the calculation techniques of the electromagnetic field have to be fast but still accurate. The contribution of secondary sources induced within the earth leads to complicated integral formulas for the field at the earth's surface with a time-consuming computation. An approximate method of calculation based on replacing the earth contribution by an image source having mathematically a complex location results in closed-form expressions and in a much faster computation. In this paper we extend the complex image method (CIM) to the case of a more realistic electrojet system consisting of a horizontal line current filament with vertical currents at its ends above a layered earth. To be able to utilize previous CIM results, we prove that the current system can be replaced by a purely horizontal current distribution which is equivalent regarding the total (= primary + induced) magnetic field and the total horizontal electric field at the earth's surface. The latter result is new. Numerical calculations demonstrate that CIM is very accurate and several magnitudes faster than the exact conventional approach.
Key words: Electromagnetic theory · Geomagnetic induction · Auroral ionosphere
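The closed-form flavour of the CIM can be sketched for the simplest geometry: a line current at height h above a uniform conducting half-space, whose induced contribution is approximated by an image current at the complex depth h + 2p, with p = 1/√(iωμ₀σ) the complex skin depth. This is a simplification of the layered-earth electrojet system treated in the paper, and the current, height, conductivity and frequency below are illustrative assumptions:

```python
import cmath, math

mu0 = 4e-7 * math.pi          # vacuum permeability (H/m)
I, h = 1000.0, 100e3          # line current (A) at 100 km altitude (assumed)
sigma, f = 1e-3, 0.01         # ground conductivity (S/m) and frequency (Hz), assumed
omega = 2 * math.pi * f
p = 1 / cmath.sqrt(1j * omega * mu0 * sigma)   # complex skin depth (m)

def Bx_surface(x):
    """Horizontal surface magnetic field: source term plus complex-image term."""
    src = h / (h**2 + x**2)                    # field of the line current itself
    img = (h + 2 * p) / ((h + 2 * p)**2 + x**2)  # image current at complex depth
    return mu0 * I / (2 * math.pi) * (src + img)
```

Sanity checks on the limits: over a perfect conductor (σ → ∞, p → 0) the image term equals the source term and the horizontal field doubles, while over an insulator (σ → 0, |p| → ∞) the image term vanishes; the total field therefore lies between μ₀I/(2πh) and μ₀I/(πh) directly below the current.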
Hickey, John M; Sahni, Neha; Toth, Ronald T; Kumru, Ozan S; Joshi, Sangeeta B; Middaugh, C Russell; Volkin, David B
2016-10-01
Liquid chromatographic methods, combined with mass spectrometry, offer exciting and important opportunities to better characterize complex vaccine antigens, including recombinant proteins, virus-like particles, inactivated viruses, polysaccharides, and protein-polysaccharide conjugates. The current abilities and limitations of these physicochemical methods to complement traditional in vitro and in vivo vaccine potency assays are explored in this review through illustrative case studies. Various applications of these state-of-the-art techniques are illustrated, including the analysis of influenza vaccines (inactivated whole virus and recombinant hemagglutinin), virus-like particle vaccines (human papillomavirus and hepatitis B), and polysaccharide-protein carrier conjugate vaccines (pneumococcal). Examples of using these analytical methods to characterize vaccine antigens in the presence of adjuvants, which are often included to boost immune responses as part of the final vaccine dosage form, are also presented. Some of the challenges of using chromatographic and LC-MS methods as physicochemical assays to routinely test complex vaccine antigens are also discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Investigating flow patterns in a channel with complex obstacles using the lattice Boltzmann method
Energy Technology Data Exchange (ETDEWEB)
Yojina, Jiraporn; Ngamsaad, Waipot; Nuttavut, Narin; Triampo, Darapond; Lenbury, Yongwimon; Sriyab, Somchai; Triampo, Wannapong [Faculty of Science, Mahidol University, Bangkok (Thailand); Kanthang, Paisan [Rajamangala University of Technology, Bangkok (Thailand)
2010-10-15
In this work, mesoscopic modeling via a computational lattice Boltzmann method (LBM) is used to investigate the flow pattern phenomena and the physical properties of the flow field around one and two square obstacles with a fixed blockage ratio, β = 1/4, centered inside a two-dimensional channel, for a range of Reynolds numbers (Re) from 1 to 300. The simulation results show that the flow initially exhibits a laminar pattern at low Re and then makes a transition to periodic, unsteady, and, finally, turbulent flow as Re increases. Streamlines, velocity profiles and a vortex shedding pattern are observed. The Strouhal numbers are calculated to characterize the shedding frequency and flow dynamics. The effect of the layouts or configurations of the obstacles is also investigated, and the possible connection between the mixing process and the appropriate design of a chemical mixing system is discussed.
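A minimal D2Q9 lattice Boltzmann sketch of this kind of setup is shown below: a single square obstacle (full-way bounce-back) in a periodic channel with BGK collision. The grid size, relaxation time and inflow velocity are illustrative assumptions, not the authors' parameters, and the real study would add proper inlet/outlet and wall boundary conditions:

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite-direction table
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opposite = [0, 3, 4, 1, 2, 7, 8, 5, 6]

def equilibrium(rho, ux, uy):
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * w[:, None, None] * (1 + cu + 0.5 * cu**2 - usq)

nx, ny, tau = 100, 40, 0.6                 # lattice size and BGK relaxation time
obstacle = np.zeros((nx, ny), bool)
obstacle[20:30, 15:25] = True              # 10-cell square in a 40-cell channel: beta = 1/4

rho = np.ones((nx, ny))
ux = np.full((nx, ny), 0.05)               # small uniform initial flow
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(200):
    # streaming (periodic via np.roll)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # full-way bounce-back on the obstacle nodes
    bounced = f[:, obstacle].copy()
    for i in range(9):
        f[i, obstacle] = bounced[opposite[i]]
    # macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision toward local equilibrium
    f += -(f - equilibrium(rho, ux, uy)) / tau
```

Streaming, bounce-back and BGK collision each conserve mass exactly, so the total density stays at its initial value up to floating-point error; vortex shedding and Strouhal-number measurements would come from running such a loop at higher Re with proper boundaries.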
The methods and algorithms for designing complex three-dimensional robots
International Nuclear Information System (INIS)
Solovjev, A.E.; Naumov, V.B.
1996-01-01
For design automation, the Robotics laboratory carried out several fundamental and applied studies. This research allowed the creation of a rational mathematical model for numeric modeling with real-time simulation. The mathematical model uses the equations of motion of rigid bodies in Lagrange's form and the set of Appel's equations, taking into consideration holonomic and non-holonomic connections. The present article considers methods and algorithms for the dynamic modeling of a system of rigid bodies for robotics tasks, together with a brief description of the Computer Aided Engineering for Industrial Robots package based on these algorithms. Since in robotics research the dynamic tasks (direct and inverse) are of greater interest than other tasks, the authors concentrate on these problems.
Hybrid RANS/LES method for high Reynolds numbers, applied to atmospheric flow over complex terrain
DEFF Research Database (Denmark)
Bechmann, Andreas; Sørensen, Niels N.; Johansen, Jeppe
2007-01-01
The use of Large-Eddy Simulation (LES) to predict wall-bounded flows has so far been limited to low-Reynolds-number flows. Since the number of computational grid points required to resolve the near-wall turbulent structures increases rapidly with Reynolds number, LES has been unattainable...... for flows at high Reynolds numbers. To reduce the computational cost of traditional LES, a hybrid method is proposed in which the near-wall eddies are modelled in a Reynolds-averaged sense. Close to walls the flow is treated with the RANS equations, and this layer acts as a wall model for the outer flow handled...... by LES. The well-known high-Reynolds-number two-equation k-ε turbulence model is used in the RANS layer, and the model automatically switches to a two-equation k-ε subgrid-scale stress model in the LES region. The approach can be used for flow over rough walls. To demonstrate the ability
Comparison of matrix method and ray tracing in the study of complex optical systems
Anterrieu, Eric; Perez, Jose-Philippe
2000-06-01
In the context of the classical study of optical systems within the geometrical Gauss approximation, the cardinal elements are efficiently obtained with the aid of the transfer matrix between the input and output planes of the system. In order to take geometrical aberrations into account, a ray-tracing approach using the Snell-Descartes laws has been implemented in interactive software. Both methods are applied to measuring the correction needed for a human eye suffering from ametropia. This software may be used by optometrists and ophthalmologists for solving the problems encountered when considering this pathology. The ray-tracing approach gives a significant improvement and could be very helpful for a better understanding of an eventual surgical act.
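The transfer-matrix calculation mentioned above reduces to products of 2×2 matrices. The sketch below uses a hypothetical two-thin-lens system, not the paper's eye model, to show how the equivalent power (and hence the cardinal elements) drops out of the system matrix:

```python
import numpy as np

def refraction(power):
    """Thin lens / refracting surface of given power (diopters)."""
    return np.array([[1.0, 0.0], [-power, 1.0]])

def translation(d):
    """Free propagation over (reduced) distance d in meters."""
    return np.array([[1.0, d], [0.0, 1.0]])

# hypothetical system: f1 = 100 mm and f2 = 50 mm thin lenses, 60 mm apart
M = refraction(1 / 0.050) @ translation(0.060) @ refraction(1 / 0.100)

equiv_power = -M[1, 0]     # system power is minus the lower-left element
f_eff = 1 / equiv_power    # effective focal length of the combination
```

The result agrees with Gullstrand's equation P = P1 + P2 − d·P1·P2 = 10 + 20 − 0.06·200 = 18 D, i.e. f ≈ 55.6 mm; the ray-tracing approach of the paper then quantifies how real rays deviate from this paraxial prediction.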
International Nuclear Information System (INIS)
Klose, G.
1999-01-01
Lyotropic mesophases possess lattice dimensions of the order of magnitude of the length of their molecules. Consequently, the first Bragg reflections of such systems appear at small scattering angles (small-angle scattering). A combination of scattering and NMR methods was applied to study structural properties of POPC/C12En mixtures. In general, the ranges of existence of the liquid-crystalline lamellar phase, the dimension of the unit cell of the lamellae and important structural parameters of the lipid and surfactant molecules in the mixed bilayers were determined. With that, the POPC/C12E4 bilayer represents one of the best structurally characterized mixed model membranes. It is a good starting system for studying the interrelation with other, e.g. dynamic or thermodynamic, properties. (K.A.)
Use of the Delphi method in resolving complex water resources issues
Taylor, J.G.; Ryder, S.D.
2003-01-01
The tri-state river basins, shared by Georgia, Alabama, and Florida, are being modeled by the U.S. Fish and Wildlife Service and the U.S. Army Corps of Engineers to help facilitate agreement in an acrimonious water dispute among these different state governments. Modeling of such basin reservoir operations requires parallel understanding of several river system components: hydropower production, flood control, municipal and industrial water use, navigation, and reservoir fisheries requirements. The Delphi method, using repetitive surveying of experts, was applied to determine fisheries' water and lake-level requirements on 25 reservoirs in these interstate basins. The Delphi technique allowed the needs and requirements of fish populations to be brought into the modeling effort on equal footing with other water supply and demand components. When the subject matter is concisely defined and limited, this technique can rapidly assess expert opinion on any natural resource issue, and even move expert opinion toward greater agreement.
DEFF Research Database (Denmark)
Zhang, Jiaying; Pivnenko, Sergey; Breinbjerg, Olav
2010-01-01
In this paper, the application of a modified Wheeler cap method for the radiation efficiency measurement of balanced electrically small antennas is presented. It is shown that the limitations on the cavity dimension can be overcome and thus measurement in a large cavity is possible. The cavity loss...... is investigated, and a modified radiation-efficiency formula that includes the cavity loss is introduced. Moreover, a modification of the technique is proposed that involves the antenna's working complex environment inside the Wheeler cap and thus makes possible the measurement of an antenna close to a hand or head...
Toropov, Andrey A.; Toropova, Alla P.
2018-06-01
A predictive model of logP for Pt(II) and Pt(IV) complexes, built with the Monte Carlo method using the CORAL software, has been validated with six different splits into training and validation sets. Improvement of the predictive potential of the models for the six splits was obtained using the so-called index of ideality of correlation. The suggested models make it possible to extract the molecular features that cause an increase or, conversely, a decrease of logP.
International Nuclear Information System (INIS)
Kawai, Y.; Alton, G.D.; Bilheux, J.-C.
2005-01-01
An inexpensive, fast and nearly universal infiltration-coating technique has been developed for fabricating fast diffusion-release ISOL targets. Targets are fabricated by deposition of finely divided (∼1 μm) compound materials in a paint slurry onto highly permeable, complex-structure reticulated-vitreous-carbon-foam (RVCF) matrices, followed by thermal heat treatment. In this article, we describe the coating method and present information on the physical integrity, uniformity of deposition and matrix adherence of SiC, HfC and UC2 targets destined for on-line use at the Holifield Radioactive Ion Beam Facility (HRIBF)
International Nuclear Information System (INIS)
Maddalena, D.J.; Snowdon, G.M.; Pojer, P.M.
1987-01-01
A simple new technique, in which stannous tin is adsorbed on the inner surface of plastic tubing and used to reduce (99mTc) pertechnetate prior to labelling radiopharmaceuticals, has been evaluated using some lipophilic and metal-containing ligands. Complexes formed using the technique had good labelling efficiency and behaved the same in rat biodistribution studies as those prepared using conventional labelling methods. The labelling efficiency of the ligands was not related to their lipophilicity, suggesting that this technique may be useful for labelling lipophilic and other difficult ligands, such as those containing metals, which are incompatible with free stannous ions in solution. (M.E.L.) [es
International Nuclear Information System (INIS)
Fawcett, B.C.; Mason, H.E.
1989-02-01
This report presents details of a new method to enable the computation of collision strengths for complex ions, adapted from long-established optimisation techniques previously applied to the calculation of atomic structures and oscillator strengths. The procedure involves adjusting Slater parameters so that they yield improved energy levels and eigenvectors, which provide a basis for collision-strength calculations in ions where ab initio computations break down or result in reducible errors. The application is demonstrated through modifications of the DISTORTED WAVE collision code and the SUPERSTRUCTURE atomic-structure code, which interface via the transformation code JAJOM, which processes their output. (author)
McCann, Damhnat; Bull, Rosalind; Winzenberg, Tania
2015-02-01
A significant number of children with a range of complex conditions and health care needs are being cared for by parents in the home environment. This mixed methods systematic review aimed to determine the amount of sleep obtained by these parents and the extent to which the child-related overnight health or care needs affected parental sleep experience and daily functioning. Summary statistics were not able to be determined due to the heterogeneity of included studies, but the common themes that emerged are that parents of children with complex needs experience sleep deprivation that can be both relentless and draining and affects the parents themselves and their relationships. The degree of sleep deprivation varies by diagnosis, but a key contributing factor is the need for parents to be vigilant at night. Of particular importance to health care professionals is the inadequate overnight support provided to parents of children with complex needs, potentially placing these parents at risk of poorer health outcomes associated with sleep deprivation and disturbance. This needs to be addressed to enable parents to remain well and continue to provide the care that their child and family require. © The Author(s) 2014.
Directory of Open Access Journals (Sweden)
Susu Nousala
2010-12-01
This paper examines the development of sustainable SME methods for tracking tacit (informal) knowledge transfer as a series of networks within a larger complex system. Understanding sustainable systems begins with valuing tacit knowledge networks and their ability to produce connections on multiple levels. The behaviour of the social (or socio) aspects of a system in relation to the explicit formal/physical structures needs to be understood and actively considered when applying methodologies for interacting within complex systems structures. This paper draws on theory from several previous studies to underpin the key case study discussed. The approach involved examining the behavioural phenomena of an SME knowledge network, whose elements were highlighted to identify their value within an SME structure. To understand the value of these emergent elements between tacit and explicit knowledge networks is to actively, simultaneously and continuously support sustainable development for SME organizations. The simultaneous links within and between groups of organizations are crucial for understanding the sustainable networking structures of complex systems.
International Nuclear Information System (INIS)
Corley, J.P.; Baker, D.A.; Hill, E.R.; Wendell, L.L.
1977-09-01
To simplify the calculation of potential long-distance environmental impacts, an overall average population exposure coefficient (P.E.C.) for the entire contiguous United States was calculated for releases to the atmosphere from Hanford facilities. The method, requiring machine computation, combines Bureau of Census population data by census enumeration district with an annual average atmospheric dilution factor (χ̄/Q′) derived from 12-hourly gridded wind analyses provided by NOAA's National Meteorological Center. A variable-trajectory puff-advection model was used to calculate an hourly χ̄/Q′ for each grid square, assuming uniform hourly releases; seasonal and annual averages were then calculated. For Hanford, using 1970 census data, a P.E.C. of 2 × 10⁻³ man-seconds per cubic meter was calculated. The P.E.C. is useful for both radioactive and nonradioactive releases. To calculate population doses for the entire contiguous United States, the P.E.C. is multiplied by the annual average release rate and then by the dose factor (rem/yr per Ci/m³) for each radionuclide, and the dose contribution in man-rem is summed over all radionuclides. For multiple pathways, the P.E.C. is still useful, provided that doses from a unit release can be obtained from a set of atmospheric dose factors. The methodology is applicable to any point source, any set of population data by map grid coordinates, and any geographical area covered by equivalent meteorological data
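The dose summation described above can be sketched as follows. Only the P.E.C. value and the unit algebra come from the abstract; the nuclides, release rates and dose factors below are hypothetical placeholders, not Hanford data:

```python
# P.E.C. quoted for Hanford with 1970 census data
PEC = 2e-3  # man-seconds per cubic meter

# hypothetical annual-average release rates (Ci/s) and atmospheric
# dose factors (rem/yr per Ci/m^3) for two illustrative radionuclides
releases     = {"Kr-85": 1.0e-6, "H-3": 5.0e-7}
dose_factors = {"Kr-85": 2.0e-2, "H-3": 1.0e-3}

# population dose (man-rem/yr) = sum over nuclides of
#   P.E.C. * release rate * dose factor
population_dose = sum(PEC * releases[n] * dose_factors[n] for n in releases)
```

The same two-factor product generalizes to multiple pathways whenever unit-release doses are available as a set of atmospheric dose factors.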
The Complexities of Family Caregiving at Work: A Mixed-Methods Study.
Gaugler, Joseph E; Pestka, Deborah L; Davila, Heather; Sales, Rebecca; Owen, Greg; Baumgartner, Sarah A; Shook, Rocky; Cunningham, Jane; Kenney, Maureen
2018-01-01
The current project examined the impact of caregiving and caregiving-work conflict on employees' well-being. A sequential explanatory mixed-methods design (QUAN→qual) was utilized, and a total of 880 employees from a large health-care plan employer completed an online survey. Forty-five caregivers who completed the survey also participated in one of five focus groups held 1 to 2 months later. Employed caregivers were significantly (p < .05) more likely to indicate poorer physical and mental health than noncaregivers; among caregivers (n = 370), caregiving-work conflict emerged as the most significant predictor of well-being and fully mediated the empirical relationship between burden and well-being. The focus group findings complemented the quantitative results; many of the challenges employed caregivers experience stem from their ability or inability to effectively balance their employment and caregiving roles. The results suggest the need to focus on caregiving-work conflict when constructing new or translating existing evidence-based caregiver interventions.
Complexity of major UK companies between 2006 and 2010: Hierarchical structure method approach
Ulusoy, Tolga; Keskin, Mustafa; Shirvani, Ayoub; Deviren, Bayram; Kantar, Ersin; Çaǧrı Dönmez, Cem
2012-11-01
This study reports on the topology of the top 40 UK companies, analysed for predictive verification of markets over the period 2006-2010 using minimal spanning tree (MST) and hierarchical tree (HT) analysis. Construction of the MST and HT is confined to a brief description of the methodology and a definition of the correlation function between a pair of companies, based on the London Stock Exchange (LSE) index, used to quantify synchronization between the companies. Hierarchical organization was derived, and minimal spanning and hierarchical trees were constructed for the 2006-2008 and 2008-2010 periods; the results validate the predictive verification of the applied semantics. These trees are useful tools for perceiving and detecting the global structure, taxonomy and hierarchy in financial data. From the trees, two distinct clusters of companies were detected in 2006; three clusters appear in 2008 and two between 2008 and 2010, according to their proximity. The clusters match each other as regards their common production activities or their strong interrelationship. The key companies correspond, as expected, to major economic activities. This work offers a comparative approach between the MST and HT methods from statistical physics and information theory in the analysis of financial markets, which may give new, valuable and useful information about financial market dynamics.
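A minimal sketch of the MST construction, assuming the correlation-to-distance transform d_ij = sqrt(2(1 - rho_ij)) that is standard in this literature; the return series are synthetic, not LSE data:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
# synthetic daily log-returns for 6 hypothetical companies (not LSE data)
returns = rng.standard_normal((250, 6))
returns[:, 1] += 0.8 * returns[:, 0]      # make companies 0 and 1 co-move

rho = np.corrcoef(returns, rowvar=False)  # pairwise correlation matrix
dist = np.sqrt(2.0 * (1.0 - rho))         # distance metric from correlations
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)         # sparse matrix of the tree's edges
n_edges = mst.nnz                         # a tree on 6 nodes has 5 edges
```

Strongly correlated companies sit at small distances, so the co-moving pair above is guaranteed to be joined by a tree edge; clusters then emerge as connected subgraphs of short edges.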
International Nuclear Information System (INIS)
Kipnis, M.A.; Korsunskij, G.M.; Mironenko, A.F.
1984-01-01
It has been suggested that the optimum regime of radiography be determined using the method of sequential simplex planning, which is convenient in cases where the result of testing cannot be expressed quantitatively. Moreover, there is no need to duplicate experiments, since even gross errors are automatically corrected by further simplex motion. A plan and results of an experimental determination of the optimum regime of product radiography using the RUP-120-5-1 X-ray apparatus are presented. In the experiments described, the voltage, current and radiography duration are varied. The quality of the X-ray images is evaluated on a conventional ten-point scale, taking into account the quality of each projection. It has been established that applying simplex planning to determine regimes of X-ray radiography for different types of products makes it possible to obtain high-quality roentgenograms while reducing the consumption of photomaterials and considerably decreasing the time spent on laboratory tests
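One step of basic sequential simplex planning can be sketched as reflecting the worst-scoring vertex through the centroid of the remaining vertices; the vertices (voltage, current, exposure time) and image-quality scores below are hypothetical, not the reported regimes:

```python
import numpy as np

def reflect_worst(simplex, scores):
    """One move of the basic sequential simplex method: discard the
    worst-scoring vertex and reflect it through the centroid of the
    rest. Higher score = better image quality (ten-point scale)."""
    simplex = np.asarray(simplex, float)
    worst = int(np.argmin(scores))
    rest = np.delete(simplex, worst, axis=0)
    centroid = rest.mean(axis=0)
    reflected = 2.0 * centroid - simplex[worst]   # mirror the worst point
    new_simplex = np.vstack([rest, reflected])
    return new_simplex, reflected

# hypothetical vertices: (voltage kV, current mA, exposure s)
simplex = [[100, 4.0, 60], [110, 4.5, 55], [105, 5.0, 50], [108, 4.2, 65]]
scores  = [6, 8, 7, 5]    # ten-point image-quality scores for each regime
new_simplex, trial = reflect_worst(simplex, scores)
```

Because only an ordinal ranking of the vertices is needed, the method works even when image quality is scored subjectively, which is exactly the situation described in the abstract.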
Gajdosik, Martina Srajer; Clifton, James; Josic, Djuro
2012-01-01
Sample displacement chromatography (SDC) in reversed-phase and ion-exchange modes was introduced approximately twenty years ago. This method takes advantage of the relative binding affinities of the components in a sample mixture. During loading, different sample components compete for sorption on the surface of the stationary phase. SDC was first used for the preparative purification of proteins. Later, it was demonstrated that this kind of chromatography can also be performed in ion-exchange, affinity and hydrophobic-interaction modes. It has also been shown that SDC can be performed on monoliths and membrane-based supports at both analytical and preparative scales. Recently, SDC in ion-exchange and hydrophobic-interaction modes was also employed successfully for the removal of trace proteins from monoclonal antibody preparations and for the enrichment of low-abundance proteins from human plasma. In this review, the principles of SDC are introduced, and its potential for the separation of proteins and peptides at micro-analytical, analytical and preparative scales is discussed. PMID:22520159
The method of complex evaluation of management in the sphere of housing and communal services
Directory of Open Access Journals (Sweden)
Okhotina Svetlana
2017-01-01
Many researchers have considered the quality of management, but a definition of "quality of management" is quite rare in the literature. The author offers a definition of this concept with the following distinctive features: management is considered as a system, and its quality is determined by the quality of the elements of that system; management is of high quality if it provides not only the functioning but also the development of the facility; and the quality of management is measured by customer satisfaction. The authors' study of market relations in the sphere of housing and communal services helped to define the modern management model for the housing and communal services industry, which involves preserving state regulation and control while bringing private operators to the market. The study identified the need for further sustained efforts to implement the new economic relations in the housing and communal services system at all levels of government, which requires further improvement of management. Based on the authors' analysis of methods for evaluating the activities of management companies, it was found that, despite their diversity, they all share several disadvantages, the main one being the lack of standard indicators by which to judge to what extent housing management organizations implement the adopted programme. Therefore, the author proposes two sets of criteria (representing the result of control and management efficiency) by which the quality of management of the organization will be monitored and evaluated.
Quasiparticle self-consistent GW method for the spectral properties of complex materials.
Bruneval, Fabien; Gatti, Matteo
2014-01-01
The GW approximation to the formally exact many-body perturbation theory has been applied successfully to materials for several decades. Since the practical calculations are extremely cumbersome, the GW self-energy is most commonly evaluated using a first-order perturbative approach: This is the so-called G₀W₀ scheme. However, the G₀W₀ approximation depends heavily on the mean-field theory that is employed as a basis for the perturbation theory. Recently, a procedure to reach a kind of self-consistency within the GW framework has been proposed. The quasiparticle self-consistent GW (QSGW) approximation retains some positive aspects of a self-consistent approach, but circumvents the intricacies of the complete GW theory, which is inconveniently based on a non-Hermitian and dynamical self-energy. This new scheme allows one to surmount most of the flaws of the usual G₀W₀ at a moderate calculation cost and at a reasonable implementation burden. In particular, the issues of small band gap semiconductors, of large band gap insulators, and of some transition metal oxides are then cured. The QSGW method broadens the range of materials for which the spectral properties can be predicted with confidence.
Sorting it out: bedding particle size and nesting material processing method affect nest complexity.
Robinson-Junker, Amy; Morin, Amelia; Pritchett-Corning, Kathleen; Gaskill, Brianna N
2017-04-01
As part of routine husbandry, an increasing number of laboratory mice receive nesting material in addition to standard bedding material in their cages. Nesting material improves health outcomes and physiological performance in mice that receive it. Providing usable nesting material uniformly and efficiently to various strains of mice remains a challenge. The aim of this study was to determine how bedding particle size, method of nesting material delivery, and processing of the nesting material before delivery affected nest building in mice of strong (BALB/cAnNCrl) and weak (C3H/HeNCrl) gathering abilities. Our data suggest that processing nesting material through a grinder in conjunction with bedding material, although convenient for provision of bedding with nesting material 'built-in', negatively affects the integrity of the nesting material and subsequent nest-building outcomes. We also found that C3H mice, previously thought to be poor nest builders, built similarly scored nests to those of BALB/c mice when provided with unprocessed nesting material. This was true even when nesting material was mixed into the bedding substrate. We also observed that when nesting material was mixed into the bedding substrate, mice of both strains would sort their bedding by particle size more often than if it were not mixed in. Our findings support the utility of the practice of distributing nesting material mixed in with bedding substrate, but not that of processing the nesting material with the bedding in order to mix them.
Wittmeier, Kristy D M; Restall, Gayle; Mulder, Kathy; Dufault, Brenden; Paterson, Marie; Thiessen, Matthew; Lix, Lisa M
2016-08-31
Children with complex needs can face barriers to system access and navigation related to their need for multiple services and healthcare providers. Central intake for pediatric rehabilitation was developed and implemented in 2008 in Winnipeg, Manitoba, Canada as a means to enhance service coordination and access for children and their families. This study evaluates the process and impact of implementing a central intake system, using pediatric physiotherapy as a case example. A mixed-methods instrumental case study design was used. Interviews were completed with 9 individuals. Data were transcribed and analyzed for themes. Quantitative data (wait times, referral volume and caregiver satisfaction) were collected for children referred to physiotherapy with complex needs (n = 1399), and a comparison group of children referred for orthopedic concerns (n = 3901). Wait times were analyzed using the Kruskal-Wallis test, caregiver satisfaction was analyzed using the Fisher exact test, and change-point modeling was applied to examine referral volume over the study period. Interview participants described central intake implementation as creating more streamlined processes. Factors that facilitated successful implementation included 1) agreement among stakeholders, 2) hiring of a central intake coordinator, 3) a financial commitment from the government and 4) leadership at the individual and organization levels. Mean (SD) wait times improved for children with complex needs (12.3 (13.1) to 8.0 (6.9) days from referral to contact with the family), suggesting that central intake improved access to physiotherapy (i.e., decreasing wait times) for families of children with complex needs. Future research is needed to build on this single-discipline case study approach to examine changes in wait times, therapy coordination and stakeholder satisfaction within the context of continuing improvements for pediatric therapy services within the province.
Directory of Open Access Journals (Sweden)
Hassan Badreddine
2017-01-01
The current work focuses on the development and application of a new finite volume immersed boundary method (IBM) to simulate three-dimensional fluid flows and heat transfer around complex geometries. First, the discretization of the governing equations, based on a second-order finite volume method on a Cartesian, structured, staggered grid, is outlined, followed by a description of the modifications that have to be applied to the discretized system once a body is immersed into the grid. To validate the new approach, the heat conduction equation with a source term is solved inside a cavity with an immersed body. The approach is then tested for a natural convection flow in a square cavity with and without a circular cylinder for different Rayleigh numbers. The results computed with the present approach compare very well with the benchmark solutions. As a next step in the validation procedure, the method is tested for Direct Numerical Simulation (DNS) of a turbulent flow around a surface-mounted matrix of cubes. The results computed with the present method compare very well with Laser Doppler Anemometry (LDA) measurements of the same case, showing that the method can be used for scale-resolving simulations of turbulence as well.
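As a rough illustration of the immersed-boundary idea (a body imposed on a Cartesian grid by masking, rather than by a body-fitted mesh), and not the paper's finite-volume scheme, consider steady heat conduction in a square cavity with an immersed cylinder held at a fixed temperature:

```python
import numpy as np

n = 41
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
body = (X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.15 ** 2   # immersed cylinder mask

T = np.zeros((n, n))
T[0, :] = 1.0        # one hot wall at T = 1; the other walls stay at T = 0

for _ in range(2000):
    # Jacobi sweep of the interior (Laplace equation on a uniform grid)
    T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1]
                            + T[1:-1, 2:] + T[1:-1, :-2])
    # direct forcing: impose the body temperature on all masked cells
    T[body] = 0.5
```

The body never appears in the grid generation; it only modifies the discretized system through the mask, which is the essential convenience of IBM for complex geometries.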
Energy Technology Data Exchange (ETDEWEB)
Woo, Tae Ho [Seoul National Univ. (Korea, Republic of). Dept. of Nuclear Engineering
2012-07-15
The power stabilization of nuclear power plants (NPPs) is investigated with respect to the liquid metal coolant. The risk analysis is quantified by the system dynamics (SD) method, which processes feedback and accumulation through complex algorithms. The Vensim software package, supported by the Monte Carlo method, is used for the simulations. Two kinds of considerations are addressed: economic and safety properties. The results show the stability of the operations when the power level can be decided, which indicates higher reactor efficiency. The failure frequency is 16/60 ≈ 27%. In the Power Stabilized event, the failure rate is quite low. The commercial use of the reactor is important to operations. (orig.)
Samoila, I. V.; Radulescu, V.; Moise, G.; Diaconu, A.; Radulescu, R.
2017-12-01
Combined geophysical acquisition technologies including High Resolution 2D Seismic (HR2D), Multi-Beam Echo-Sounding (MBES), Sub-Bottom Profiling (SBP) and Magnetometry were used in the Western Black Sea (offshore Romania) to identify possible geohazards, such as gas escaping surface sediments and tectonic hazard areas up to 1 km below the seafloor. The National Project was funded by the Research and Innovation Ministry of Romania, and has taken place over 1.5 years with the purpose of creating risk maps for the surveyed pilot area. Using an array of geophysical methods and creating a workflow to identify geohazard susceptible areas on the Romanian Black Sea continental shelf is important and beneficial for future research projects. The SBP and MBES data show disturbed areas that can be interpreted as gas escapes on the surface of the seafloor, and some escapes were confirmed on the HR2D profiles. Shallow gas indicators like gas chimneys and acoustic blanking are usually delimited by vertical, sub-vertical and/or quasi-horizontal faults that mark possible hazard areas on shallow sedimentary sections. Interpreted seismic profiles show three main markers: one delimiting the Pliocene-Quaternary boundary and two for the Miocene (Upper and Lower). Vertical and quasi-horizontal faults are characteristic for the Upper Miocene, while the Lower Miocene has NW-SE horizontal faults. Faults and possible hazard areas were marked on seismic sections and were further correlated with the MBES, SBP, Magnetometry and previously recorded data, such as earthquake epicenters scattered offshore in the Western Black Sea. The main fault systems likely to cause those earthquakes also aid the migration of gas if the faults are not sealed. We observed that the gas escapes were correlated with faults described on the recent seismic profiles. Mapping hazard areas will have an important contribution to better understand the recent evolution of the Western Black Sea basin but also for projecting
Energy Technology Data Exchange (ETDEWEB)
Vetere, V
2002-09-15
This thesis is related to comparative studies of the chemical properties of molecular complexes containing lanthanide or actinide trivalent cations, in the context of nuclear waste disposal. More precisely, our aim was a quantum chemical analysis of the metal-ligand bonding in such species. Various theoretical approaches were compared for the inclusion of correlation (density functional theory, multiconfigurational methods) and of relativistic effects (relativistic scalar and 2-component Hamiltonians, relativistic pseudopotentials). The performance of these methods was checked by comparing computed structural properties with published experimental data on small model systems: lanthanide and actinide tri-halides and X{sub 3}M-L species (X=F, Cl; M=La, Nd, U; L = NH{sub 3}, acetonitrile, CO). We have thus shown the good performance of density functionals combined with a quasi-relativistic method, as well as of gradient-corrected functionals associated with relativistic pseudopotentials. In contrast, functionals including some part of exact exchange are less reliable in reproducing experimental trends, and we have given a possible explanation for this result. A detailed analysis of the bonding then allowed us to interpret the discrepancies observed in the structural properties of uranium and lanthanide complexes in terms of a covalent contribution to the bonding in the case of uranium(III), which does not exist in the lanthanide(III) homologues. Finally, we examined larger systems, closer to experimental species, to analyse the influence of the coordination number, the counter-ions and the oxidation state of uranium on the metal-ligand bonding. (author)
Round, Jeff; Drake, Robyn; Kendall, Edward; Addicott, Rachael; Agelopoulos, Nicky; Jones, Louise
2015-03-01
We report the use of difference-in-differences (DiD) methodology to evaluate a complex, system-wide healthcare intervention. We use the worked example of evaluating the Marie Curie Delivering Choice Programme (DCP) for advanced illness in a large urban healthcare economy. DiD was selected because a randomised controlled trial was not feasible. The method allows for before and after comparison of changes that occur in an intervention site with a matched control site. This enables analysts to control for the effect of the intervention in the absence of a local control. Any policy, seasonal or other confounding effects over the test period are assumed to have occurred in a balanced way at both sites. Data were obtained from primary care trusts. Outcomes were place of death, inpatient admissions, length of stay and costs. Small changes were identified between pre- and post-DCP outputs in the intervention site. The proportion of home deaths and median cost increased slightly, while the number of admissions per patient and the average length of stay per admission decreased slightly. None of these changes was statistically significant. Effect estimates were limited by small numbers accessing new services and selection bias in the sample population and comparator site. In evaluating the effect of a complex healthcare intervention, the choice of analysis method and output measures is crucial. Alternatives to randomised controlled trials may be required for evaluating large scale complex interventions and the DiD approach is suitable, subject to careful selection of measured outputs and control population. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
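The before/after comparison across matched sites reduces to a double subtraction; a minimal sketch with illustrative numbers (not the study's data):

```python
def did_estimate(pre_treat, post_treat, pre_control, post_control):
    """Difference-in-differences: the (post - pre) change at the
    intervention site minus the (post - pre) change at the matched
    control site. Shared seasonal/policy effects cancel out."""
    return (post_treat - pre_treat) - (post_control - pre_control)

# e.g. mean cost per patient in hypothetical currency units
effect = did_estimate(pre_treat=1000.0, post_treat=1050.0,
                      pre_control=1000.0, post_control=1080.0)
```

Here both sites see costs rise, but the intervention site rises less, so the estimated intervention effect is negative; the parallel-trends assumption (balanced confounding at both sites) is exactly what the abstract states.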
Directory of Open Access Journals (Sweden)
Abdulbaset El Hadi Saad
2017-10-01
Advanced global optimization algorithms have been continuously introduced and improved to solve various complex design optimization problems for which the objective and constraint functions can only be evaluated through computation-intensive numerical analyses or simulations with a large number of design variables. The often implicit, multimodal, and ill-shaped objective and constraint functions in high-dimensional and "black-box" forms demand that the search be carried out using a low number of function evaluations with high search efficiency and good robustness. This work investigates the performance of six recently introduced, nature-inspired global optimization methods: Artificial Bee Colony (ABC), Firefly Algorithm (FFA), Cuckoo Search (CS), Bat Algorithm (BA), Flower Pollination Algorithm (FPA) and Grey Wolf Optimizer (GWO). These approaches are compared in terms of search efficiency and robustness in solving a set of representative benchmark problems in smooth-unimodal, non-smooth unimodal, smooth multimodal, and non-smooth multimodal function forms. In addition, four classic engineering optimization examples and a real-life complex mechanical system design optimization problem, floating offshore wind turbine design optimization, are used as additional test cases representing computationally expensive black-box global optimization problems. Results from this comparative study show that the ability of these global optimization methods to obtain a good solution diminishes as the dimension of the problem, or number of design variables, increases. Although none of these methods is universally capable, the study finds that GWO and ABC are more efficient on average than the other four in obtaining high quality solutions efficiently and consistently, solving 86% and 80% of the tested benchmark problems, respectively. The research contributes to future improvements of global optimization methods.
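A minimal sketch of one of the compared methods, the Grey Wolf Optimizer, on the smooth-unimodal sphere benchmark; the population size and iteration count are illustrative defaults, not the paper's settings:

```python
import numpy as np

def sphere(x):
    # smooth-unimodal benchmark: global minimum 0 at the origin
    return float(np.sum(x * x))

def gwo(objective, dim, lo, hi, n_wolves=20, iters=300, seed=0):
    """Canonical GWO update: each wolf moves toward the average of
    positions suggested by the three best wolves (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_wolves, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        f = np.array([objective(x) for x in X])
        order = np.argsort(f)
        if f[order[0]] < best_f:
            best_f, best_x = float(f[order[0]]), X[order[0]].copy()
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / iters)          # decreases linearly from 2 to 0
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            A = a * (2.0 * rng.random(X.shape) - 1.0)
            C = 2.0 * rng.random(X.shape)
            X_new += leader - A * np.abs(C * leader - X)
        X = np.clip(X_new / 3.0, lo, hi)     # average of the three pulls
    return best_x, best_f

best_x, best_f = gwo(sphere, dim=5, lo=-5.0, hi=5.0)
```

The shrinking coefficient `a` shifts the search from exploration to exploitation, which is one reason GWO performs well on the low-dimensional benchmarks while, as the abstract notes, performance degrades as the number of design variables grows.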
Directory of Open Access Journals (Sweden)
Yu. G. Romanova
2017-08-01
Treatment and prevention of inflammatory diseases of the parodontium are among the most difficult problems in stomatology today. Purpose of the research: to estimate the clinical efficiency of the combined local application of the developed agent apigel for oral cavity care together with low-frequency electromagnetic field magnetotherapy in the treatment of inflammatory diseases of the parodontium. Materials and methods: 46 patients with chronic generalized catarrhal gingivitis and chronic generalized periodontitis of the 1st degree were included in the study. Patients were divided into 2 groups depending on treatment management: basic (n = 23) and control (n = 23). Conventional treatment with local use of a dental gel with camomile was used in the control group. Patients of the basic group were treated with combined local application of apigel and magnetotherapy. Efficiency was estimated with clinical, laboratory, microbiological and functional (ultrasonic Doppler) methods of examination. Results: The application of apigel and a pulsating electromagnetic field in the complex treatment of patients with chronic generalized periodontitis (GhGP) caused positive changes in clinical symptoms and the condition of parodontal tissues, accompanied by a decline in hygienic and parodontal indexes. Compared with patients who received traditional anti-inflammatory therapy, patients treated with local application of apigel and magnetotherapy showed a decline in the incidence of edema. The decrease of pain correlated with improvement of the hygienic condition of the oral cavity and promoted prevention of bacterial contamination of damaged mucous membranes. Estimation of microvasculatory blood flow by ultrasonic Doppler flowmetry revealed more rapid normalization of the volume and linear systolic blood flow velocities in the parodontal tissues with the new complex local method. Conclusions: Effect of the developed local agent in patients
Fukuda, Takeshi; Kurabayashi, Tomokazu; Yamaki, Tatsuki
2016-04-01
A reprecipitation method has been investigated for fabricating colloidal nanoparticles of an Eu-complex. Herein, we investigated the optical degradation characteristics of (1,10-phenanthroline)tris[4,4,4-trifluoro-1-(2-thienyl)-1,3-butanedionato]europium(III) colloidal nanoparticles embedded into a silica glass film fabricated by a conventional sol-gel process. At first, we tried several good solvents for the reprecipitation method, and dimethyl sulfoxide (DMSO) was found to be a suitable solvent for realizing a small diameter and high long-term stability against ultraviolet irradiation, even though the boiling point of DMSO is higher than that of water, which was used as the poor solvent. By optimizing the good solvent and the concentration of the Eu-complex, a relative photoluminescence intensity of 0.96 was achieved even after the ultraviolet light had been continuously irradiated for 90 min. In addition, an average diameter of 106 nm was achieved when DMSO was used as the good solvent, resulting in high transmittance in the visible wavelength region. Therefore, we can achieve a transparent emissive thin film with a center wavelength of 612 nm whose optical degradation is drastically reduced by forming nanoparticles.
Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation
International Nuclear Information System (INIS)
Costa, Carlos A N; Campos, Itamara S; Costa, Jessé C; Neto, Francisco A; Schleicher, Jörg; Novais, Amélia
2013-01-01
Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality. (paper)
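A small-scale sketch of the iterative solver choice discussed above: BICGSTAB applied to a complex-valued banded system. The toy tridiagonal system below is diagonally dominant and far smaller than a migration operator; it only illustrates the solver call, not the complex Padé operator itself:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# illustrative complex-valued banded system (diagonally dominant so the
# iteration behaves well; a real unsplit FD migration operator is a
# large-band system coming from the complex Pade approximation)
n = 200
main = np.full(n, 4.0 + 1.0j)
off = np.full(n - 1, -1.0 + 0.0j)
A = diags([off, main, off], [-1, 0, 1], format="csr")

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0                         # point "source" on the right-hand side

x, info = bicgstab(A, b, maxiter=2000)  # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

In the paper's setting the complex Padé expansion plays a preconditioning role that reduces the BICGSTAB iteration count; here the diagonal dominance serves the same illustrative purpose.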
Alcántara-Concepción, Victor; Cram, Silke; Gibson, Richard; Ponce de León, Claudia; Mazari-Hiriart, Marisa
2013-01-01
The Xochimilco area in the southeastern part of Mexico City hosts a variety of socioeconomic activities, such as periurban agriculture, which is of great importance in the Mexico City metropolitan area. Pesticides are used extensively, some legal (mostly chlorpyrifos and malathion) and some illegal (mostly DDT). Sediments are a common sink for pesticides in aquatic systems near agricultural areas, and Xochimilco sediments have a complex composition with high contents of organic matter and clay that provide ideal adsorption sites for organochlorine (OC) and organophosphorus (OP) pesticides. It is therefore important to have a quick, affordable, and reliable method to determine these pesticides. Conventional methods for the determination of OC and OP pesticides are long, laborious, and costly owing to the high volumes of solvents and adsorbents required. The present study developed and validated a method for determining 18 OC and five OP pesticides in sediments with high organic and clay contents. In contrast with other methods described in the literature, this method allows isolation of the 23 pesticides with a 12 min microwave-assisted extraction (MAE) and a one-step cleanup. The method developed is a simpler, time-saving procedure that uses only 3.5 g of dry sediment, and the use of MAE eliminates excessive handling and the possible loss of analytes. It was shown that the use of LC-Si cartridges with hexane-ethyl acetate (75+25, v/v) in the cleanup procedure recovered all pesticides at rates between 70 and 120%. The validation parameters demonstrated good performance of the method, with intermediate precision ranging from 7.3 to 17.0%, HorRat indexes all below 0.5, and accuracy tests with the 23 pesticides at three concentration levels demonstrating recoveries ranging from 74 to 114% and RSDs from 3.3 to 12.7%.
International Nuclear Information System (INIS)
Aoki, Takayuki
2010-01-01
In this study, a quantitative method for evaluating maintenance level, which is determined by two factors, the maintenance plan and the field-work implementation ability of the maintenance crew, is discussed, together with a quantitative method for evaluating the safety level of a giant complex plant system. The following results were obtained. (1) Equipment condition after maintenance work is determined by two factors: the maintenance plan and the field-work implementation ability of the maintenance crew. The equipment condition determined by these two factors was named the 'equipment maintenance level', and a method for its quantitative evaluation was clarified. (2) The core damage frequency (CDF) of a nuclear power plant evaluated using a failure rate that reflects the above maintenance level differs considerably from the CDF evaluated using existing failure rates, which include a safety margin. The former CDF was named the 'plant safety level' of the plant system, and a method for its quantitative evaluation was clarified. (3) Enhancing the equipment maintenance level means improving maintenance quality, which in turn enhances the plant safety level. Therefore, the plant safety level should be continuously monitored as a plant performance indicator. (author)
Kang, Seokkoo; Borazjani, Iman; Sotiropoulos, Fotis
2008-11-01
Unsteady 3D simulation of flows in natural streams is a challenging task due to the complexity of the bathymetry, the shallowness of the flow, and the presence of multiple natural and man-made obstacles. This work is motivated by the need to develop a powerful numerical method for simulating such flows using coherent-structure-resolving turbulence models. We employ the curvilinear immersed boundary method of Ge and Sotiropoulos (Journal of Computational Physics, 2007) and address the critical issue of numerical efficiency in large-aspect-ratio computational domains and grids such as those encountered in long, shallow open channels. We show that a matrix-free Newton-Krylov method for solving the momentum equations, coupled with an algebraic multigrid method with an incomplete LU preconditioner for solving the Poisson equation, yields a robust and efficient procedure for obtaining time-accurate solutions to such problems. We demonstrate the potential of the numerical approach by carrying out a direct numerical simulation of flow in a long, shallow meandering stream with multiple hydraulic structures.
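The two solver components named above, a matrix-free Newton-Krylov iteration and a preconditioned Poisson solve, can be sketched with SciPy. The nonlinear residual and the 1D Poisson matrix below are toy assumptions, not the curvilinear immersed-boundary discretization, and SciPy has no algebraic multigrid, so ILU-preconditioned conjugate gradients stands in for the AMG/ILU combination.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.optimize import newton_krylov

# Matrix-free Newton-Krylov on a toy nonlinear "momentum" residual:
# no Jacobian is assembled; Jacobian-vector products are approximated
# by finite differences internally.
def residual(u):
    return u + 0.1 * u**3 - 1.0

u = newton_krylov(residual, np.zeros(50))

# Pressure-Poisson step: conjugate gradients on an SPD Laplacian,
# preconditioned with an incomplete-LU factorization.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
ilu = spla.spilu(A)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
p, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ p - b))
```

The preconditioner is what makes the Poisson step cheap on large, badly scaled grids; in the paper's setting the same role is played by AMG with ILU smoothing on the curvilinear mesh.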
Directory of Open Access Journals (Sweden)
E. V. Guseva
2016-01-01
Full Text Available Information technologies have now penetrated all spheres of human activity, including education. The main objective of this article is to present the advantages of the developed complex and to describe its structure. The article argues that the use of distance-learning tools has a significant impact on Russian education and provides conditions for the development of innovative teaching methods. It also describes the capabilities offered by the Moodle virtual distance-learning environment, which is attractive not only for its openness but also because it contains a large set of libraries, classes, and functions in the PHP programming language, making it a convenient tool for developing various online information systems. It is shown that the effectiveness of distance learning depends on the organization of the educational material. The basic modules of the course are outlined; this section provides a comprehensive understanding of the material. A testing system was developed for the verification and control of students' knowledge. In addition, a training package has been developed that contains information helping to assess the level of students' knowledge. The testing system includes a list of tests divided into sections and consists of questions of varying complexity. The questions are stored in a single database ('the bank of questions') and can be reused in one or more courses or sections. After a test is completed, the correct answers to its questions can be made available to the student. This module also includes tools for grading by the teacher. The article concludes that the virtual educational complex enables effective teaching and has a friendly interface that encourages students to continue their work through to successful completion.
Górecki, Marcin; Carpita, Luca; Arrico, Lorenzo; Zinna, Francesco; Di Bari, Lorenzo
2018-05-29
We studied enantiopure chiral trivalent lanthanide (Ln3+ = La3+, Sm3+, Eu3+, Gd3+, Tm3+, and Yb3+) complexes with two fluorinated achiral tris(β-diketonate) ligands (HFA = hexafluoroacetylacetonate and TTA = 2-thenoyltrifluoroacetonate), incorporating a chiral bis(oxazolinyl)pyridine (PyBox) unit as a neutral ancillary ligand, by the combined use of optical and chiroptical methods, ranging from UV to IR both in absorption and circular dichroism (CD), and including circularly polarized luminescence (CPL). Ultimately, all the spectroscopic information is integrated into a total and a chiroptical super-spectrum, which allows one to characterize a multidimensional chemical space spanned by the different Ln3+ ions, the acidity and steric demand of the diketone, and the chirality of the PyBox ligand. In all cases, the Ln3+ ions endow the systems with peculiar chiroptical properties, either allied to f-f transitions or induced by the metal onto the ligand. In more detail, we found that Sm3+ complexes display interesting CPL features, which partly superimpose on and partly complement the more common Eu3+ properties. In particular, in the context of security tags, the Sm/Eu pair may be a winning choice for chiroptical barcoding.
International Nuclear Information System (INIS)
Altiparmakov, D.
1988-12-01
This analysis is part of the report 'Implementation of the geometry module of the 05R code in another Monte Carlo code', chapter 6.0: establishment of future activity related to geometry in the Monte Carlo method. The introduction points out problems in solving complex three-dimensional models that motivate the development of more efficient geometry modules for Monte Carlo calculations. The second part formulates the problem and the geometry module. Two fundamental questions must be solved: (1) for a given point, determine the material region or boundary to which it belongs; and (2) for a given direction, determine all intersection points with material region boundaries. The third part deals with a possible connection to Monte Carlo calculations for computer simulation of geometry objects. R-function theory enables the creation of a geometry module based on the same logic as constructive geometry codes: complex regions are built from elementary regions by set operations. R-functions can efficiently replace three-valued logic functions in all significant models. They are all the more appropriate for application since three-valued logic is not natural for digital computers, which operate in two-valued logic. This shows that work is needed in this field. It is shown that an interactive code for computer modelling of geometry objects could be developed in parallel with the development of the geometry module
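The R-function idea mentioned above, replacing three-valued membership logic by real-valued arithmetic on implicit region functions, can be illustrated briefly. The sketch below uses the common Rvachev R0 forms for intersection and union applied to two spheres; the shapes and test points are assumptions chosen for illustration.

```python
import numpy as np

# R-function (Rvachev) sketch: an implicit function f >= 0 defines a region,
# and Boolean set operations become arithmetic on the f-values, so point
# membership tests need only the sign of a single real number.
def r_and(f1, f2):   # intersection (R0 conjunction)
    return f1 + f2 - np.sqrt(f1**2 + f2**2)

def r_or(f1, f2):    # union (R0 disjunction)
    return f1 + f2 + np.sqrt(f1**2 + f2**2)

def sphere(center, radius):
    c = np.asarray(center)
    return lambda p: radius**2 - np.sum((np.asarray(p) - c)**2)

s1 = sphere([0.0, 0.0, 0.0], 1.0)
s2 = sphere([1.5, 0.0, 0.0], 1.0)

p = np.array([0.75, 0.0, 0.0])      # lies inside both spheres
print(r_and(s1(p), s2(p)) > 0)      # in the intersection -> True
print(r_or(s1(p), s2(p)) > 0)       # in the union -> True
```

A complex region built this way is queried exactly like an elementary one, which is the property that makes R-functions attractive for Monte Carlo geometry modules.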
International Nuclear Information System (INIS)
Caffrey, Martin
2015-01-01
A comprehensive and up-to-date review of the lipid cubic phase or in meso method for crystallizing membrane and soluble proteins and complexes is reported. Recent applications of the method for in situ serial crystallography at X-ray free-electron lasers and synchrotrons are described. The lipid cubic phase or in meso method is a robust approach for crystallizing membrane proteins for structure determination. The uptake of the method is such that it is experiencing what can only be described as explosive growth. This timely, comprehensive and up-to-date review introduces the reader to the practice of in meso crystallogenesis, to the associated challenges and to their solutions. A model of how crystallization comes about mechanistically is presented for a more rational approach to crystallization. The possible involvement of the lamellar and inverted hexagonal phases in crystallogenesis and the application of the method to water-soluble, monotopic and lipid-anchored proteins are addressed. How to set up trials manually and automatically with a robot is introduced with reference to open-access online videos that provide a practical guide to all aspects of the method. These range from protein reconstitution to crystal harvesting from the hosting mesophase, which is noted for its viscosity and stickiness. The sponge phase, as an alternative medium in which to perform crystallization, is described. The compatibility of the method with additive lipids, detergents, precipitant-screen components and materials carried along with the protein such as denaturants and reducing agents is considered. The powerful host and additive lipid-screening strategies are described along with how samples that have low protein concentration and cell-free expressed protein can be used. Assaying the protein reconstituted in the bilayer of the cubic phase for function is an important element of quality control and is detailed. Host lipid design for crystallization at low temperatures and for
Directory of Open Access Journals (Sweden)
Martin Hitziger
2014-01-01
Full Text Available A digital soil mapping approach is applied to a complex, mountainous terrain in the Ecuadorian Andes. Relief features are derived from a digital elevation model and used as predictors for the topsoil texture classes sand, silt, and clay. The performance of three statistical learning methods is compared: linear regression, random forest, and stochastic gradient boosting of regression trees. In linear regression, a stepwise backward variable selection procedure is applied and overfitting is controlled by minimizing Mallows' Cp. For random forest and boosting, the effect of predictor selection and tuning procedures is assessed. 100-fold repetitions of a 5-fold cross-validation of the selected modelling procedures are employed for validation, uncertainty assessment, and method comparison. Absolute assessment of model performance is achieved by comparing the prediction error of the selected method with that of the mean. Boosting performs best, providing predictions that are reliably better than the mean; the median reduction of the root mean square error is around 5%. Elevation is the most important predictor. All models clearly distinguish ridges and slopes. The predicted texture patterns are interpreted as the result of catena sequences (eluviation of fine particles on slope shoulders) and landslides (mixing of mineral soil horizons on slopes).
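The validation design described above, repeated k-fold cross-validation with skill measured against the trivial mean predictor, can be sketched with synthetic data. Ordinary least squares stands in for the boosting model here, and the predictors, sample size, and noise level are invented for illustration.

```python
import numpy as np

# Repeated k-fold cross-validation comparing a model's RMSE against the
# mean predictor (the skill baseline used in the text). The synthetic
# predictors play the role of terrain attributes such as elevation.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=150)

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

def repeated_cv(X, y, k=5, reps=10):
    model_scores, baseline_scores = [], []
    for _ in range(reps):
        idx = rng.permutation(len(y))
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
            model_scores.append(rmse(X[fold] @ beta, y[fold]))
            baseline_scores.append(rmse(np.full(len(fold), y[train].mean()), y[fold]))
    return np.median(model_scores), np.median(baseline_scores)

model_rmse, mean_rmse = repeated_cv(X, y)
print(model_rmse < mean_rmse)   # the model beats the mean baseline
```

Reporting the median over all repetitions, as here, is what gives the "reliably better than the mean" statement its meaning: a single lucky split cannot produce it.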
Cocolin, Luca; Dolci, Paola; Rantsiou, Kalliopi
2011-11-01
The ecology of fermented sausages is complex and includes different species and strains of bacteria, yeasts and molds. Developments in the field of molecular biology have made new methods available that can be applied to better understand the dynamics and diversity of the microorganisms involved in the production of sausages. Methods such as denaturing gradient gel electrophoresis (DGGE), employed as a culture-independent approach, make it possible to follow the microbial dynamics during fermentation and ripening. This approach has highlighted that two main species of lactic acid bacteria, namely Lactobacillus sakei and Lb. curvatus, are involved in the transformation process and that they are accompanied by Staphylococcus xylosus as a representative of the coagulase-negative cocci. These findings have been repeatedly confirmed in different regions of the world, mainly in the Mediterranean countries where dry fermented sausages have a long tradition and history. The application of molecular methods to the identification and characterization of strains isolated from fermentations has revealed a high degree of diversity within the species mentioned above, underlining the need to follow strain dynamics more closely during the transformation process. While a substantial number of papers deal with bacterial ecology using molecular methods, studies on the mycobiota of fermented sausages are few. This review reports on how the application of molecular methods has enabled a better comprehension of sausage fermentations, opening up new fields of research that in the near future will make it possible to unravel the connection between sensory properties and the co-presence of multiple strains of the same species. Copyright © 2011 Elsevier Ltd. All rights reserved.
Novikova, V.; Nikolaeva, O.
2017-11-01
In this article the authors consider a cognitive method for managing the investment-and-construction complex under crisis conditions. The factors influencing the choice of an investment strategy are studied, and the main lines of activity in the field of crisis management are defined from the standpoint of mathematical modelling. A general approach to decision-making on investment in real assets is proposed, based on discrete systems and the theory of optimal control. The stated problem is solved using the discrete maximum principle, and a numerical algorithm for determining the optimal investment control is formulated. Analytical solutions are obtained for the case of constant profitability of fixed assets.
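A minimal sketch of a discrete-time investment control problem, solved here by backward induction (a dynamic-programming relative of the discrete maximum principle used in the article): each period a fraction u of capital is reinvested at constant profitability rho and the rest is paid out. The payoff structure and all numbers are assumptions, not taken from the paper.

```python
import numpy as np

# Discrete-time investment control: choose the reinvested fraction u each
# period to maximize total payout plus terminal capital, with constant
# asset profitability rho. Solved exactly by backward recursion.
rho, T = 0.12, 5                 # profitability per period, horizon
controls = [0.0, 0.5, 1.0]       # admissible reinvestment fractions

def value(k, t):
    """Return (best total payoff, optimal control sequence) from capital k at time t."""
    if t == T:
        return k, []             # collect remaining capital at the horizon
    best, plan = -np.inf, []
    for u in controls:
        payout = (1 - u) * k                       # dividend taken now
        v, tail = value((1 + rho) * u * k, t + 1)  # reinvested capital grows
        if payout + v > best:
            best, plan = payout + v, [u] + tail
    return best, plan

best_value, policy = value(100.0, 0)
print(round(best_value, 2), policy)
```

With rho > 0 and no discounting, full reinvestment every period is strictly optimal in this toy setting, so the recursion returns the all-ones policy; the article's constant-profitability case admits the analogous closed-form answer.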
DEFF Research Database (Denmark)
Vallejo, R L; Rexroad III, C E; Silverstein, J T
2009-01-01
As a first step toward the genetic mapping of QTL affecting stress response variation in rainbow trout, we performed complex segregation analyses (CSA) fitting mixed inheritance models of plasma cortisol by using Bayesian methods in large full-sib families of rainbow trout. To date, no studies have been conducted to determine the mode of inheritance of stress response as measured by plasma cortisol response when using a crowding stress paradigm and CSA in rainbow trout. The main objective of this study was to determine the mode of inheritance of plasma cortisol after a crowding stress. The results from fitting mixed inheritance models with Bayesian CSA suggest that 1 or more major genes with dominant cortisol-decreasing alleles and small additive genetic effects of a large number of independent genes likely underlie the genetic variation of plasma cortisol in the rainbow trout families…
Directory of Open Access Journals (Sweden)
Kuo Zhang
2018-01-01
Full Text Available The mechanisms of acupuncture are still unclear. To reveal the regulatory effect of manual acupuncture (MA) on the neuroendocrine-immune (NEI) network and to identify the key signaling molecules involved when MA modulates the NEI network, we used a rat complete Freund's adjuvant (CFA) model to observe the analgesic and anti-inflammatory effects of MA, and applied statistical and complex-network methods to analyze data on the expression of 55 common NEI-network signaling molecules in the ST36 (Zusanli) acupoint, serum, and hind-foot-pad tissue. The results indicate that MA has significant analgesic and anti-inflammatory effects in CFA rats and that the identified signaling molecules may play a key role when MA regulates the NEI network, although further research is needed.
International Nuclear Information System (INIS)
Kleiner, S.C.; Dickman, R.L.
1985-01-01
The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large-scale velocity structure of the Taurus dark cloud complex traced by the present 13CO J = 1-0 observations of this region. The results obtained are discussed in the context of recent suggestions that the widely observed correlations between molecular cloud line widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, which involves mapping a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map. 61 references
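The centroid-map statistic described above can be sketched numerically: take a velocity-centroid field and compute its normalized spatial ACF with FFTs. The smoothed-noise field below stands in for real 13CO centroid data, and the grid size and smoothing scale are assumptions.

```python
import numpy as np

# Sketch: spatial autocorrelation function (ACF) of a velocity-centroid map.
# The synthetic field is white noise low-pass filtered in Fourier space to
# give it a finite correlation length, mimicking turbulent velocity structure.
rng = np.random.default_rng(1)
n = 64
field = rng.normal(size=(n, n))
k = np.fft.fftfreq(n)
lowpass = np.exp(-(k[:, None]**2 + k[None, :]**2) / (2 * 0.05**2))
field = np.real(np.fft.ifft2(np.fft.fft2(field) * lowpass))

def acf2d(f):
    """Normalized circular spatial ACF of a 2D field via the FFT."""
    f = f - f.mean()
    F = np.fft.fft2(f)
    corr = np.real(np.fft.ifft2(F * np.conj(F)))
    return corr / corr[0, 0]          # normalize so ACF(0) = 1

acf = acf2d(field)
print(acf[0, 0], acf[0, 1] > acf[0, 10])   # unity at zero lag; decays with lag
```

The decay scale of this ACF is the estimate of the turbulent correlation length, which is the quantity the centroid-map method extracts from observations.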
Directory of Open Access Journals (Sweden)
Chung-Liang Chang
2014-01-01
Full Text Available A compressive-sensing-based array processing method is proposed to lower the complexity and computational load of the array system while maintaining robust anti-jam performance in a global navigation satellite system (GNSS) receiver. First, the spatial and temporal compression matrices are multiplied with the array signal, resulting in a small-size array system. Second, a two-dimensional (2D) minimum variance distortionless response (MVDR) beamformer is employed in the proposed system to mitigate narrowband and wideband interference simultaneously. An iterative process finds the optimal spatial and temporal gain vectors by the MVDR approach, enhancing the steering gain toward the direction of arrival (DOA) of interest while placing a null at the DOA of the interference. Finally, a simulated navigation signal generated offline with a graphical-user-interface tool is employed in the proposed algorithm, and the theoretical analysis is verified against the simulated results.
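The MVDR step named in the abstract can be written compactly: with interference-plus-noise covariance R and steering vector a, the weights are w = R⁻¹a / (aᴴR⁻¹a), giving unit gain toward the desired DOA and suppressing the jammer. The sketch below applies this to an 8-element uniform linear array; the DOAs, jammer power, and geometry are illustrative assumptions, and the paper's spatial/temporal compression stage is omitted.

```python
import numpy as np

# MVDR beamformer sketch for a uniform linear array: distortionless response
# toward the desired DOA, a deep null toward the jammer.
M, d = 8, 0.5                         # number of elements, spacing in wavelengths

def steer(theta_deg):
    """Steering vector for a plane wave arriving from theta_deg off broadside."""
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

a_sig = steer(0.0)                    # desired satellite direction
a_jam = steer(40.0)                   # jammer direction
R = 100.0 * np.outer(a_jam, a_jam.conj()) + np.eye(M)   # jammer + unit noise

Rinv_a = np.linalg.solve(R, a_sig)
w = Rinv_a / (a_sig.conj() @ Rinv_a)  # MVDR weights

print(abs(w.conj() @ a_sig))          # ~1: distortionless toward the signal
print(abs(w.conj() @ a_jam))          # near zero: null toward the jammer
```

In the proposed system the same weight computation is iterated on the compressed spatial and temporal snapshots, which is what keeps the computational load low.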