International Nuclear Information System (INIS)
Braendas, E.
1986-01-01
The method of complex scaling is taken to include bound states, resonances, the remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented.
Continuum Level Density in Complex Scaling Method
International Nuclear Information System (INIS)
Suzuki, R.; Myo, T.; Kato, K.
2005-01-01
A new method for calculating the continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.
Level density in the complex scaling method
International Nuclear Information System (INIS)
Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki
2005-01-01
It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex-scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
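The resonance part of the level density discussed above can be sketched numerically: in the CSM, each resonance eigenvalue E_r − iΓ/2 contributes a unit-weight Lorentzian to the continuum level density. The pole value below is a hypothetical illustration, not a result from the paper.

```python
import numpy as np

def resonance_level_density(E, poles):
    """Sum of unit-weight Lorentzians, one per complex-scaled resonance pole.

    Each pole E_r - i*Gamma/2 contributes
    (1/pi) * (Gamma/2) / ((E - E_r)^2 + (Gamma/2)^2) to the level density.
    """
    rho = np.zeros_like(E, dtype=float)
    for pole in poles:
        e_r, half_gamma = pole.real, -pole.imag
        rho += half_gamma / np.pi / ((E - e_r) ** 2 + half_gamma ** 2)
    return rho

# hypothetical resonance at E_r = 1.0 with width Gamma = 0.2
E = np.linspace(-50.0, 50.0, 100001)
rho = resonance_level_density(E, [1.0 - 0.1j])
dE = E[1] - E[0]
print(rho.sum() * dE)  # close to 1: each pole carries one state
```

Integrating the Lorentzian over a wide energy window recovers one state per pole, which is the counting property the extended completeness relation guarantees.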
International Nuclear Information System (INIS)
Horner, D.A.; Colgan, J.; Martin, F.; McCurdy, C.W.; Pindzola, M.S.; Rescigno, T.N.
2004-01-01
Symmetrized complex amplitudes for the double photoionization of helium are computed by the time-dependent close-coupling and exterior complex scaling methods, and it is demonstrated that both methods are capable of the direct calculation of these amplitudes. The results are found to be in excellent agreement with each other and in very good agreement with results of other ab initio methods and experiment
Özen, Hamit; Turan, Selahattin
2017-01-01
This study was designed to develop the Complex Adaptive Leadership for School Principals (CAL-SP) scale and examine its psychometric properties. An exploratory sequential mixed-methods design (ES-MMD) was used: both qualitative and quantitative methods were employed to develop and assess the psychometric properties of the questionnaire. This study…
Directory of Open Access Journals (Sweden)
Ru Liang
2018-01-01
The magnitude of business dynamics has increased rapidly due to the growing complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to "go it alone". As a consequence, joint-venture contractors with diverse strengths and weaknesses cooperatively bid for such projects. Understanding project complexity and deciding on the optimal joint-venture contractor is challenging. This paper studies how to select joint-venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one is based on ideal points and the other on balanced ideal advantages. Both methods consider individual differences in expert judgment and contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China is used to demonstrate how to apply the two methods and their advantages.
Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions
International Nuclear Information System (INIS)
Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor
2004-01-01
A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)
Continuum level density of a coupled-channel system in the complex scaling method
International Nuclear Information System (INIS)
Suzuki, Ryusuke; Kato, Kiyoshi; Kruppa, Andras; Giraud, Bertrand G.
2008-01-01
We study the continuum level density (CLD) in the formalism of the complex scaling method (CSM) for coupled-channel systems. We apply the formalism to the ⁴He = [³H+p] + [³He+n] coupled-channel cluster model, where there are resonances at low energy. Numerical calculations of the CLD in the CSM with a finite number of L² basis functions are consistent with the exact result calculated from the S-matrix by solving the coupled-channel equations. We also study channel densities. In this framework, the extended completeness relation (ECR) plays an important role. (author)
A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things
Wang, Yongheng; Cao, Kening
2014-01-01
The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is design...
Three-body Coulomb breakup of 11Li in the complex scaling method
International Nuclear Information System (INIS)
Myo, Takayuki; Aoyama, Shigeyoshi; Kato, Kiyoshi; Ikeda, Kiyomi
2003-01-01
Coulomb breakup strengths of ¹¹Li into the three-body ⁹Li+n+n system are studied in the complex scaling method. We decompose the transition strengths into the contributions from three-body resonances, and from two-body "¹⁰Li+n" and three-body "⁹Li+n+n" continuum states. In the calculated results, we cannot find dipole resonances with a sharp decay width in ¹¹Li. There is a low-energy enhancement in the breakup strength, which is produced by both the two- and three-body continuum states. The enhancement given by the three-body continuum states is found to have a strong connection to the halo structure of ¹¹Li. The calculated breakup strength distribution is compared with the experimental data from MSU, RIKEN and GSI.
Recent developments in complex scaling
International Nuclear Information System (INIS)
Rescigno, T.N.
1980-01-01
Some recent developments in the use of complex basis-function techniques to study resonant, as well as certain types of non-resonant, scattering phenomena are discussed. Complex scaling techniques and other closely related methods have continued to attract the attention of computational physicists and chemists and have now reached a point of development where meaningful calculations on many-electron atoms and molecules are beginning to appear feasible.
The method of measurement and synchronization control for large-scale complex loading system
International Nuclear Information System (INIS)
Liao Min; Li Pengyuan; Hou Binglin; Chi Chengfang; Zhang Bo
2012-01-01
With the development of modern industrial technology, measurement and control systems have become widely used in high-precision, complex industrial control equipment and large-tonnage loading devices. A measurement and control system is often used to analyze the distribution of stress and displacement in a complex bearing load or in the mechanical structure itself. In the ITER GS mock-up with 5 flexible plates, for each load combination it is necessary to detect and measure potential slippage between the central flexible plate and the neighboring spacers, as well as between each pre-stressing bar and its neighboring plate. The measurement and control system consists of seven sets of EDC controllers and boards, a computer system, a 16-channel quasi-dynamic strain gauge, 25 sets of displacement sensors, and 7 sets of load and displacement sensors in the cylinders. This paper demonstrates the principles and methods by which the EDC220 digital controller achieves synchronization control, and the R and D process of the multi-channel loading control software and measurement software. (authors)
Energy Technology Data Exchange (ETDEWEB)
Shi, Min [Anhui University, School of Physics and Materials Science, Hefei (China); RIKEN Nishina Center, Wako (Japan); Shi, Xin-Xing; Guo, Jian-You [Anhui University, School of Physics and Materials Science, Hefei (China); Niu, Zhong-Ming [Anhui University, School of Physics and Materials Science, Hefei (China); Interdisciplinary Theoretical Science Research Group, RIKEN, Wako (Japan); Sun, Ting-Ting [Zhengzhou University, School of Physics and Engineering, Zhengzhou (China)
2017-03-15
We have extended the complex scaled Green's function method to the relativistic framework describing deformed nuclei, with the theoretical formalism presented in detail. We have checked the applicability and validity of the present formalism for exploring resonances in deformed nuclei. Furthermore, we have studied the dependence of resonances on nuclear deformation and the shape of the potential, which is helpful for recognizing the evolution of resonant levels from stable nuclei to exotic nuclei with axial quadrupole deformations. (orig.)
International Nuclear Information System (INIS)
Kleiner, S.C.; Dickman, R.L.
1985-01-01
The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large-scale velocity structure of the Taurus dark cloud complex traced by the present ¹³CO J = 1-0 observations of this region. The results obtained are discussed in the context of recent suggestions that widely observed correlations between molecular cloud line widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, involving the mapping of a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map. 61 references
Large-scale Complex IT Systems
Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard
2011-01-01
This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...
Large-scale complex IT systems
Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard
2012-01-01
12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...
Complex scaling in the cluster model
International Nuclear Information System (INIS)
Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.
1987-01-01
To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation, the complex scaling requires only minor changes in the formulae and code. Finding the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in ⁸Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity, and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from the pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2ⁿ). The Berlekamp-Massey algorithm, by contrast, needs O(N²), where N (≈ 2ⁿ) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity, so the linear complexity is generally given only as an estimate. Because the linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of the linear complexity.
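For comparison with the linearization method above, the Berlekamp-Massey algorithm it is benchmarked against can be sketched over GF(2); this is a generic textbook implementation (not Tanaka's method), and the test sequence is illustrative.

```python
def berlekamp_massey(s):
    """Return the linear complexity L of a binary sequence s (list of 0/1 ints).

    Maintains a connection polynomial c(x) and the last failing polynomial b(x);
    whenever the next bit disagrees with the current recurrence (discrepancy d=1),
    c is corrected by a shifted copy of b.
    """
    n = len(s)
    c = [1] + [0] * n   # current connection polynomial
    b = [1] + [0] * n   # previous connection polynomial
    L, m = 0, -1
    for i in range(n):
        # discrepancy: s[i] + sum_{j=1..L} c[j]*s[i-j]  (mod 2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[shift + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# bits from a 3-stage LFSR with feedback s[k] = s[k-1] XOR s[k-3]
seq = [1, 0, 0]
for _ in range(13):
    seq.append(seq[-1] ^ seq[-3])
print(berlekamp_massey(seq))  # -> 3
```

Note the O(N²) cost the abstract mentions: the discrepancy loop runs up to L times for each of the N bits, and L can grow to N/2.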
Information geometric methods for complexity
Felice, Domenico; Cafaro, Carlo; Mancini, Stefano
2018-03-01
Research on the use of information geometry (IG) in modern physics has witnessed significant advances recently. In this review article, we report on the utilization of IG methods to define measures of complexity in both classical and, whenever available, quantum physical settings. A paradigmatic example of a dramatic change in complexity is given by phase transitions (PTs). Hence, we review both global and local aspects of PTs described in terms of the scalar curvature of the parameter manifold and the components of the metric tensor, respectively. We also report on the behavior of geodesic paths on the parameter manifold used to gain insight into the dynamics of PTs. Going further, we survey measures of complexity arising in the geometric framework. In particular, we quantify complexity of networks in terms of the Riemannian volume of the parameter space of a statistical manifold associated with a given network. We are also concerned with complexity measures that account for the interactions of a given number of parts of a system that cannot be described in terms of a smaller number of parts of the system. Finally, we investigate complexity measures of entropic motion on curved statistical manifolds that arise from a probabilistic description of physical systems in the presence of limited information. The Kullback-Leibler divergence, the distance to an exponential family, and volumes of curved parameter manifolds are examples of essential IG notions exploited in our discussion of complexity. We conclude by discussing strengths, limits, and possible future applications of IG methods to the physics of complexity.
International Nuclear Information System (INIS)
Zhang Fang-Fang; Liu Shu-Tang; Yu Wei-Yong
2013-01-01
To increase the variety and security of communication, we present the definitions of modified projective synchronization with complex scaling factors (CMPS) of real chaotic systems and complex chaotic systems, where complex scaling factors establish a link between real chaos and complex chaos. Considering all situations of unknown parameters and pseudo-gradient condition, we design adaptive CMPS schemes based on the speed-gradient method for the real drive chaotic system and complex response chaotic system and for the complex drive chaotic system and the real response chaotic system, respectively. The convergence factors and dynamical control strength are added to regulate the convergence speed and increase robustness. Numerical simulations verify the feasibility and effectiveness of the presented schemes. (general)
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and that these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach that applies the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractional Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
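The multi-scale machinery the abstract builds on can be sketched as the classic coarse-graining plus sample-entropy recipe of standard MSE (a hedged sketch, not the authors' EMD-based intrinsic-entropy variant; function names and parameters are illustrative).

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """SampEn: -log of the fraction of m-matches that remain (m+1)-matches."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # common tolerance: 20% of signal SD
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(t) - 1):
            # Chebyshev distance; self-matches excluded
            count += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r)
        return count
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
noisy = rng.standard_normal(400)
regular = np.tile([0.0, 1.0], 200)
# a strictly periodic signal is far more predictable than noise
print(sample_entropy(regular), sample_entropy(noisy))
```

In MSE proper, `sample_entropy` is evaluated on `coarse_grain(x, scale)` for a range of scales; the paper replaces the coarse-graining step with empirical mode decomposition.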
Energy Technology Data Exchange (ETDEWEB)
Wu, Xiaokun; Han, Min; Ming, Dengming, E-mail: dming@fudan.edu.cn [Department of Physiology and Biophysics, School of Life Sciences, Fudan University, Shanghai (China)
2015-10-07
Membrane proteins play critically important roles in many cellular activities, such as ion and small-molecule transport, signal recognition, and transduction. To fulfill their functions, these proteins must be placed in different membrane environments, and a variety of protein-lipid interactions may affect their behavior. One of the key effects of protein-lipid interactions is their ability to change the dynamic status of membrane proteins, thus adjusting their functions. Here, we present a multi-scaled normal mode analysis (mNMA) method to study the dynamics perturbation imposed on membrane proteins by lipid bilayer membrane fluctuations. In mNMA, channel proteins are simulated at the all-atom level while the membrane is described with a coarse-grained model. mNMA calculations clearly show that channel gating motion can couple tightly with a variety of membrane deformations, including bending and twisting. We then examined bi-channel systems in which two channels were separated by different distances. From mNMA calculations, we observed both positive and negative gating correlations between neighboring channels, and the correlation has a maximum when the channel center-to-center distance is close to 2.5 times their diameter. This distance is larger than the recently found maximum attraction distance between two proteins embedded in a membrane, which is 1.5 times the protein size, indicating that membrane fluctuations might impose collective motions among proteins within a larger area. The hybrid-resolution feature of mNMA provides atomic dynamics information for key components of the system without large computational cost. We expect it to become a routine simulation tool for ordinary laboratories to study the dynamics of very complicated biological assemblies. The source code is available upon request to the authors.
Scattering methods in complex fluids
Chen, Sow-Hsin
2015-01-01
Summarising recent research on the physics of complex liquids, this in-depth analysis examines the topic of complex liquids from a modern perspective, addressing experimental, computational and theoretical aspects of the field. Selecting only the most interesting contemporary developments in this rich field of research, the authors present multiple examples including aggregation, gel formation and glass transition, in systems undergoing percolation, at criticality, or in supercooled states. Connecting experiments and simulation with key theoretical principles, and covering numerous systems including micelles, micro-emulsions, biological systems, and cement pastes, this unique text is an invaluable resource for graduate students and researchers looking to explore and understand the expanding field of complex fluids.
Scaling up complex interventions: insights from a realist synthesis.
Willis, Cameron D; Riley, Barbara L; Stockton, Lisa; Abramowicz, Aneta; Zummach, Dana; Wong, Geoff; Robinson, Kerry L; Best, Allan
2016-12-19
legislation, or agreements with new funding partners.This synthesis applies and advances theory, realist methods and the practice of scaling up complex interventions. Practitioners may benefit from a number of coordinated efforts, including conducting or commissioning evaluations at strategic moments, mobilising local and political support through relevant partnerships, and promoting ongoing knowledge exchange in peer learning networks. Action research studies guided by these findings, and studies on knowledge translation for realist syntheses are promising future directions.
DEFF Research Database (Denmark)
Troen, Ib; Bechmann, Andreas; Kelly, Mark C.
2014-01-01
Using the Wind Atlas methodology to predict the average wind speed at one location from measured climatological wind frequency distributions at another nearby location we analyse the relative prediction errors using a linearized flow model (IBZ) and a more physically correct fully non-linear 3D...... flow model (CFD) for a number of sites in very complex terrain (large terrain slopes). We first briefly describe the Wind Atlas methodology as implemented in WAsP and the specifics of the “classical” model setup and the new setup allowing the use of the CFD computation engine. We discuss some known...
Scaling as an Organizational Method
DEFF Research Database (Denmark)
Papazu, Irina Maria Clara Hansen; Nelund, Mette
2018-01-01
Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role...... turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice....
International Nuclear Information System (INIS)
Bondar, Y.; Konoplia, E.
1999-01-01
Natural factors have had a considerable aggregate effect on the self-purification of soils contaminated with radioactive pollution. Factors such as natural decay, vertical migration of nuclides through the soil profile, and cyclic removal of nuclides from the soil by vegetation have been analyzed. The Belarus Polessie soils contaminated as a result of the Chernobyl catastrophe show that during the past 13 years a 1.5-1.7-fold decrease of long-lived radionuclides has taken place in the rooting layer. The qualitative characteristics of the soil purification process by phytocoenosis have been established, and the effectiveness and limitations of this method have been demonstrated. The effect of microbiological soil processes on radionuclide mobility has been studied, and the intensification of the migration process by means of optimal nutrient media has been considered. Hydroseparation of highly dispersed soil particles, with simultaneous consideration of the soil organic substance content, allows a purification coefficient (C_pur) of 1.5-2 to be attained. A further increase of C_pur leads to irreversible loss of humus substances, depriving the soil of its fertility; in addition, the quantity of solid waste that must be contained increases dramatically. A soil cut has been carried out on an experimental plot. It has been shown that the effectiveness of this method is high in comparison with other appropriate methods. However, with time, the purification rate decreases because migration carries the radionuclides beyond the cut layer. (author)
Large-Scale Optimization for Bayesian Inference in Complex Systems
Energy Technology Data Exchange (ETDEWEB)
Willcox, Karen [MIT; Marzouk, Youssef [MIT
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to
A study of complex scaling transformation using the Wigner representation of wavefunctions.
Kaprálová-Ždánská, Petra Ruth
2011-05-28
The complex scaling operator exp(−θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO²⁺ vibronic resonances. © 2011 American Institute of Physics.
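A minimal numerical illustration of the transformation studied above (a toy harmonic-oscillator model, not the molecular computation reported in the paper): under r → re^{iθ} the oscillator Hamiltonian becomes H(θ) = −e^{−2iθ}p²/2 + e^{2iθ}x²/2, and a defining property of complex scaling is that bound-state eigenvalues stay on the real axis, independent of θ.

```python
import numpy as np

def complex_scaled_oscillator_spectrum(theta, n_grid=600, x_max=8.0):
    """Finite-difference eigenvalues of the complex-scaled harmonic oscillator
    H(theta) = -exp(-2i*theta)/2 d^2/dx^2 + exp(2i*theta) x^2/2  (hbar = m = omega = 1)."""
    x = np.linspace(-x_max, x_max, n_grid)
    dx = x[1] - x[0]
    # three-point second-difference Laplacian
    lap = (np.diag(np.full(n_grid - 1, 1.0), -1)
           - 2.0 * np.eye(n_grid)
           + np.diag(np.full(n_grid - 1, 1.0), 1)) / dx**2
    h = -np.exp(-2j * theta) * lap / 2.0 + np.diag(np.exp(2j * theta) * x**2 / 2.0)
    return np.linalg.eigvals(h)   # non-Hermitian for theta != 0

for theta in (0.0, 0.2, 0.4):
    ev = complex_scaled_oscillator_spectrum(theta)
    e0 = ev[np.argmin(np.abs(ev - 0.5))]
    print(theta, e0)   # ground state stays near 0.5 + 0i for every theta
```

For a potential supporting resonances (unlike this purely bound toy model), the same rotation exposes resonance poles at complex eigenvalues E_r − iΓ/2, which is the setting in which the numerical-error behavior above is analyzed.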
Simple spatial scaling rules behind complex cities.
Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2017-11-28
Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit within a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
Immune Algorithm Complex Method for Transducer Calibration
Directory of Open Access Journals (Sweden)
YU Jiangming
2014-08-01
As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune-algorithm complex modeling approach is proposed, and simulation studies on the calibration of three multiple-output transducers are carried out using the developed complex modeling. The simulated and experimental results show that the immune-algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.
Stationarity of resonant pole trajectories in complex scaling
International Nuclear Information System (INIS)
Canuto, S.; Goscinski, O.
1978-01-01
A reciprocity theorem relating the real parameters η and α that define the complex scaling transformation r → ηre^{iα} in the theory of complex scaling for resonant states is demonstrated. The virial theorem is used in connection with the stationarity of the pole trajectory. The Stark broadening in the hydrogen atom, using a basis set generated by Rayleigh-Schroedinger perturbation theory, is treated as an example. 18 references
Complex scaling behavior in animal foraging patterns
Premachandra, Prabhavi Kaushalya
This dissertation attempts to answer questions from two different areas of biology, ecology and neuroscience, using physics-based techniques. In Section 2, suitability of three competing random walk models is tested to describe the emergent movement patterns of two species of primates. The truncated power law (power law with exponential cut off) is the most suitable random walk model that characterizes the emergent movement patterns of these primates. In Section 3, an agent-based model is used to simulate search behavior in different environments (landscapes) to investigate the impact of the resource landscape on the optimal foraging movement patterns of deterministic foragers. It should be noted that this model goes beyond previous work in that it includes parameters such as spatial memory and satiation, which have received little consideration to date in the field of movement ecology. When the food availability is scarce in a tropical forest-like environment with feeding trees distributed in a clumped fashion and the size of those trees are distributed according to a lognormal distribution, the optimal foraging pattern of a generalist who can consume various and abundant food types indeed reaches the Levy range, and hence, show evidence for Levy-flight-like (power law distribution with exponent between 1 and 3) behavior. Section 4 of the dissertation presents an investigation of phase transition behavior in a network of locally coupled self-sustained oscillators as the system passes through various bursting states. The results suggest that a phase transition does not occur for this locally coupled neuronal network. The data analysis in the dissertation adopts a model selection approach and relies on methods based on information theory and maximum likelihood.
Complex Formation Control of Large-Scale Intelligent Autonomous Vehicles
Directory of Open Access Journals (Sweden)
Ming Lei
2012-01-01
Full Text Available A new formation framework for large-scale intelligent autonomous vehicles is developed, which can realize complex formations while reducing data exchange. Using the proposed hierarchical formation method and the automatic dividing algorithm, vehicles are automatically divided into leaders and followers by exchanging information via a wireless network at the initial time. Leaders then form the formation's geometric shape using global formation information, while followers track their own virtual leaders to form line formations using local information. The formation control laws for leaders and followers are designed based on consensus algorithms. Moreover, collision-avoidance problems are considered and solved using artificial potential functions. Finally, a simulation example consisting of 25 vehicles shows the effectiveness of the theory.
Multiple time scale methods in tokamak magnetohydrodynamics
International Nuclear Information System (INIS)
Jardin, S.C.
1984-01-01
Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/(2μ₀), which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.
A New Class of Scaling Correction Methods
International Nuclear Information System (INIS)
Mei Li-Jie; Wu Xin; Liu Fu-Yao
2012-01-01
When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from the hypersurface determined by its constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods in which scale factors act only on the related components of the integrated momenta. They can exactly preserve some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly. (general)
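As a hedged sketch of the velocity-scaling idea (not the authors' algorithm), one can rescale the integrated velocity after each step so that the energy first integral is preserved exactly. Here an explicit-Euler harmonic oscillator, whose energy otherwise grows without bound, serves as a toy problem:

```python
import math

def euler_step(x, v, dt):
    """One explicit-Euler step for the harmonic oscillator x'' = -x."""
    return x + dt * v, v - dt * x

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x

def integrate(n_steps, dt, correct):
    """Integrate from (x, v) = (1, 0); optionally rescale v after each step so
    that the energy first integral is preserved. Returns |E_final - E_0|."""
    x, v = 1.0, 0.0
    e0 = energy(x, v)
    for _ in range(n_steps):
        x, v = euler_step(x, v, dt)
        if correct:
            ke = 0.5 * v * v
            target = e0 - 0.5 * x * x      # kinetic energy consistent with E_0
            if ke > 1e-12 and target > 0.0:
                v *= math.sqrt(target / ke)  # scale factor acts on the velocity only
    return abs(energy(x, v) - e0)
```

Without correction the Euler energy grows by a factor (1 + dt²) per step; with the velocity scaling the drift stays at the level of a single skipped step near turning points.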
Tailoring Enterprise Systems Engineering Policy for Project Scale and Complexity
Cox, Renee I.; Thomas, L. Dale
2014-01-01
Space systems are characterized by varying degrees of scale and complexity. Accordingly, cost-effective implementation of systems engineering also varies depending on scale and complexity. Recognizing that systems engineering and integration happen everywhere and at all levels of a given system and that the life cycle is an integrated process necessary to mature a design, the National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center (MSFC) has developed a suite of customized implementation approaches based on project scale and complexity. While it may be argued that a top-level systems engineering process is common to and indeed desirable across an enterprise for all space systems, implementation of that top-level process and the associated products developed as a result differ from system to system. The implementation approaches used for developing a scientific instrument necessarily differ from those used for a space station.
Time Scale in Least Square Method
Directory of Open Access Journals (Sweden)
Özgür Yeniay
2014-01-01
Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time scale theory tries to build a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives: the delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. This situation amounts to taking the total of the vertical deviations between the observation values and the regression equations of the forward and backward jump operators, divided by two. We also estimated coefficients for the model using the ordinary least squares method. As a result, we made an introduction to the least squares method on time scales. We think that time scale theory offers a new vision for least squares, especially when the assumptions of linear regression are violated.
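The ordinary-least-squares baseline mentioned in the abstract has a closed form via the normal equations. A minimal sketch (the time-scale delta/nabla variants would replace the ordinary difference by forward/backward jump operators, which are not reproduced here):

```python
def ols(xs, ys):
    """Ordinary least squares for y = a + b * x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b
```

For exactly linear data the fit recovers the slope and intercept to machine precision, which makes it a convenient check for the jump-operator variants.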
Modeling complex work systems - method meets reality
van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert
1996-01-01
Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the
Large-scale computing techniques for complex system simulations
Dubitzky, Werner; Schott, Bernard
2012-01-01
Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and
Renormalization Scale-Fixing for Complex Scattering Amplitudes
Energy Technology Data Exchange (ETDEWEB)
Brodsky, Stanley J.; /SLAC; Llanes-Estrada, Felipe J.; /Madrid U.
2005-12-21
We show how to fix the renormalization scale for hard-scattering exclusive processes such as deeply virtual meson electroproduction by applying the BLM prescription to the imaginary part of the scattering amplitude and employing a fixed-t dispersion relation to obtain the scale-fixed real part. In this way we resolve the ambiguity in BLM renormalization scale-setting for complex scattering amplitudes. We illustrate this by computing the H generalized parton distribution at leading twist in an analytic quark-diquark model for the parton-proton scattering amplitude which can incorporate Regge exchange contributions characteristic of the deep inelastic structure functions.
Method Points: towards a metric for method complexity
Directory of Open Access Journals (Sweden)
Graham McLeod
1998-11-01
Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as to validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.
Methods for determination of extractable complex composition
International Nuclear Information System (INIS)
Sergievskij, V.V.
1984-01-01
Specific features and restrictions of the main methods for determining the composition of extractable complexes from distribution data (the methods of equilibrium shift, saturation, and mathematical models) are considered. Special attention is given to the solution of inverse problems with account taken of the hydration effect on the activity of organic phase components. Using the example of the systems lithium halides - isoamyl alcohol, thorium nitrate - n-hexyl alcohol, mineral acids - tri-n-butyl phosphate (TBP), and metal nitrates (uranium, lanthanides) - TBP, the results on determining the stoichiometry of extraction equilibria obtained by various methods are compared
An improved sampling method of complex network
Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing
2014-12-01
Sampling subnets is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method keeps the similarity between the sampled subnet and the original network with respect to degree distribution, connectivity rate and average shortest path. This method is applicable when prior knowledge about the degree distribution of the original network is insufficient.
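A generic snowball-expansion step can be written as follows. This is an illustrative sketch only; the Cohen-process details of RMSC are not reproduced here, and the function name and parameters are our own:

```python
import random

def snowball_sample(adj, n_seeds, rounds, rng=None):
    """Multiple-seed snowball sample: pick random seed nodes, then repeatedly
    add every neighbour of the current frontier. `adj` maps node -> iterable
    of neighbours."""
    rng = rng or random.Random(0)
    sampled = set(rng.sample(sorted(adj), n_seeds))
    frontier = set(sampled)
    for _ in range(rounds):
        frontier = {v for u in frontier for v in adj[u]} - sampled
        sampled |= frontier
    return sampled
```

On a connected graph, enough rounds from any seed recover the whole node set; stopping earlier yields the local subnet around the seeds.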
Open quantum maps from complex scaling of kicked scattering systems
Mertig, Normann; Shudo, Akira
2018-04-01
We derive open quantum maps from periodically kicked scattering systems and discuss the computation of their resonance spectra in terms of theoretically grounded methods, such as complex scaling and sufficiently weak absorbing potentials. In contrast, we also show that current implementations of open quantum maps, based on strong absorptive or even projective openings, fail to produce the resonance spectra of kicked scattering systems. This comparison pinpoints flaws in current implementations of open quantum maps, namely, the inability to separate resonance eigenvalues from the continuum as well as the presence of diffraction effects due to strong absorption. The reported deviations from the true resonance spectra appear, even if the openings do not affect the classical trapped set, and become appreciable for shorter-lived resonances, e.g., those associated with chaotic orbits. This makes the open quantum maps, which we derive in this paper, a valuable alternative for future explorations of quantum-chaotic scattering systems, for example, in the context of the fractal Weyl law. The results are illustrated for a quantum map model whose classical dynamics exhibits key features of ionization and a trapped set which is organized by a topological horseshoe.
Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration
Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.
2009-01-01
The vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid
Cut Based Method for Comparing Complex Networks.
Liu, Qun; Dong, Zhishan; Wang, En
2018-03-23
Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
Preface: Introductory Remarks: Linear Scaling Methods
Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.
2008-07-01
It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up
International Nuclear Information System (INIS)
McCurdy, C. William; Martín, Fernando
2004-01-01
B-spline methods are now well established as widely applicable tools for the evaluation of atomic and molecular continuum states. The mathematical technique of exterior complex scaling has been shown, in a variety of other implementations, to be a powerful method with which to solve atomic and molecular scattering problems, because it allows the correct imposition of continuum boundary conditions without their explicit analytic application. In this paper, an implementation of exterior complex scaling in B-splines is described that can bring the well-developed technology of B-splines to bear on new problems, including multiple ionization and breakup problems, in a straightforward way. The approach is demonstrated for examples involving the continuum motion of nuclei in diatomic molecules as well as electronic continua. For problems involving electrons, a method based on Poisson's equation is presented for computing two-electron integrals over B-splines under exterior complex scaling
Nonlinear dynamics of the complex multi-scale network
Makarov, Vladimir V.; Kirsanov, Daniil; Goremyko, Mikhail; Andreev, Andrey; Hramov, Alexander E.
2018-04-01
In this paper, we study the complex multi-scale network of nonlocally coupled oscillators for the appearance of chimera states. Chimera is a special state in which, in addition to the asynchronous cluster, there are also completely synchronous parts in the system. We show that the increase of nodes in subgroups leads to the destruction of the synchronous interaction within the common ring and to the narrowing of the chimera region.
Complex networks principles, methods and applications
Latora, Vito; Russo, Giovanni
2017-01-01
Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...
Cope's Rule and the Universal Scaling Law of Ornament Complexity.
Raia, Pasquale; Passaro, Federico; Carotenuto, Francesco; Maiorino, Leonardo; Piras, Paolo; Teresi, Luciano; Meiri, Shai; Itescu, Yuval; Novosolov, Maria; Baiano, Mattia Antonio; Martínez, Ricard; Fortelius, Mikael
2015-08-01
Luxuriant, bushy antlers, bizarre crests, and huge, twisting horns and tusks are conventionally understood as products of sexual selection. This view stems from both direct observation and from the empirical finding that the size of these structures grows faster than body size (i.e., ornament size shows positive allometry). We contend that the familiar evolutionary increase in the complexity of ornaments over time in many animal clades is decoupled from ornament size evolution. Increased body size comes with extended growth. Since growth scales to the quarter power of body size, we predicted that ornament complexity should scale according to the quarter power law as well, irrespective of the role of sexual selection in the evolution and function of the ornament. To test this hypothesis, we selected three clades (ammonites, deer, and ceratopsian dinosaurs) whose species bore ornaments that differ in terms of the importance of sexual selection to their evolution. We found that the exponent of the regression of ornament complexity to body size is the same for the three groups and is statistically indistinguishable from 0.25. We suggest that the evolution of ornament complexity is a by-product of Cope's rule. We argue that although sexual selection may control size in most ornaments, it does not influence their shape.
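The scaling test described in this abstract amounts to estimating the allometric exponent as the slope of a log-log regression of ornament complexity on body size. A minimal sketch (synthetic data, not the paper's dataset):

```python
import math

def allometric_exponent(body_sizes, complexities):
    """Slope of log(complexity) against log(body size), i.e. the exponent k
    in complexity ~ size^k."""
    xs = [math.log(b) for b in body_sizes]
    ys = [math.log(c) for c in complexities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

For noiseless data generated with exponent 0.25 the estimator returns exactly 0.25; on real clade data one would compare the fitted slope (with its confidence interval) against the quarter-power prediction.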
A Low Complexity Discrete Radiosity Method
Chatelier, Pierre Yves; Malgouyres, Rémy
2006-01-01
International audience; Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to use a discretization of a scene into voxels and perform some discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method may be as precise as a patch-based radiosity using hemicube computation for form factors, but it lowers the overall theoretical complexity to O(N log N) + O(N), where the O(N) is largel...
Methods for Large-Scale Nonlinear Optimization.
1980-05-01
STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright... A typical iteration can be partitioned so that where B is an m X m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library
Temperature scaling method for Markov chains.
Crosby, Lonnie D; Windus, Theresa L
2009-01-22
The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
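The core of such temperature scaling is Boltzmann reweighting: configurations sampled at inverse temperature beta1 are reassigned weights exp(-(beta2 - beta1) E) to estimate averages at beta2. A minimal sketch (illustrative only; the paper's TeS procedure for full Markov chains is not reproduced here):

```python
import math

def reweight(energies, values, beta1, beta2):
    """Estimate <A> at inverse temperature beta2 from states sampled at beta1,
    using Boltzmann reweighting w_i = exp(-(beta2 - beta1) * E_i)."""
    logw = [-(beta2 - beta1) * e for e in energies]
    m = max(logw)                       # shift log-weights for numerical stability
    w = [math.exp(lw - m) for lw in logw]
    return sum(a * wi for a, wi in zip(values, w)) / sum(w)
```

For a two-level system with E in {0, 1} sampled uniformly (beta1 = 0), reweighting to beta2 reproduces the exact thermal average e^(-beta2)/(1 + e^(-beta2)); in practice the method degrades when beta2 is far from beta1 because the effective sample size collapses.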
Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications
Directory of Open Access Journals (Sweden)
Kun Qian
2014-01-01
Full Text Available Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming, without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antennas are employed in the system. This is particularly useful in antenna selection for large-scale MIMO communication systems.
Large Scale Emerging Properties from Non Hamiltonian Complex Systems
Directory of Open Access Journals (Sweden)
Marco Bianucci
2017-06-01
Full Text Available The concept of "large scale" depends, obviously, on the phenomenon we are interested in. For example, in the field of the foundation of thermodynamics from microscopic dynamics, the large spatial and time scales are of the order of fractions of millimetres and microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems, the scales of interest are of the order of thousands of kilometres for space and many years for time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all these cases a Zwanzig projection approach is, at least in principle, an effective tool to obtain a class of universal smooth "large scale" dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundation-of-thermodynamics problems. However, in geophysical fluid dynamics, biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).
Scale effect in fatigue resistance under complex stressed state
International Nuclear Information System (INIS)
Sosnovskij, L.A.
1979-01-01
On the basis of the statistical theory of fatigue failure, a formula is obtained for the calculated estimation of the probability of failure under a complex stressed state from the partial probabilities of failure under a linear stressed state, with allowance for the scale effect. A formula for the calculation of the equivalent stress is also obtained. Verification of both formulae against published experimental data for torsion under a plane stressed state has shown that the error of the estimates does not exceed 10% for materials with ultimate strength ranging from 61 to 124 kg/mm²
Self-similarity and scaling theory of complex networks
Song, Chaoming
Scale-free networks have been studied extensively due to their relevance to many real systems as diverse as the World Wide Web (WWW), the Internet, biological and social networks. We present a novel approach to the analysis of scale-free networks, revealing that their structure is self-similar. This result is achieved by the application of a renormalization procedure which coarse-grains the system into boxes containing nodes within a given "size". Concurrently, we identify a power-law relation between the number of boxes needed to cover the network and the size of the box, defining a self-similar exponent, which classifies fractal and non-fractal networks. By using the concept of renormalization as a mechanism for the growth of fractal and non-fractal modular networks, we show that the key principle that gives rise to the fractal architecture of networks is a strong effective "repulsion" between the most connected nodes (hubs) on all length scales, rendering them very dispersed. We show that a robust network comprised of functional modules, such as a cellular network, necessitates a fractal topology, suggestive of an evolutionary drive for their existence. These fundamental properties help to understand the emergence of the scale-free property in complex networks.
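The renormalization procedure described here is a box covering: count the minimum number of boxes of a given radius needed to cover the network, and read the self-similar exponent off the power law N_B(l_B) ∝ l_B^(-d_B). A greedy sketch (one common heuristic, not the authors' exact algorithm):

```python
from collections import deque

def ball(adj, src, radius, allowed):
    """Nodes of `allowed` within `radius` hops of `src` (breadth-first search)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        u, d = queue.popleft()
        if d == radius:
            continue
        for v in adj[u]:
            if v in allowed and v not in seen:
                seen.add(v)
                queue.append((v, d + 1))
    return seen

def box_count(adj, radius):
    """Greedy box covering: number of radius-`radius` boxes needed to cover the
    whole network; N_B(r) versus r gives the self-similar (fractal) exponent."""
    uncovered, boxes = set(adj), 0
    while uncovered:
        centre = min(uncovered)        # deterministic choice of box centre
        uncovered -= ball(adj, centre, radius, uncovered)
        boxes += 1
    return boxes
```

Repeating the count for several radii and fitting the slope of log N_B against log radius estimates the box dimension; greedy covering only approximates the true minimum, which is why randomized restarts are common in practice.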
Gamma Ray Tomographic Scan Method for Large Scale Industrial Plants
International Nuclear Information System (INIS)
Moon, Jin Ho; Jung, Sung Hee; Kim, Jong Bum; Park, Jang Geun
2011-01-01
Gamma-ray tomography systems have been used to investigate chemical processes for the last decade. There have been many cases of gamma-ray tomography at laboratory scale, but not many at industrial scale. Non-tomographic equipment with gamma-ray sources is often used in process diagnosis: gamma radiography, gamma column scanning and the radioisotope tracer technique are examples of gamma-ray applications in industry. Despite the many outdoor non-tomographic instruments, most gamma-ray tomographic systems have remained indoor equipment. But as gamma tomography has developed, the demand for gamma tomography of real-scale plants has also increased. To develop an industrial-scale system, we introduced a gamma-ray tomographic system with fixed detectors and a rotating source. The general system configuration is similar to 4th-generation geometry, but the main effort has been made to actualize instant installation of the system at a real-scale industrial plant. This work is a first attempt to apply 4th-generation industrial gamma tomographic scanning by an experimental method. Individual 0.5-inch NaI detectors were used for gamma-ray detection, configured in a circular shape around the industrial plant. This tomographic scan method can reduce mechanical complexity and requires a much smaller space than a conventional CT. Those properties make it easy to obtain measurement data for a real-scale plant.
A new large-scale manufacturing platform for complex biopharmaceuticals.
Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer
2012-12-01
Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such, sometimes inherently unstable, molecules it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, facility and equipment utilization, we have developed, scaled-up and successfully implemented a new integrated manufacturing platform in commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.
Comparison of MODIS and SWAT evapotranspiration over a complex terrain at different spatial scales
Abiodun, Olanrewaju O.; Guan, Huade; Post, Vincent E. A.; Batelaan, Okke
2018-05-01
In most hydrological systems, evapotranspiration (ET) and precipitation are the largest components of the water balance, which are difficult to estimate, particularly over complex terrain. In recent decades, the advent of remotely sensed data based ET algorithms and distributed hydrological models has provided improved spatially upscaled ET estimates. However, information on the performance of these methods at various spatial scales is limited. This study compares the ET from the MODIS remotely sensed ET dataset (MOD16) with the ET estimates from a SWAT hydrological model on graduated spatial scales for the complex terrain of the Sixth Creek Catchment of the Western Mount Lofty Ranges, South Australia. ET from both models was further compared with the coarser-resolution AWRA-L model at catchment scale. The SWAT model analyses are performed on daily timescales with a 6-year calibration period (2000-2005) and 7-year validation period (2007-2013). Differences in ET estimation between the SWAT and MOD16 methods of up to 31, 19, 15, 11 and 9 % were observed at respectively 1, 4, 9, 16 and 25 km2 spatial resolutions. Based on the results of the study, a spatial scale of confidence of 4 km2 for catchment-scale evapotranspiration is suggested in complex terrain. Land cover differences, HRU parameterisation in AWRA-L and catchment-scale averaging of input climate data in the SWAT semi-distributed model were identified as the principal sources of weaker correlations at higher spatial resolution.
Atmospheric dispersion modelling over complex terrain at small scale
Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.
2014-03-01
A previous study concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and important surrounding topography at meso-scale (1:9000) revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion, as well as the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality in the populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be observed.
Polarized atomic orbitals for linear scaling methods
Berghold, Gerd; Parrinello, Michele; Hutter, Jürg
2002-02-01
We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
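The conditioning claim can be illustrated with a toy overlap matrix S_ij = ⟨φ_i|φ_j⟩: two nearly parallel extended-basis functions give a nearly singular overlap, while a contracted, PAO-like minimal pair stays well conditioned. The 2x2 matrices below are invented for illustration and are not the paper's basis sets.

```python
def cond_sym_2x2(m):
    """Condition number of a symmetric positive-definite 2x2 matrix
    from its closed-form eigenvalues."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    tr, det = a + c, a * c - b * b
    disc = (tr * tr / 4 - det) ** 0.5
    lam_max, lam_min = tr / 2 + disc, tr / 2 - disc
    return lam_max / lam_min

extended = [[1.0, 0.999], [0.999, 1.0]]  # nearly linearly dependent functions
minimal = [[1.0, 0.2], [0.2, 1.0]]       # well-separated contracted functions

print(cond_sym_2x2(extended), cond_sym_2x2(minimal))  # ~1999 vs 1.5
```

Iterative solvers such as conjugate gradient converge at a rate governed by this condition number, which is why keeping it independent of the underlying extended basis matters for the linear-scaling schemes above.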
Measurement methods on the complexity of network
Institute of Scientific and Technical Information of China (English)
LIN Lin; DING Gang; CHEN Guo-song
2010-01-01
Based on the size of a network and the number of paths in it, we propose a model of topology complexity to measure the topology complexity of the network. Based on analyses of the effects of the number of pieces of equipment, the types of equipment and the processing time of each node on the complexity of an equipment-constrained network, a complexity model of equipment-constrained networks was constructed to measure the integrated complexity of such networks. Algorithms for the two models were also developed. An automatic generator of random single-label networks was developed to test the models. The results show that the models can correctly evaluate the topology complexity and the integrated complexity of the networks.
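The record does not give the model's actual formulas; as an illustrative sketch under assumed definitions, a topology-complexity index can combine network size with the number of simple paths between node pairs (the normalization and example graphs below are hypothetical):

```python
from itertools import combinations

def count_simple_paths(adj, src, dst, visited=None):
    """Count simple (cycle-free) paths from src to dst by depth-first search."""
    if visited is None:
        visited = {src}
    if src == dst:
        return 1
    return sum(count_simple_paths(adj, nxt, dst, visited | {nxt})
               for nxt in adj.get(src, ()) if nxt not in visited)

def topology_complexity(adj):
    """Toy index: total simple paths over all node pairs, normalized by size."""
    nodes = list(adj)
    paths = sum(count_simple_paths(adj, a, b) for a, b in combinations(nodes, 2))
    return paths / len(nodes) if nodes else 0.0

# A 4-node diamond has redundant routes; a 4-node chain does not.
diamond = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}

print(topology_complexity(diamond), topology_complexity(chain))  # 3.0 vs 1.5
```

The diamond scores higher because alternative routes add paths; an equipment-constrained extension in the spirit of the abstract would further weight each node by its equipment type and processing time.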
Matters of Scale: Sociology in and for a Complex World.
Pyyhtinen, Olli
2017-08-01
The article proposes that if sociology is to make sense of a world that is ever more complex and complicated, it is important to reconsider the scale(s) of our relations and actions. Instead of assuming a nested vertical hierarchy of the micro-macro binary, scale should be treated not only as multiple, but also as something produced and sustained in practice. Coming to grips with the complex world we are living in also necessitates attending to the conduits and connections between the various sites, fields, and terrains in which our lives are entangled. The article concludes with a note on the marginalization of sociology from public discussions, and it argues that it is possibly by attending to ambiguity and to the unfinished making of our contemporary world that sociology might have the most to give to discussions about the economy, about the future of humanity, and about how to organize society. © 2017 Canadian Sociological Association/La Société canadienne de sociologie.
Research on image complexity evaluation method based on color information
Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo
2017-11-01
In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low complexity, medium complexity and high complexity. Image features are then extracted, and finally a function is established between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from the image features, and that the results agree well with the complexity perceived by human vision, so the color-based image complexity measure has a certain reference value.
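The paper's actual color characteristic model is not reproduced in the abstract; a minimal sketch, assuming Shannon entropy of a quantized color histogram as a stand-in complexity metric, could look like:

```python
import math
from collections import Counter

def color_complexity(pixels, levels=4):
    """Shannon entropy (bits) of the quantized RGB histogram of a pixel list."""
    q = 256 // levels
    hist = Counter((r // q, g // q, b // q) for r, g, b in pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

flat = [(10, 10, 10)] * 100                                     # uniform patch
varied = [(i % 256, (i * 37) % 256, (i * 91) % 256) for i in range(100)]

print(color_complexity(flat), color_complexity(varied))
```

A uniform patch scores zero entropy (low complexity) while a varied patch scores higher; thresholds on such a value could then map to the low/medium/high classes mentioned above.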
Time-dependent approach to collisional ionization using exterior complex scaling
International Nuclear Information System (INIS)
McCurdy, C. William; Horner, Daniel A.; Rescigno, Thomas N.
2002-01-01
We present a time-dependent formulation of the exterior complex scaling method that has previously been used to treat electron-impact ionization of the hydrogen atom accurately at low energies. The time-dependent approach solves a driven Schroedinger equation, and scales more favorably with the number of electrons than the original formulation. The method is demonstrated in calculations for breakup processes in two dimensions (2D) and three dimensions for systems involving short-range potentials and in 2D for electron-impact ionization in the Temkin-Poet model for electron-hydrogen atom collisions
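For reference, the exterior complex scaling transformation underlying the method rotates the radial coordinate into the complex plane only beyond a chosen radius $R_0$ (this is the standard form of the transform; $\theta$ is the scaling angle):

$$
r \;\mapsto\;
\begin{cases}
r, & r < R_0,\\[2pt]
R_0 + (r - R_0)\,e^{i\theta}, & r \ge R_0,
\end{cases}
$$

which leaves the physical interaction region unscaled while turning purely outgoing waves into exponentially decaying ones, so that breakup boundary conditions can be imposed on a finite grid.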
Geophysical mapping of complex glaciogenic large-scale structures
DEFF Research Database (Denmark)
Høyer, Anne-Sophie
2013-01-01
This thesis presents the main results of a four year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km2...... data types and co-interpret them in order to improve our geological understanding. However, in order to perform this successfully, methodological considerations are necessary. For instance, a structure indicated by a reflection in the seismic data is not always apparent in the resistivity data...... information) can be collected. The geophysical data are used together with geological analyses from boreholes and pits to interpret the geological history of the hill-island. The geophysical data reveal that the glaciotectonic structures truncate at the surface. The directions of the structures were mapped...
Approaching complexity by stochastic methods: From biological systems to turbulence
Energy Technology Data Exchange (ETDEWEB)
Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)
2011-09-15
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
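The first issue above, reconstructing a stochastic evolution equation from data, can be illustrated with a minimal sketch: simulate an Ornstein-Uhlenbeck process and recover its drift by conditional averaging of increments (the first Kramers-Moyal coefficient). The process parameters, bin width and sample size are arbitrary illustrative choices.

```python
import random

random.seed(42)
gamma, D, dt, n = 1.0, 0.5, 0.01, 200_000

# Euler-Maruyama simulation of dx = -gamma*x dt + sqrt(2D) dW
x, traj = 0.0, []
for _ in range(n):
    traj.append(x)
    x += -gamma * x * dt + (2 * D * dt) ** 0.5 * random.gauss(0, 1)

def drift_estimate(traj, dt, x0, width=0.1):
    """Kramers-Moyal D1(x0): mean increment per unit time, conditioned on
    the process being near x0."""
    incs = [b - a for a, b in zip(traj, traj[1:]) if abs(a - x0) < width]
    return sum(incs) / (len(incs) * dt)

print(drift_estimate(traj, dt, 0.5))   # expected near -gamma * 0.5 = -0.5
```

The same conditional-moment construction with the squared increment yields the diffusion coefficient D2, and applying it to scale-dependent increments (e.g. turbulent velocity increments) gives the Fokker-Planck description in scale discussed in the review.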
Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork
Directory of Open Access Journals (Sweden)
Yang Xu
2014-01-01
Full Text Available Large-scale multiagent teamwork has become popular in various domains. Similar to human society's infrastructure, agents coordinate with only some of the others, in a peer-to-peer complex network structure. Their organization has been proven to be a key factor influencing their performance. We have identified three key factors that affect team performance. First, complex network effects may promote team performance. Second, coordination interactions are routed from their sources toward capable agents; although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team, which is independent of the network connections. Third, the agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents can be statistically recorded, an integrated network adjustment algorithm can be set up by combining the three key factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results support our design: the reorganized network is more capable of coordinating heterogeneous agents.
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Multilevel method for modeling large-scale networks.
Energy Technology Data Exchange (ETDEWEB)
Safro, I. M. (Mathematics and Computer Science)
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating a network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data, that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
Linear-scaling quantum mechanical methods for excited states.
Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua
2012-05-21
The poor scaling of many existing quantum mechanical methods with respect to the system size hinders their applications to large systems. In this tutorial review, we focus on latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states are comprised of two categories, the time-domain and frequency-domain methods. The former solves the dynamics of the electronic systems in real time while the latter involves direct evaluation of electronic response in the frequency-domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in time- and frequency-domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using the non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of the convergence problem. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used, however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and
Global-Scale Hydrology: Simple Characterization of Complex Simulation
Koster, Randal D.
1999-01-01
Atmospheric general circulation models (AGCMs) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points at the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.
Complex operator method of the hydrogen atom
International Nuclear Information System (INIS)
Jiang, X.
1989-01-01
Frequently the hydrogen atom eigenvalue problem is analytically solved by solving a radial wave equation for a particle in a Coulomb field. In this article, complex coordinates are introduced, and an expression for the energy levels of the hydrogen atom is obtained by means of the algebraic solution of operators. The form of this solution is in accord with that of the analytical solution
Rising Trend: Complex and sophisticated attack methods
Indian Academy of Sciences (India)
Stux, DuQu, Nitro, Luckycat, Exploit Kits, FLAME. ADSL/SoHo Router Compromise. Botnets of compromised ADSL/SoHo Routers; User Redirection via malicious DNS entry. Web Application attacks. SQL Injection, RFI etc. More and more Webshells. More utility to hackers; Increasing complexity and evading mechanisms.
Rising Trend: Complex and sophisticated attack methods
Indian Academy of Sciences (India)
Increased frequency and intensity of DoS/DDoS. Few Gbps is now normal; Anonymous VPNs being used; Botnets being used as a vehicle for launching DDoS attacks. Large scale booking of domain names. Hundred thousands of domains registered in short duration via few registrars; Single registrant; Most of the domains ...
Complex networks with scale-free nature and hierarchical modularity
Shekatkar, Snehal M.; Ambika, G.
2015-09-01
Generative mechanisms which lead to the empirically observed structure of networked systems from diverse fields like biology, technology and social sciences form a very important part of the study of complex networks. The structure of many networked systems like the biological cell, human society and the World Wide Web markedly deviates from that of completely random networks, indicating the presence of underlying processes. Often the main process involved in their evolution is the addition of links between existing nodes having a common neighbor. In this context we introduce an important property of the nodes, which we call mediating capacity, that is generic to many networks. This capacity decreases rapidly with increasing degree, making hubs weak mediators of the process. We show that this property of nodes provides an explanation for the simultaneous occurrence of the observed scale-free structure and hierarchical modularity in many networked systems. This also explains the high clustering and small path lengths seen in real networks, as well as non-zero degree correlations. Our study also provides insight into the local process which ultimately leads to the emergence of preferential attachment and hence is also important in understanding robustness and control of real networks, as well as processes happening on real networks.
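A minimal sketch of the mediating-capacity idea, with an assumed power-law decay of capacity with degree (the exponent, the closure rule and the network size below are illustrative choices, not the paper's exact model):

```python
import random

random.seed(7)

def mediating_capacity(degree, alpha=1.5):
    """Assumed form: capacity decays with degree, so hubs are weak mediators."""
    return degree ** -alpha

# Grow a network by triadic closure: a mediator node m introduces the newcomer
# to one of m's other neighbors with probability given by m's capacity.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # seed triangle
for new in range(3, 60):
    adj[new] = set()
    anchor = random.choice([n for n in adj if n != new])
    adj[new].add(anchor); adj[anchor].add(new)
    for _ in range(3):                     # a few closure attempts per newcomer
        m = random.choice(list(adj[new]))
        others = list(adj[m] - adj[new] - {new})
        if others and random.random() < mediating_capacity(len(adj[m])):
            o = random.choice(others)
            adj[new].add(o); adj[o].add(new)

print(len(adj), max(len(v) for v in adj.values()))
```

Because high-degree mediators rarely close triangles, new links cluster around low-degree neighborhoods, which is the qualitative route to modular, clustered structure described above.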
Directory of Open Access Journals (Sweden)
Aliyeh Kazemi
2016-09-01
Full Text Available Construction projects have always been complex. With the growing trend of this complexity, implementation of large-scale construction becomes harder. Hence, evaluating and understanding these complexities is critical. Correct evaluation of a project's complexity can provide executives and managers with a valuable resource. Fuzzy analytic network process (ANP) is a logical and systematic approach toward definition, evaluation, and grading. This method allows for analyzing complex systems and determining their complexity. In this study, using fuzzy ANP, effective indexes of complexity in large-scale construction projects in Iran have been determined and prioritized. The results show that the socio-political, project system interdependency, and technological complexity indexes rank top three. Furthermore, in a comparison of three major large-scale projects: a commercial-administrative complex, a hospital, and a skyscraper, the hospital project was evaluated as the most complex. This model is beneficial for professionals managing large-scale projects.
Complex modular structure of large-scale brain networks
Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.
2009-06-01
Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.
Hybrid recommendation methods in complex networks.
Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N
2015-07-01
We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement of performances of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
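The convex-combination idea can be sketched on a tiny binary user-object network. The similarity measure (plain cosine) and the mixing weight lam below are illustrative stand-ins for the paper's appropriately normalized measures:

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
    return num / den if den else 0.0

# Binary user-object bipartite network as a matrix (rows: users, cols: objects)
R = [[1, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 1, 1]]

def score(u, obj, lam=0.5):
    """Convex combination of user-based and object-based recommendation scores."""
    user_part = sum(cosine(R[u], R[v]) * R[v][obj]
                    for v in range(len(R)) if v != u)
    cols = list(zip(*R))
    obj_part = sum(cosine(cols[obj], cols[o]) * R[u][o]
                   for o in range(len(cols)) if o != obj)
    return lam * user_part + (1 - lam) * obj_part

print(score(0, 2), score(0, 3))  # object 2 is recommended over object 3 to user 0
```

With lam = 0.5 the score blends user-based and object-based evidence equally; in the paper's setting the weight would be tuned per data set, which is exactly the "careful choice of method" point made above.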
A computational approach to modeling cellular-scale blood flow in complex geometry
Balogh, Peter; Bagchi, Prosenjit
2017-04-01
We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.
Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems
Directory of Open Access Journals (Sweden)
Hassan Saberi Nik
2014-01-01
We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel-type relaxation ideas to systems of nonlinear differential equations and using Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta-based ode45 solver to show that the MSRM gives accurate results.
Hagbani, Turki Al; Nazzal, Sami
2017-03-30
One approach to enhance curcumin (CUR) aqueous solubility is to use cyclodextrins (CDs) to form inclusion complexes where CUR is encapsulated as a guest molecule within the internal cavity of the water-soluble CD. Several methods have been reported for the complexation of CUR with CDs. Limited information, however, is available on the use of the autoclave process (AU) in complex formation. The aims of this work were therefore to (1) investigate and evaluate the AU cycle as a complex formation method to enhance CUR solubility; (2) compare the efficacy of the AU process with the freeze-drying (FD) and evaporation (EV) processes in complex formation; and (3) confirm CUR stability by characterizing CUR:CD complexes by NMR, Raman spectroscopy, DSC, and XRD. Significant differences were found in the saturation solubility of CUR from its complexes with CD when prepared by the three complexation methods. The AU yielded a complex with expected chemical and physical fingerprints for a CUR:CD inclusion complex that maintained the chemical integrity and stability of CUR and provided the highest solubility of CUR in water. Physical and chemical characterizations of the AU complexes confirmed the encapsulation of CUR inside the CD cavity and the transformation of the crystalline CUR:CD inclusion complex to an amorphous form. It was concluded that the autoclave process with its short processing time could be used as an alternative and efficient method for drug:CD complexation. Copyright © 2017 Elsevier B.V. All rights reserved.
Early Language Learning: Complexity and Mixed Methods
Enever, Janet, Ed.; Lindgren, Eva, Ed.
2017-01-01
This is the first collection of research studies to explore the potential for mixed methods to shed light on foreign or second language learning by young learners in instructed contexts. It brings together recent studies undertaken in Cameroon, China, Croatia, Ethiopia, France, Germany, Italy, Kenya, Mexico, Slovenia, Spain, Sweden, Tanzania and…
Solving the three-body Coulomb breakup problem using exterior complex scaling
Energy Technology Data Exchange (ETDEWEB)
McCurdy, C.W.; Baertschy, M.; Rescigno, T.N.
2004-05-17
Electron-impact ionization of the hydrogen atom is the prototypical three-body Coulomb breakup problem in quantum mechanics. The combination of subtle correlation effects and the difficult boundary conditions required to describe two electrons in the continuum has made this one of the outstanding challenges of atomic physics. A complete solution of this problem in the form of a "reduction to computation" of all aspects of the physics is given by the application of exterior complex scaling, a modern variant of the mathematical tool of analytic continuation of the electronic coordinates into the complex plane that was used historically to establish the formal analytic properties of the scattering matrix. This review first discusses the essential difficulties of the three-body Coulomb breakup problem in quantum mechanics. It then describes the formal basis of exterior complex scaling of electronic coordinates as well as the details of its numerical implementation using a variety of methods including finite difference, finite elements, discrete variable representations, and B-splines. Given these numerical implementations of exterior complex scaling, the scattering wave function can be generated with arbitrary accuracy on any finite volume in the space of electronic coordinates, but there remains the fundamental problem of extracting the breakup amplitudes from it. Methods are described for evaluating these amplitudes. The question of the volume-dependent overall phase that appears in the formal theory of ionization is resolved. A summary is presented of accurate results that have been obtained for the case of electron-impact ionization of hydrogen as well as a discussion of applications to the double photoionization of helium.
Interplay between multiple length and time scales in complex ...
Indian Academy of Sciences (India)
Processes in complex chemical systems, such as macromolecules, electrolytes, interfaces, ... by processes operating on a multiplicity of length ... real time. The design and interpretation of femtosecond experiments has required considerable ...
Nonlinear Phenomena in Complex Systems: From Nano to Macro Scale
Stanley, H
2014-01-01
Topics of complex system physics and their interdisciplinary applications to different problems in seismology, biology, economy, sociology, energy and nanotechnology are covered in this new work from renowned experts in their fields. In particular, contributed papers contain original results on network science, earthquake dynamics, econophysics, sociophysics, nanoscience and biological physics. Most of the papers use interdisciplinary approaches based on statistical physics, quantum physics and other topics of complex system physics. Papers on econophysics and sociophysics are focussed on societal aspects of physics such as opinion dynamics, public debates, and financial and economic stability. This work will be of interest to statistical physicists, economists, biologists, seismologists and all scientists working in interdisciplinary topics of complexity.
Electron-helium scattering in the S-wave model using exterior complex scaling
International Nuclear Information System (INIS)
Horner, Daniel A.; McCurdy, C. William; Rescigno, Thomas N.
2004-01-01
Electron-impact excitation and ionization of helium is studied in the S-wave model. The problem is treated in full dimensionality using a time-dependent formulation of the exterior complex scaling method that does not involve the solution of large linear systems of equations. We discuss the steps that must be taken to compute stable ionization amplitudes. We present total excitation, total ionization and single differential cross sections from the ground and n=2 excited states and compare our results with those obtained by others using a frozen-core model
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
Energy Technology Data Exchange (ETDEWEB)
Ghattas, Omar [The University of Texas at Austin
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
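The optimization-accelerated sampling idea above is easiest to see in the linear-Gaussian setting, where the "randomized maximum likelihood" strategy (solve a perturbed regularized least-squares problem per draw) yields exact posterior samples. The sketch below is our own illustration under that assumption, not the project's code; `rml_sample` and the toy matrices are hypothetical.

```python
import numpy as np

def rml_sample(A, y, sigma, rng):
    # Randomized maximum likelihood: each draw solves a perturbed,
    # regularized least-squares problem. For a linear forward map A with
    # unit Gaussian prior and N(0, sigma^2) noise this is an exact
    # posterior sample.
    m, n = A.shape
    eps = sigma * rng.standard_normal(m)   # perturb the data
    xi = rng.standard_normal(n)            # perturb the (zero) prior mean
    # minimize ||A x - (y + eps)||^2 / sigma^2 + ||x - xi||^2
    H = A.T @ A / sigma**2 + np.eye(n)     # Hessian of the objective
    b = A.T @ (y + eps) / sigma**2 + xi
    return np.linalg.solve(H, b)
```

The point of the approach is that each draw is an optimization problem, so the machinery of large-scale PDE-constrained optimization (adjoint gradients, Hessian approximations) carries over directly to sampling.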
Scaling a nuclear power plant as a complex system
International Nuclear Information System (INIS)
Zuber, N.
2005-01-01
This lecture summarizes and discusses the highlights of the fractional scaling analysis (FSA) and the benefits it may offer NPP technology. FSA is a quantitative methodology developed to: 1. scale time-dependent evolutionary processes involving an aggregate of interacting modules and processes (such as an NPP) and 2. integrate and organize information and data of interest to NPP design and safety analyses. The methodology is based upon three concepts: 1. fractional scaling, 2. hierarchical levels, 3. aggregate configuration. FSA is used to provide syntheses (at various hierarchical levels) and generate quantitative criteria for assessing the effects of various design and operating parameters on thermohydraulic processes in an NPP. The synthesis is carried out at three hierarchical levels: process, component and system. The methodology is illustrated by applying it to various problems at the three hierarchical levels. (author)
The Tunneling Method for Global Optimization in Multidimensional Scaling.
Groenen, Patrick J. F.; Heiser, Willem J.
1996-01-01
A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
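The alternation described above can be sketched schematically. This is not the authors' algorithm: raw STRESS, plain gradient descent for the local step, and a squared-deviation descent for the tunneling step (seeking a different configuration with the same STRESS value) are our illustrative simplifications, as are all step sizes.

```python
import numpy as np

def stress(X, D):
    # raw STRESS: half the sum of squared differences between target
    # dissimilarities D and fitted Euclidean distances
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return ((D - d) ** 2).sum() / 2

def stress_grad(X, D, eps=1e-9):
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)) + eps
    W = (d - D) / d
    np.fill_diagonal(W, 0.0)
    return 2 * (W.sum(1)[:, None] * X - W @ X)

def local_search(X, D, lr=0.01, iters=500):
    # local step: simple gradient descent on STRESS
    for _ in range(iters):
        X = X - lr * stress_grad(X, D)
    return X

def tunnel(X_star, D, s_star, scale=0.5, iters=300, lr=1e-3):
    # tunneling step: from a random perturbation of the local minimum,
    # descend on (stress - s*)^2 to land on a different configuration
    # with (approximately) the same STRESS value
    rng = np.random.default_rng(0)
    X = X_star + scale * rng.standard_normal(X_star.shape)
    for _ in range(iters):
        g = 2 * (stress(X, D) - s_star) * stress_grad(X, D)
        X = X - lr * g
    return X
```

Repeating the pair (local search, tunneling) lets the procedure escape a local STRESS minimum without ever accepting a worse configuration.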
Complexity, Methodology and Method: Crafting a Critical Process of Research
Alhadeff-Jones, Michel
2013-01-01
This paper defines a theoretical framework aiming to support the actions and reflections of researchers looking for a "method" in order to critically conceive the complexity of a scientific process of research. First, it starts with a brief overview of the core assumptions framing Morin's "paradigm of complexity" and Le…
Networks, complexity and internet regulation: scale-free law
Guadamuz, Andres
2013-01-01
This book, then, starts with a general statement: that regulators should try, wherever possible, to use the methodological tools presently available in order to draft better legislation. While such an assertion may be applied to the law in general, this work will concentrate on the much narrower area of Internet regulation and the science of complex networks. The Internet is the subject of this book not only because it is my main area of research, but also because –without...
Complex dewetting scenarios of ultrathin silicon films for large-scale nanoarchitectures.
Naffouti, Meher; Backofen, Rainer; Salvalaglio, Marco; Bottein, Thomas; Lodari, Mario; Voigt, Axel; David, Thomas; Benkouider, Abdelmalek; Fraj, Ibtissem; Favre, Luc; Ronda, Antoine; Berbezier, Isabelle; Grosso, David; Abbarchi, Marco; Bollani, Monica
2017-11-01
Dewetting is a ubiquitous phenomenon in nature; many different thin films of organic and inorganic substances (such as liquids, polymers, metals, and semiconductors) share this shape instability driven by surface tension and mass transport. Via templated solid-state dewetting, we frame complex nanoarchitectures of monocrystalline silicon on insulator with unprecedented precision and reproducibility over large scales. Phase-field simulations reveal the dominant role of surface diffusion as a driving force for dewetting and provide a predictive tool to further engineer this hybrid top-down/bottom-up self-assembly method. Our results demonstrate that patches of thin monocrystalline films of metals and semiconductors share the same dewetting dynamics. We also prove the potential of our method by fabricating nanotransfer molding of metal oxide xerogels on silicon and glass substrates. This method allows the novel possibility of transferring these Si-based patterns on different materials, which do not usually undergo dewetting, offering great potential also for microfluidic or sensing applications.
Modelling across bioreactor scales: methods, challenges and limitations
DEFF Research Database (Denmark)
Gernaey, Krist
Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial-scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial-scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated?
Directory of Open Access Journals (Sweden)
Jianhua Xu
2013-01-01
Based on the observed data from 51 meteorological stations during the period from 1958 to 2012 in Xinjiang, China, we investigated the complexity of temperature dynamics from the temporal and spatial perspectives by using a comprehensive approach including the correlation dimension (CD), classical statistics, and geostatistics. The main conclusions are as follows. (1) The integer CD values indicate that the temperature dynamics are a complex and chaotic system, which is sensitive to the initial conditions. (2) The complexity of temperature dynamics decreases along with the increase of temporal scale. To describe the temperature dynamics, at least 3 independent variables are needed at the daily scale, whereas at least 2 independent variables are needed at the monthly, seasonal, and annual scales. (3) The spatial patterns of CD values at different temporal scales indicate that the complex temperature dynamics are derived from the complex landform.
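The correlation dimension used above is conventionally estimated with the Grassberger-Procaccia procedure: delay-embed the series, then read off the slope of log C(r) against log r, where C(r) is the fraction of embedded point pairs closer than r. A minimal sketch (our own illustration; embedding parameters and radii choices are ours, not the paper's):

```python
import numpy as np

def correlation_dimension(x, m=3, tau=1, radii=None):
    # Grassberger-Procaccia estimate: slope of log C(r) vs log r
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    # delay embedding into m-dimensional vectors
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = np.sqrt(((emb[:, None] - emb[None, :]) ** 2).sum(-1))
    pd = d[np.triu_indices(n, 1)]          # pairwise distances
    if radii is None:
        radii = np.quantile(pd, [0.05, 0.1, 0.2, 0.4])
    C = np.array([(pd < r).mean() for r in radii])  # correlation integral
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope
```

For a series tracing a one-dimensional set the slope is close to 1; a chaotic attractor yields a non-integer value, and the minimum number of variables needed to describe the dynamics is the next integer above the estimate.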
Böbel, A.; Knapek, C. A.; Räth, C.
2018-05-01
Experiments of the recrystallization processes in two-dimensional complex plasmas are analyzed to rigorously test a recently developed scale-free phase transition theory. The "fractal-domain-structure" (FDS) theory is based on the kinetic theory of Frenkel. It assumes the formation of homogeneous domains, separated by defect lines, during crystallization and a fractal relationship between domain area and boundary length. For the defect number fraction and system energy a scale-free power-law relation is predicted. The long-range scaling behavior of the bond-order correlation function shows clearly that the complex plasma phase transitions are not of the Kosterlitz, Thouless, Halperin, Nelson, and Young type. Previous preliminary results obtained by counting the number of dislocations and applying a bond-order metric for structural analysis are reproduced. These findings are supplemented by extending the use of the bond-order metric to measure the defect number fraction and furthermore applying state-of-the-art analysis methods, allowing a systematic testing of the FDS theory with unprecedented scrutiny: A morphological analysis of lattice structure is performed via Minkowski tensor methods. Minkowski tensors form a complete family of additive, motion covariant and continuous morphological measures that are sensitive to nonlinear properties. The FDS theory is rigorously confirmed and predictions of the theory are reproduced extremely well. The predicted scale-free power-law relation between defect fraction number and system energy is verified for one more order of magnitude at high energies compared to the inherently discontinuous bond-order metric. It is found that the fractal relation between crystalline domain area and circumference is independent of the experiment, the particular Minkowski tensor method, and the particular choice of parameters. Thus, the fractal relationship seems to be inherent to two-dimensional phase transitions in complex plasmas. Minkowski
Complexity Analysis of Carbon Market Using the Modified Multi-Scale Entropy
Directory of Open Access Journals (Sweden)
Jiuli Yin
2018-06-01
Carbon markets provide a market-based way to reduce climate pollution. Subject to general market regulations, the major existing emission trading markets present complex characteristics. This paper analyzes the complexity of the carbon market by using multi-scale entropy, taking the pilot carbon markets in China as the example. A moving average is adopted to extract the scales due to the short length of the data set. Results show a low-level complexity, indicating that China's pilot carbon markets are quite immature and lack market efficiency. However, the complexity varies in different time scales: China's carbon markets (except for the Chongqing pilot) are more complex in the short period than in the long term. Furthermore, the complexity level in most pilot markets increases as the markets develop, showing an improvement in market efficiency. All these results demonstrate that an effective carbon market is required for the full function of emission trading.
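The modified multi-scale entropy described above replaces the usual non-overlapping coarse-graining with a moving average, which suits short series. A toy sketch (our own; the tolerance r = 0.2·std and the template-counting conventions are common defaults, not necessarily the paper's exact choices):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    # SampEn = -log(A / B): B counts template pairs of length m, A of
    # length m+1, matching under Chebyshev distance < r
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(-1)
        n = len(templ)
        return ((d < r).sum() - n) / 2   # exclude self-matches
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def modified_mse(x, scales, m=2):
    # modified MSE: moving-average coarse-graining at each scale,
    # then sample entropy of the smoothed series
    return [sample_entropy(np.convolve(x, np.ones(s) / s, mode="valid"), m)
            for s in scales]
```

Plotting the returned entropies against scale gives the complexity profile discussed in the abstract: an efficient (noise-like) market shows high entropy at short scales.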
Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale
International Nuclear Information System (INIS)
Daily, Jeffrey A.
2015-01-01
The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or 'homologous') on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores.
Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale
Energy Technology Data Exchange (ETDEWEB)
Daily, Jeffrey A. [Washington State Univ., Pullman, WA (United States)
2015-05-01
The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore’s law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment for large-scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores
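The "exact-matching filters" mentioned in the abstracts above can be illustrated with a shared-k-mer prefilter: only sequence pairs that share at least one exact k-mer are passed on to the expensive optimal alignment. This is a single-node toy sketch of the idea, not the dissertation's distributed implementation; `kmer_filter` and its parameters are our own.

```python
from collections import defaultdict
from itertools import combinations

def kmer_filter(seqs, k=3):
    # index every sequence by its k-mers, then emit only the pairs of
    # sequence ids that co-occur under at least one k-mer
    index = defaultdict(set)
    for i, s in enumerate(seqs):
        for j in range(len(s) - k + 1):
            index[s[j:j + k]].add(i)
    pairs = set()
    for ids in index.values():
        pairs.update(combinations(sorted(ids), 2))
    return sorted(pairs)
```

Because unrelated sequences rarely share long exact k-mers, such a filter can discard the vast majority of the quadratic number of candidate pairs before any alignment is computed.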
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
TopoSCALE v.1.0: downscaling gridded climate data in complex terrain
Fiddes, J.; Gruber, S.
2014-02-01
Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and the lack of accurate meteorological forcing data at the site or scale at which it is required. Gridded data products produced by atmospheric models can fill this gap; however, they are often not at an appropriate spatial resolution to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse and optionally disaggregated using a climatology approach. We test the method in comparison with unscaled grid-level data and a set of reference methods, against a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible, due to lack of
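The core pressure-level step above amounts to interpolating the atmospheric-column profile to the elevation of each subgrid cell. A minimal sketch of that single step (this is not the TopoSCALE code; the function name and sample numbers are ours):

```python
import numpy as np

def downscale_temperature(level_elev, level_temp, target_elev):
    # interpolate a pressure-level temperature profile (given at the
    # geopotential heights of the levels) to subgrid elevations;
    # np.interp requires ascending x coordinates, so sort first
    order = np.argsort(level_elev)
    return np.interp(target_elev,
                     np.asarray(level_elev, float)[order],
                     np.asarray(level_temp, float)[order])
```

Because the vertical profile comes from the model column rather than a fixed lapse rate, inversions and elevation-dependent moisture are represented for free, which is why the pressure-level variables show the largest gains in the evaluation.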
An efficient Korringa-Kohn-Rostoker method for 'complex' lattices
International Nuclear Information System (INIS)
Yussouff, M.; Zeller, R.
1980-10-01
We present a modification of the exact KKR-band structure method which uses (a) a new energy expansion for structure constants and (b) only the reciprocal lattice summation. It is quite efficient and particularly useful for 'complex' lattices. The band structure of hexagonal-close-packed Beryllium at symmetry points is presented as an example of this method. (author)
A direction of developing a mining method and mining complexes
Energy Technology Data Exchange (ETDEWEB)
Gabov, V.V.; Efimov, I.A. [St. Petersburg State Mining Institute, St. Petersburg (Russian Federation). Vorkuta Branch
1996-12-31
An analysis of the mining method as a main factor determining the development stages of mining units is presented. The paper suggests a perspective mining method which differs from the known ones by the following peculiarities: the directional selectivity of cuts with regard to coal seam structure; and the cutting speed, thickness and succession of cuts. This method may be implemented by modular complexes (a shield carrying a cutting head for coal mining), their mining devices being supplied with a hydraulic drive. An experimental model of the modular complex has been developed. 2 refs.
Large-scale synthesis of YSZ nanopowder by Pechini method
Indian Academy of Sciences (India)
...structure and chemical purity of 99.1% (by inductively coupled plasma optical emission spectroscopy) on a large scale. Keywords: sol-gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method.
a Range Based Method for Complex Facade Modeling
Adami, A.; Fregonese, L.; Taffurelli, L.
2011-09-01
the complex architecture. From the point cloud we can extract a false colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics. In fact the modifier Displacement allows one to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane and it represents the displacement of the corresponding element of the virtual plane. Similar to the bump map, the displacement modifier does not only simulate the effect, but really deforms the planar surface. In this way the 3d model can be used not only in a static representation, but also in dynamic animations or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimension of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3d raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in a city model or a large-scale representation. It can also be used to represent particular effects such as the deformation of walls in a fully 3d way.
A RANGE BASED METHOD FOR COMPLEX FACADE MODELING
Directory of Open Access Journals (Sweden)
A. Adami
2012-09-01
homogeneous point cloud of the complex architecture. From the point cloud we can extract a false colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics. In fact the modifier Displacement allows one to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane and it represents the displacement of the corresponding element of the virtual plane. Similar to the bump map, the displacement modifier does not only simulate the effect, but really deforms the planar surface. In this way the 3d model can be used not only in a static representation, but also in dynamic animations or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimension of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3d raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in a city model or a large-scale representation. It can also be used to represent particular effects such as the deformation of walls in a fully 3d way.
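The rasterization step described above (point-to-plane distances binned into a grayscale displacement map at a chosen pixel size) can be sketched as follows. This is our own toy illustration, not the authors' pipeline; it assumes the point cloud is already expressed in facade coordinates with z the signed distance from the average plane.

```python
import numpy as np

def displacement_map(points, pixel=0.05):
    # points: (N, 3) array; x, y span the facade plane, z is the signed
    # distance from the reference plane. Returns a uint8 grayscale map
    # where each pixel holds the mean displacement of its points.
    pts = np.asarray(points, float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    nx = int(np.ceil((x.max() - x.min()) / pixel)) + 1
    ny = int(np.ceil((y.max() - y.min()) / pixel)) + 1
    ix = ((x - x.min()) / pixel).astype(int)
    iy = ((y - y.min()) / pixel).astype(int)
    acc = np.zeros((ny, nx))
    cnt = np.zeros((ny, nx))
    np.add.at(acc, (iy, ix), z)   # unbuffered accumulation per pixel
    np.add.at(cnt, (iy, ix), 1)
    mean = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    lo, hi = mean.min(), mean.max()
    gray = np.zeros_like(mean) if hi == lo else (mean - lo) / (hi - lo) * 255
    return gray.astype(np.uint8)
```

The resulting image is exactly what a Displacement modifier consumes: gray value per quadrangular face, proportional to the offset from the reference plane.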
High-resolution method for evolving complex interface networks
Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-04-01
In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set method.
An Extended Newmark-FDTD Method for Complex Dispersive Media
Directory of Open Access Journals (Sweden)
Yu-Qiang Zhang
2018-01-01
Based on polarizability in the form of a complex quadratic rational function, a novel finite-difference time-domain (FDTD) approach combined with the Newmark algorithm is presented for dealing with complex dispersive media. In this paper, the time-stepping equation for the polarization vector is derived by applying the Newmark algorithm simultaneously to the two sides of a second-order time-domain differential equation, obtained by inverse Fourier transform from the frequency-domain relation between the polarization vector and the electric field intensity. Its accuracy and stability are then discussed from the two aspects of theoretical analysis and numerical computation. The method possesses high accuracy, high stability, and a wide application scope, and can thus be applied to many complex dispersion models, including the complex conjugate pole-residue model, critical point model, modified Lorentz model, and complex quadratic rational function.
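The Newmark time-stepping idea at the core of this approach can be illustrated on a scalar second-order equation; this is a generic Newmark-beta sketch with the unconditionally stable average-acceleration parameters, not the paper's dispersive-media update:

```python
import numpy as np

def newmark(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark-beta time stepping for m*u'' + c*u' + k*u = f(t).

    With beta=1/4, gamma=1/2 (average acceleration) the scheme is
    unconditionally stable -- the property a Newmark-FDTD hybrid exploits
    for stiff polarization updates.  The scalar oscillator here is an
    illustrative stand-in, not the dispersive-media model of the paper.
    """
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m          # consistent initial acceleration
    out = [u]
    for n in range(1, steps + 1):
        t = n * dt
        # standard Newmark effective stiffness and effective load
        keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
        reff = (f(t)
                + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a))
        u_new = reff / keff
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        out.append(u)
    return np.array(out)

# Undamped oscillator u'' + u = 0, u(0) = 1: exact solution is cos(t)
traj = newmark(m=1.0, c=0.0, k=1.0, f=lambda t: 0.0, u0=1.0, v0=0.0, dt=0.01, steps=628)
```

For these parameters the scheme conserves the oscillator's amplitude, which is why large, stability-limited time steps become usable.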
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
Energy Technology Data Exchange (ETDEWEB)
Biros, George [Univ. of Texas, Austin, TX (United States)
2018-01-12
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and the parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for the construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is the construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a
Methods of scaling threshold color difference using printed samples
Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier
2012-01-01
A series of printed samples on a semi-gloss paper substrate, with color differences of threshold magnitude, was prepared for scaling the visual color difference and for evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the color differences computed by the different formulas were scaled against these Z-scores. The visual color differences were then obtained and checked with the STRESS factor. The results indicated that only the scales changed; the relative scales between pairs in the data were preserved.
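The probability-to-Z-score step can be sketched with the inverse normal CDF (probit); the 1%/99% clipping bounds are an assumption here to avoid infinite scores for unanimous judgements:

```python
from statistics import NormalDist

def z_scores(perceptibility):
    """Convert proportions of observers judging a pair 'perceptibly different'
    into Z-scores via the inverse normal CDF, the usual way visual
    color-difference judgements are put on an interval scale."""
    nd = NormalDist()
    return [nd.inv_cdf(min(max(p, 0.01), 0.99)) for p in perceptibility]

# 50% -> Z = 0; 84% / 16% -> roughly +1 / -1 standard deviations
zs = z_scores([0.50, 0.84, 0.16])
```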
A Qualitative Method to Estimate HSI Display Complexity
International Nuclear Information System (INIS)
Hugo, Jacques; Gertman, David
2013-01-01
There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increase. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation
A Qualitative Method to Estimate HSI Display Complexity
Energy Technology Data Exchange (ETDEWEB)
Hugo, Jacques; Gertman, David [Idaho National Laboratory, Idaho (United States)
2013-04-15
There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increase. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.
Complex dynamics of our economic life on different scales: insights from search engine query data.
Preis, Tobias; Reith, Daniel; Stanley, H Eugene
2010-12-28
Search engine query data deliver insight into the behaviour of individuals who are the smallest possible scale of our economic life. Individuals are submitting several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010 that are offered by the search engine Google for scientific use, providing information about our economic life on an aggregated collective level. We ask the question whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as a complex system of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with weekly search volume of corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series with which we find a clear tendency that search volume time series and transaction volume time series show recurring patterns.
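The weekly correlation analysis described can be sketched on synthetic data; the two series and their linear relationship below are invented for illustration, with the Pearson coefficient standing in for the paper's correlation measure:

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 300
# Synthetic weekly search volume for a company name (log-normal, always positive)
search = rng.lognormal(mean=0.0, sigma=0.3, size=weeks)
# Synthetic weekly transaction volume, constructed to co-move with search volume
trading = 2.0 * search + rng.normal(scale=0.2, size=weeks)

# Pearson correlation between the two weekly series
r = np.corrcoef(search, trading)[0, 1]
```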
Macro-scale complexity of nano- to micro-scale architecture of ...
Indian Academy of Sciences (India)
mobile, due to the lack of correlation between the silicon oxide layer and the final olivine particles, leading … (olivine) systems. … A branched forsterite crystal system (scale bar = …). … therefore, that no template mechanism is operating between …
A NDVI assisted remote sensing image adaptive scale segmentation method
Zhang, Hong; Shen, Jinxiang; Ma, Yanmei
2018-03-01
Multiscale segmentation can effectively delineate the boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from the remote sensing image. Many experiments have shown that the normalized difference vegetation index (NDVI) effectively expresses the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments each local area by using an NDVI similarity threshold to iteratively select segmentation scales. For regions consisting of different targets, different segmentation-scale boundaries can be created. The experimental results showed that the NDVI-based adaptive segmentation method can effectively create object boundaries for the different ground objects of remote sensing images.
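A minimal sketch of NDVI computation plus NDVI-similarity region growing, as a toy stand-in for the paper's adaptive scale selection; the bands, seed, and tolerance are invented:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def grow_segment(ndvi_map, seed, tol):
    """Toy region growing: collect 4-connected pixels whose NDVI is within
    `tol` of the seed pixel's NDVI -- a minimal stand-in for the paper's
    NDVI-similarity-driven scale selection."""
    h, w = ndvi_map.shape
    target = ndvi_map[seed]
    seen, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in seen or not (0 <= r < h and 0 <= c < w):
            continue
        if abs(ndvi_map[r, c] - target) > tol:
            continue
        seen.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return seen

# Tiny scene: a 2x2 vegetation patch next to a bare-soil column
nir = np.array([[0.8, 0.8, 0.1], [0.8, 0.8, 0.1]])
red = np.array([[0.1, 0.1, 0.8], [0.1, 0.1, 0.8]])
m = ndvi(nir, red)
veg = grow_segment(m, seed=(0, 0), tol=0.2)   # grows over the vegetation patch only
```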
Deposit and scale prevention methods in thermal sea water desalination
International Nuclear Information System (INIS)
Froehner, K.R.
1977-01-01
Introductory remarks deal with the 'fouling factor' and its influence on the overall heat transfer coefficient of msf evaporators. The composition of the matter dissolved in sea water and the thermal and chemical properties lead to formation of alkaline scale or even hard, sulphate scale on the heat exchanger tube walls and can hamper plant operation and economics seriously. Among the scale prevention methods are 1) pH control by acid dosing (decarbonation), 2) 'threshold treatment' by dosing of inhibitors of different kind, 3) mechanical cleaning by sponge rubber balls guided through the heat exchanger tubes, in general combined with methods no. 1 or 2, and 4) application of a scale crystals germ slurry (seeding). Mention is made of several other scale prevention proposals. The problems encountered with marine life (suspension, deposit, growth) in desalination plants are touched. (orig.) [de
Elements of a method to scale ignition reactor Tokamak
International Nuclear Information System (INIS)
Cotsaftis, M.
1984-08-01
Due to unavoidable uncertainties in present scaling laws when extrapolated to the thermonuclear regime, a method is proposed to minimize these uncertainties in order to determine the main parameters of an ignited tokamak. The method consists in searching for a domain, if any, in an adapted parameter space which allows ignition but is least sensitive to possible changes in the scaling laws. In other words, we seek the ignition domain that is the intersection of all possible ignition domains corresponding to all plausible scaling laws.
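The intersection-of-domains idea can be sketched numerically; the parameter grid, power-law exponents, and thresholds below are placeholders for illustration, not real tokamak scalings:

```python
import numpy as np

# Grid of candidate design parameters (illustrative: size-like a, field-like B)
a = np.linspace(1.0, 3.0, 50)[:, None]
B = np.linspace(3.0, 8.0, 50)[None, :]

# Hypothetical family of scaling laws: "ignition" when a**p * B**q exceeds C.
# The (p, q, C) triples are invented stand-ins for competing empirical laws.
laws = [(1.0, 1.5, 10.0), (1.2, 1.3, 11.0), (0.9, 1.6, 9.5)]

domains = [a**p * B**q >= C for p, q, C in laws]
# Robust ignition domain: intersection over all candidate scaling laws
robust = np.logical_and.reduce(domains)
```

Designs inside `robust` ignite under every candidate law, which is exactly the least-sensitive region the method looks for.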
Method of producing nano-scaled inorganic platelets
Zhamu, Aruna; Jang, Bor Z.
2012-11-13
The present invention provides a method of exfoliating a layered material (e.g., transition metal dichalcogenide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites.
Variational Multi-Scale method with spectral approximation of the sub-scales.
Dia, Ben Mansour; Chá con-Rebollo, Tomas
2015-01-01
A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base
Features of the method of large-scale paleolandscape reconstructions
Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina
2017-04-01
The method of paleolandscape reconstruction was tested in a key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main time periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure. On this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene. The boundaries of restored paleolakes were determined from the thickness and spatial confinement of decay ooze. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed. For the reconstructions of the original, indigenous flora we relied on data from palynological studies conducted in the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the
New complex variable meshless method for advection-diffusion problems
International Nuclear Information System (INIS)
Wang Jian-Fei; Cheng Yu-Min
2013-01-01
In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. The corresponding formulas of the ICVMM for advection-diffusion problems are then presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems and has good convergence, accuracy, and computational efficiency
Variational Multi-Scale method with spectral approximation of the sub-scales.
Dia, Ben Mansour
2015-01-07
A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.
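The truncated spectral expansion can be illustrated on the 1D Laplacian, whose sine eigenfunctions are orthogonal; this self-adjoint toy is a simplification of the paper's setting (which covers not necessarily self-adjoint operators):

```python
import numpy as np

def spectral_solve(f_vals, x, modes):
    """Solve -u'' = f on (0, pi) with u(0) = u(pi) = 0 by a truncated
    expansion in the Laplacian eigenfunctions sin(k x) (eigenvalues k^2).
    Truncation to a finite number of modes mirrors the feasible
    VMS-spectral construction."""
    dx = x[1] - x[0]
    u = np.zeros_like(x)
    for k in range(1, modes + 1):
        phi = np.sin(k * x)
        f_k = np.sum(f_vals * phi) * dx * 2 / np.pi   # L2 projection coefficient
        u += (f_k / k**2) * phi                        # divide by eigenvalue k^2
    return u

x = np.linspace(0, np.pi, 401)
f = np.sin(x)                      # exact solution of -u'' = sin(x) is sin(x)
u = spectral_solve(f, x, modes=8)  # 8 modes already capture it
```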
Application of Lattice Boltzmann Methods in Complex Mass Transfer Systems
Sun, Ning
Lattice Boltzmann Method (LBM) is a computational fluid dynamics method that can easily handle complex and dynamic boundaries, couple local or interfacial interactions/reactions, and be easily parallelized, allowing for simulation of large systems. While most current LBM studies focus on fluid dynamics, the inherent power of the method makes it an ideal candidate for studying mass transfer systems involving complex/dynamic microstructures and local reactions. In this thesis, LBM is introduced as an alternative computational method for the mesoscopic-scale study of electrochemical energy storage systems (Li-ion batteries (LIBs) and electric double layer capacitors (EDLCs)) and transdermal drug design. Based on traditional LBM, the following in-depth studies have been carried out: (1) For EDLCs, the simulation of diffuse charge dynamics is carried out for both the charge and the discharge processes on 2D systems of complex random electrode geometries (purely random, random spheres, and random fibers). The steric effect of concentrated solutions is considered by using modified Poisson-Nernst-Planck (MPNP) equations and compared with regular Poisson-Nernst-Planck (PNP) systems. The effects of electrode microstructure (electrode density, electrode filler morphology, filler size, etc.) on the net charge distribution and charge/discharge time are studied in detail. The influence of the applied potential during the discharging process is also discussed. (2) For the study of dendrite formation on the anode of LIBs, it is shown that the lattice Boltzmann model can capture all the experimentally observed features of microstructure evolution at the anode, from smooth to mossy to dendritic. The mechanism of the dendrite formation process on the mesoscopic scale is discussed in detail and compared with traditional Sand's time theories. It shows that dendrite formation is closely related to the inhomogeneous reactivity at the electrode-electrolyte interface
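A minimal D2Q9 BGK step (collision plus periodic streaming) illustrates the method generically; it is not the thesis's electrochemical model, and the grid size and relaxation time are arbitrary choices:

```python
import numpy as np

# D2Q9 lattice: rest, 4 axis directions, 4 diagonals, with standard weights
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    """One LBM update: BGK collision, then streaming on a periodic box."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau        # relax toward equilibrium
    for i, (cx, cy) in enumerate(c):                     # stream along each direction
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

n = 16
rho0 = 1.0 + 0.1 * np.random.default_rng(1).random((n, n))  # perturbed density
f = equilibrium(rho0, np.zeros((n, n)), np.zeros((n, n)))
mass0 = f.sum()
for _ in range(50):
    f = step(f)
```

Both collision and streaming conserve total mass exactly, which is one of the structural properties that makes LBM attractive for mass transfer problems.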
Dual-scale Galerkin methods for Darcy flow
Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex
2018-02-01
The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.
Ferroli, Paolo; Broggi, Morgan; Schiavolin, Silvia; Acerbi, Francesco; Bettamio, Valentina; Caldiroli, Dario; Cusin, Alberto; La Corte, Emanuele; Leonardi, Matilde; Raggi, Alberto; Schiariti, Marco; Visintini, Sergio; Franzini, Angelo; Broggi, Giovanni
2015-12-01
OBJECT The Milan Complexity Scale, a new practical grading scale designed to estimate the risk of neurological clinical worsening after performing surgery for tumor removal, is presented. METHODS A retrospective study was conducted on all elective consecutive surgical procedures for tumor resection between January 2012 and December 2014 at the Second Division of Neurosurgery at Fondazione IRCCS Istituto Neurologico Carlo Besta of Milan. A prospective database dedicated to reporting complications and all clinical and radiological data was retrospectively reviewed. The Karnofsky Performance Scale (KPS) was used to classify each patient's health status. Complications were divided into major and minor and recorded based on etiology and required treatment. A logistic regression model was used to identify possible predictors of clinical worsening after surgery in terms of changes between the preoperative and discharge KPS scores. Statistically significant predictors were rated based on their odds ratios in order to build an ad hoc complexity scale. For each patient, a corresponding total score was calculated, and ANOVA was performed to compare the mean total scores between the improved/unchanged and worsened patients. Relative risk (RR) and chi-square statistics were employed to provide the risk of worsening after surgery for each total score. RESULTS The case series was composed of 746 patients (53.2% female; mean age 51.3 ± 17.1). The most common tumors were meningiomas (28.6%) and glioblastomas (24.1%). The mortality rate was 0.94%, the major complication rate was 9.1%, and the minor complication rate was 32.6%. Of 746 patients, 523 (70.1%) improved or remained unchanged, and 223 (29.9%) worsened. The following factors were found to be statistically significant predictors of the change in KPS scores: tumor size larger than 4 cm, cranial nerve manipulation, major brain vessel manipulation, posterior fossa location, and eloquent area involvement
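A sketch of odds-ratio-weighted scoring in the spirit of the scale; the unit weights below are placeholders, not the published values derived from the study's odds ratios:

```python
# Binary risk factors from the abstract, each with a placeholder weight.
# In the actual scale, weights are rated from the logistic-regression odds ratios.
FACTORS = {
    "tumor_larger_than_4cm": 1,
    "cranial_nerve_manipulation": 1,
    "major_vessel_manipulation": 1,
    "posterior_fossa_location": 1,
    "eloquent_area_involvement": 1,
}

def complexity_score(patient):
    """Sum the weights of the risk factors present for a given patient."""
    return sum(w for name, w in FACTORS.items() if patient.get(name, False))

score = complexity_score({"tumor_larger_than_4cm": True,
                          "posterior_fossa_location": True})
```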
VLSI scaling methods and low power CMOS buffer circuit
International Nuclear Information System (INIS)
Sharma Vijay Kumar; Pattanaik Manisha
2013-01-01
Device scaling is an important part of very large scale integration (VLSI) design and underpins the success of the VLSI industry, resulting in denser and faster integration of devices. As the technology node moves into the very deep submicron region, leakage current and circuit reliability become the key issues. Both worsen with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance when scaling devices. In this paper, different scaling methods are studied first. These scaling methods are used to identify their effects on the power dissipation and propagation delay of the CMOS buffer circuit. To mitigate power dissipation in scaled devices, we propose a reliable leakage-reduction low power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are obtained with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in the figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability. (semiconductor integrated circuits)
Methods for forming complex oxidation reaction products including superconducting articles
International Nuclear Information System (INIS)
Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.
1992-01-01
This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product
Multi-scale modeling with cellular automata: The complex automata approach
Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.
2008-01-01
Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to
Jiao, Li-Guang; Ho, Yew Kam
2014-05-01
The screened Coulomb potential (SCP) has been extensively used in atomic physics, nuclear physics, quantum chemistry and plasma physics. However, an accurate calculation for atomic resonances under SCP is still a challenging task for various methods. Within the complex-scaling computational scheme, we have developed a method utilizing the modified Bessel functions to calculate doubly-excited resonances in two-electron atomic systems with configuration interaction-type basis. To test the validity of our method, we have calculated S- and P-wave resonance states of the helium atom with various screening strengths, and have found good agreement with earlier calculations using different methods. Our present method can be applied to calculate high-lying resonances associated with high excitation thresholds of the He+ ion, and with high-angular-momentum states. The derivation and calculation details of our present investigation together with new results of high-angular-momentum states will be presented at the meeting. Supported by NSC of Taiwan.
Flow and Transport in Complex Microporous Carbonates as a Consequence of Separation of Scales
Bijeljic, B.; Raeini, A. Q.; Lin, Q.; Blunt, M. J.
2017-12-01
Some of the most important examples of flow and transport in complex pore structures are found in subsurface applications such as contaminant hydrology, carbon storage and enhanced oil recovery. Carbonate rock structures contain most of the world's oil reserves and a considerable amount of water reserves, and potentially hold storage capacity for carbon dioxide. However, this type of pore space is difficult to represent because of the complexities associated with a wide range of pore sizes and variations in connectivity, which poses a considerable challenge for quantitative predictions of transport across multiple scales. A new concept unifying X-ray tomography experiments and direct numerical simulation has been developed that relies on a full description of flow and solute transport at the pore scale. The differential imaging method (Lin et al. 2016) provides rich information on the microporous space, while advective and diffusive mass transport are simulated on micro-CT images of the pore space: Navier-Stokes equations are solved for flow in the image voxels comprising the pore space, streamline-based simulation is used to account for advection, and diffusion is superimposed by random walk. Quantitative validation has been done against analytical solutions for diffusion and by comparing the model predictions with experimental NMR measurements in a dual-porosity beadpack. Furthermore, we discriminate signatures of multi-scale transport behaviour for a range of carbonate rocks (Figure 1), dependent on the heterogeneity of the inter- and intra-grain pore space, heterogeneity in the flow field, and the mass transfer characteristics of the porous media. Finally, we demonstrate the predictive capabilities of the model through an analysis that includes a number of probability density function (PDF) measures of non-Fickian transport on the micro-CT images. In complex porous media a separation of scales exists, leading to flow and transport signatures that need to be described by
Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.
Tomas, Jose M.; Oliver, Amparo
1999-01-01
Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)
A multi-scale method of mapping urban influence
Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters
2009-01-01
Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...
Complexity analysis of accelerated MCMC methods for Bayesian inversion
International Nuclear Information System (INIS)
Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M
2013-01-01
The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
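The "plain" MCMC baseline whose complexity the paper bounds can be illustrated on a toy inverse problem, with a one-parameter forward map standing in for the PDE solve; the model, prior, and proposal scale are all invented for illustration:

```python
import math
import random

# Toy Bayesian inversion: infer a scalar log-coefficient m from one datum of
# the "forward map" G(m) = exp(m), a cheap stand-in for an elliptic PDE solve.
random.seed(2)
m_true, sigma = 0.5, 0.1
data = math.exp(m_true)               # noiseless datum, for simplicity

def log_post(m):
    """Log posterior: Gaussian likelihood around G(m) plus a N(0, 1) prior."""
    misfit = (math.exp(m) - data) / sigma
    return -0.5 * misfit**2 - 0.5 * m**2

# Random-walk Metropolis with a fixed proposal step and 5000-sample burn-in
m, samples = 0.0, []
lp = log_post(m)
for it in range(20000):
    prop = m + 0.3 * random.gauss(0, 1)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:   # accept/reject
        m, lp = prop, lp_prop
    if it >= 5000:
        samples.append(m)

post_mean = sum(samples) / len(samples)            # posterior expectation of m
```

Each iteration costs one forward-map evaluation, which is exactly why, with an expensive PDE in place of `exp`, surrogate and multi-level strategies pay off.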
International Nuclear Information System (INIS)
Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; Nørskov, Jens K.
2017-01-01
Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by training a Gaussian process on adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
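The surrogate-driven selection loop can be illustrated with a minimal Gaussian-process sketch; the one-dimensional "fingerprint" coordinate, kernel, and data below are invented stand-ins for the paper's group-additivity fingerprints and DFT adsorption energies:

```python
import numpy as np

def rbf(a, b, length=1.0):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard GP regression equations: posterior mean and std deviation.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    Kss = rbf(x_query, x_query)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy "adsorption energies" already computed at three fingerprints.
x_known = np.array([0.0, 1.0, 2.5])
y_known = np.sin(x_known)
x_cand = np.linspace(0.0, 3.0, 31)  # candidate reaction steps

mean, std = gp_posterior(x_known, y_known, x_cand)
next_x = x_cand[np.argmax(std)]  # most uncertain step: compute explicitly next
```

The candidate with the largest posterior uncertainty is the one handed to the expensive electronic-structure calculation, after which the surrogate is refit — the on-the-fly training loop the abstract describes.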
Molecular photoionization using the complex Kohn variational method
International Nuclear Information System (INIS)
Lynch, D.L.; Schneider, B.I.
1992-01-01
We have applied the complex Kohn variational method to the study of molecular-photoionization processes. This requires electron-ion scattering calculations enforcing incoming boundary conditions. The sensitivity of these results to the choice of the cutoff function in the Kohn method has been studied and we have demonstrated that a simple matching of the irregular function to a linear combination of regular functions produces accurate scattering phase shifts.
Measurement of complex permittivity of composite materials using waveguide method
Tereshchenko, O.V.; Buesink, Frederik Johannes Karel; Leferink, Frank Bernardus Johannes
2011-01-01
Complex dielectric permittivity of 4 different composite materials has been measured using the transmission line method. A waveguide fixture in L, S, C and X band was used for the measurements. Measurement accuracy is influenced by air gaps between test fixtures and the materials tested. One of the
SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data
International Nuclear Information System (INIS)
Williams, Mark L.; Rearden, Bradley T.
2008-01-01
Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
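The uncertainty-propagation step that S/U codes of this kind perform is the "sandwich rule": the relative variance of a response k is S C Sᵀ, where S holds the relative sensitivity coefficients and C is the relative covariance matrix of the nuclear data. A sketch with illustrative numbers (not SCALE output or ENDF/B covariances):

```python
import numpy as np

# Relative sensitivity coefficients S_i = (dk/k)/(dsigma_i/sigma_i)
# for a response k with respect to three nuclear data parameters,
# and a relative covariance matrix C for those parameters.
S = np.array([0.45, -0.20, 0.08])
C = np.array([[0.0025, 0.0005, 0.0],
              [0.0005, 0.0016, 0.0],
              [0.0,    0.0,    0.0100]])

rel_var = S @ C @ S          # sandwich rule: relative variance of k
rel_std = np.sqrt(rel_var)   # relative standard deviation (~2.3% here)
```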
Application of discrete scale invariance method on pipe rupture
International Nuclear Information System (INIS)
Rajkovic, M.; Mihailovic, Z.; Riznic, J.
2007-01-01
A process of material failure of a mechanical system in the form of cracks and microcracks, a catastrophic phenomenon of considerable technological and scientific importance, may be forecast according to recent advances in the theory of critical phenomena in statistical physics. The critical rupture scenario states that, in many concrete and composite heterogeneous materials under compression and materials with large distributed residual stresses, rupture is a genuine critical point, i.e., the culmination of a self-organization of damage and cracking characterized by power law signatures. The concept of discrete scale invariance leads to a complex critical exponent (or dimension) and may occur spontaneously in systems and materials developing rupture. It establishes, theoretically, the power law dependence of a measurable observable, such as the rate of acoustic emissions radiated during loading or rate of heat released during the process, upon the time to failure. However, the problem is that the power law can be distinguished from other parametric functional forms, such as an exponential, only close to the critical time. In this paper we modify the functional renormalization method to include the noise elimination procedure and dimension reduction. The aim is to obtain the prediction of the critical rupture time only from the knowledge of the power law parameters at early times prior to rupture, and based on the assumption that the dynamics close to rupture is governed by the power law dependence of the temperature measured along the perimeter of the tube upon the time-to-failure. Such an analysis would not only enhance the precision of prediction related to the rupture mechanism but also significantly help in determining and predicting the leak rates. The prediction will be compared to experimental data on tubes made of Zr-2.5%Nb. Note: The views expressed in the paper are those of the authors and do not necessarily represent those of the commission. (author)
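The power-law dependence upon time-to-failure can be fitted by linear regression in log-log coordinates. The sketch below uses synthetic, noise-free data with an assumed critical time; the paper's contribution is precisely the noise elimination and early-time prediction that this naive fit lacks:

```python
import numpy as np

# Synthetic observable obeying rate(t) = A * (tc - t)**(-m)
# with assumed critical time tc and illustrative A, m.
tc = 100.0
t = np.linspace(50.0, 95.0, 40)
rate = 3.0 * (tc - t) ** (-0.7)

# Linearize: log(rate) = log(A) - m * log(tc - t), then least squares.
slope, intercept = np.polyfit(np.log(tc - t), np.log(rate), 1)
m_hat, A_hat = -slope, np.exp(intercept)
```

In practice tc is unknown and must itself be estimated (e.g. by scanning tc and maximizing fit quality), which is where the power law becomes hard to distinguish from an exponential far from the critical time.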
Computational RNA secondary structure design: empirical complexity and improved methods
Directory of Open Access Journals (Sweden)
Condon Anne
2007-01-01
Background: We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to understand better the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations.
Results: To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and the location of the primary structure constraints when designing structures and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure.
Conclusion: Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.
Analytical Method to Estimate the Complex Permittivity of Oil Samples
Directory of Open Access Journals (Sweden)
Lijuan Su
2018-03-01
In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by such LUT, and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated from the measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.
Ethnographic methods for process evaluations of complex health behaviour interventions.
Morgan-Trimmer, Sarah; Wood, Fiona
2016-05-04
This article outlines the contribution that ethnography could make to process evaluations for trials of complex health-behaviour interventions. Process evaluations are increasingly used to examine how health-behaviour interventions operate to produce outcomes and often employ qualitative methods to do this. Ethnography shares commonalities with the qualitative methods currently used in health-behaviour evaluations but has a distinctive approach over and above these methods. It is an overlooked methodology in trials of complex health-behaviour interventions that has much to contribute to the understanding of how interventions work. These benefits are discussed here with respect to three strengths of ethnographic methodology: (1) producing valid data, (2) understanding data within social contexts, and (3) building theory productively. The limitations of ethnography within the context of process evaluations are also discussed.
Unplanned Complex Suicide-A Consideration of Multiple Methods.
Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish
2018-05-01
Detailed death investigations are mandatory to find out the exact cause and manner in non-natural deaths. In this reference, use of multiple methods in suicide poses a challenge for the investigators especially when the choice of methods to cause death is unplanned. There is an increased likelihood that doubts of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported where the victim resorted to multiple methods to end his life, and what appeared to be an unplanned variant based on the death scene investigations. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.
Kernel methods for large-scale genomic data analysis
Xing, Eric P.; Schaid, Daniel J.
2015-01-01
Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today’s explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, to help reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role kernel methods will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritizing, prediction and data fusion. PMID:25053743
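As one concrete instance of a kernel method in this setting, kernel ridge regression with a linear (genomic-relationship-style) kernel can be sketched on simulated genotypes; the genotype matrix, effect sizes, and regularization below are toy assumptions, not data from the review:

```python
import numpy as np

rng = np.random.default_rng(7)

def kernel_ridge(K_train, y, K_test, lam=10.0):
    # Kernel ridge regression: alpha = (K + lam*I)^{-1} y,
    # predictions = K_test @ alpha. The kernel encodes genetic similarity.
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return K_test @ alpha

# Toy genotypes (0/1/2 minor-allele counts) and a sparse linear phenotype.
G = rng.integers(0, 3, size=(120, 20)).astype(float)
beta = np.zeros(20)
beta[:5] = 0.5  # five causal variants
y = G @ beta + rng.normal(scale=0.5, size=120)

# Linear kernel G @ G.T is the classic genomic-relationship choice;
# an RBF over genotype vectors would capture non-additive effects.
tr, te = np.arange(100), np.arange(100, 120)
pred = kernel_ridge(G[tr] @ G[tr].T, y[tr], G[te] @ G[tr].T)
```

Swapping the kernel (linear, RBF, pathway-weighted) changes the notion of similarity without changing the fitting code — the modularity that makes kernel methods attractive for data fusion.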
Comparison of association mapping methods in a complex pedigreed population
DEFF Research Database (Denmark)
Sahana, Goutam; Guldbrandtsen, Bernt; Janss, Luc
2010-01-01
to collect SNP signals in intervals, to avoid the scattering of a QTL signal over multiple neighboring SNPs. Methods not accounting for genetic background (full pedigree information) performed worse, and methods using haplotypes were considerably worse with a high false-positive rate, probably due to the presence of low-frequency haplotypes. It was necessary to account for full relationships among individuals to avoid excess false discovery. Although the methods were tested on a cattle pedigree, the results are applicable to any population with a complex pedigree structure...
Test equating, scaling, and linking methods and practices
Kolen, Michael J
2014-01-01
This book provides an introduction to test equating, scaling, and linking, including those concepts and practical issues that are critical for developers and all other testing professionals. In addition to statistical procedures, successful equating, scaling, and linking involves many aspects of testing, including procedures to develop tests, to administer and score tests, and to interpret scores earned on tests. Test equating methods are used with many standardized tests in education and psychology to ensure that scores from multiple test forms can be used interchangeably. Test scaling is the process of developing score scales that are used when scores on standardized tests are reported. In test linking, scores from two or more tests are related to one another. Linking has received much recent attention, due largely to investigations of linking similarly named tests from different test publishers or tests constructed for different purposes. In recent years, researchers from the education, psychology, and...
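Linear equating, one of the simplest equating methods covered by such treatments, maps Form X scores onto Form Y's scale so that the equated scores match Y's mean and standard deviation. A sketch with made-up score vectors (real equating designs control for group differences, which this toy example ignores):

```python
import numpy as np

# Hypothetical observed scores on two test forms.
x_scores = np.array([31, 35, 42, 40, 28, 37, 44, 33], dtype=float)
y_scores = np.array([34, 39, 45, 41, 30, 40, 47, 36], dtype=float)

def linear_equate(x, x_ref, y_ref):
    # Affine map matching first two moments:
    # eq(x) = mu_y + (sigma_y / sigma_x) * (x - mu_x)
    slope = y_ref.std(ddof=1) / x_ref.std(ddof=1)
    return y_ref.mean() + slope * (x - x_ref.mean())

eq = linear_equate(x_scores, x_scores, y_scores)
```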
Scale factor measure method without turntable for angular rate gyroscope
Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua
2018-03-01
In this paper, a scale factor test method without a turntable is originally designed for the angular rate gyroscope. A test system consisting of a test device, a data acquisition circuit and data processing software based on the Labview platform is designed. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the measured gyroscope. By shaking the test device around its edge parallel to the input axis of the gyroscope, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast, and it makes the test system miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times with this method, the spread is less than 0.2%. Compared with testing on a turntable, the scale factor difference is less than 1%. The accuracy and repeatability of the test system are good.
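The ratio principle of the method — both gyroscopes sense the same angular rate during the shake, so the unknown scale factor follows from the output ratio — can be sketched as follows (all numbers illustrative):

```python
# Ratio method: standard and measured gyros see the same angular rate, so
# SF_measured = SF_standard * (v_measured / v_standard) at each instant.
SF_standard = 0.50                  # known scale factor, mV/(deg/s)
v_standard = [12.1, 24.3, 36.2]     # standard gyro outputs, mV
v_measured = [18.0, 36.5, 54.1]     # measured gyro outputs, mV

# Each shared-rate sample gives one estimate; average for the final value.
estimates = [SF_standard * vm / vs for vm, vs in zip(v_measured, v_standard)]
SF_measured = sum(estimates) / len(estimates)
```

No absolute rate reference (turntable) is needed: the unknown rate cancels in the ratio, which is why the device can simply be shaken by hand.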
Atwood, Erin; Brady, Nancy C; Esplund, Amy
There is a great need in the United States to develop presymbolic evaluation tools that are widely available and accurate for individuals who come from a bilingual and/or multicultural setting. The Communication Complexity Scale (CCS) is a measure that evaluates expressive presymbolic communication including gestures, vocalizations and eye gaze. A study of the effectiveness of this tool in a Spanish-speaking environment was undertaken to determine the applicability of the CCS with Spanish-speaking children. Methods & Procedures: In 2011-2012, researchers from the University of Kansas and Centro Ann Sullivan del Perú (CASP) investigated communication in a cohort of 71 young Spanish-speaking children with developmental disabilities and a documented history of self-injurious, stereotyped and aggressive behaviors. Communication was assessed first by parental report with translated versions of the Communication and Symbolic Behavior Scales (CSBS), a well-known assessment of early communication, and then eleven months later with the CCS. We hypothesized that the CCS and the CSBS measures would be significantly correlated in this population of Spanish-speaking children. The CSBS scores from time 1 with a mean participant age of 41 months were determined to have a strong positive relationship to the CCS scores obtained at time 2 with a mean participant age of 52 months. The CCS is strongly correlated with a widely accepted measure of early communication. These findings support the validity of the Spanish version of the CCS and demonstrate its usefulness for children from another culture and for children in a Spanish-speaking environment.
Han, Zhiwu; Li, Bo; Mu, Zhengzhi; Yang, Meng; Niu, Shichao; Zhang, Junqiu; Ren, Luquan
2015-11-01
The polydimethylsiloxane (PDMS) positive replica templated twice from the excellent light trapping surface of butterfly Trogonoptera brookiana wing scales was fabricated by a simple and promising route. The exact SiO2 negative replica was fabricated by using a synthesis method combining a sol-gel process and subsequent selective etching. Afterwards, a vacuum-aided process was introduced to make PDMS gel fill into the SiO2 negative replica, and the PDMS gel was solidified in an oven. Then, the SiO2 negative replica was used as secondary template and the structures on its surface were transcribed onto the surface of PDMS. Finally, the PDMS positive replica was obtained. After comparing the PDMS positive replica and the original bio-template in terms of morphology, dimensions and reflectance spectra and so on, it is evident that the excellent light trapping structures of butterfly wing scales were inherited by the PDMS positive replica faithfully. This bio-inspired route could facilitate the preparation of complex light trapping nanostructure surfaces without any assistance from other power-wasting and expensive nanofabrication technologies.
Complex finite element sensitivity method for creep analysis
International Nuclear Information System (INIS)
Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry
2015-01-01
The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties.
Highlights:
• A novel finite element sensitivity method (ZFEM) for creep was introduced.
• ZFEM has the capability to calculate accurate partial derivatives.
• ZFEM can be used for identification of the skeletal point of creep structures.
• ZFEM can be easily implemented in commercial software, e.g. Abaqus.
• ZFEM results were shown to be in excellent agreement with analytical solutions.
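The numerical kernel behind complex-variable sensitivity methods such as ZFEM is the complex-step derivative, f'(x) = Im f(x + ih)/h, which avoids the subtractive cancellation of finite differences. A sketch on a scalar power-law (Norton) creep response, with illustrative constants:

```python
def creep_strain_rate(stress, A=1e-12, n=5):
    # Power-law (Norton) creep: strain rate = A * sigma**n
    return A * stress ** n

def complex_step_derivative(f, x, h=1e-20):
    # Complex-step trick: perturb the input along the imaginary axis.
    # The imaginary part of the output carries the derivative exactly
    # to machine precision -- no subtraction, so h can be tiny.
    return f(complex(x, h)).imag / h

sigma = 100.0
d_analytic = 5 * 1e-12 * sigma ** 4  # d/dsigma of A * sigma**5
d_cstep = complex_step_derivative(creep_strain_rate, sigma)
```

ZFEM applies the same idea at the element level, which is why one complex-valued run yields the response and its sensitivity simultaneously, where the standard procedure needs multiple perturbed runs.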
Evaluating the response of complex systems to environmental threats: the Σ II method
International Nuclear Information System (INIS)
Corynen, G.C.
1983-05-01
The Σ II method was developed to model and compute the probabilistic performance of systems that operate in a threatening environment. Although we emphasize the vulnerability of complex systems to earthquakes and to electromagnetic threats such as EMP (electromagnetic pulse), the method applies in general to most large-scale systems or networks that are embedded in a potentially harmful environment. Other methods exist for obtaining system vulnerability, but their complexity increases exponentially as the size of systems is increased. The complexity of the Σ II method is polynomial, and accurate solutions are now possible for problems for which current methods require the use of rough statistical bounds, confidence statements, and other approximations. For super-large problems, where the costs of precise answers may be prohibitive, a desired accuracy can be specified, and the Σ II algorithms will halt when that accuracy has been reached. We summarize the results of a theoretical complexity analysis - which is reported elsewhere - and validate the theory with computer experiments conducted both on worst-case academic problems and on more reasonable problems occurring in practice. Finally, we compare our method with the exact methods of Abraham and Nakazawa, and with current bounding methods, and we demonstrate the computational efficiency and accuracy of Σ II
Multiple time-scale methods in particle simulations of plasmas
International Nuclear Information System (INIS)
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling
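The subcycling idea — advancing fast degrees of freedom with several small substeps per large timestep — can be sketched on a single harmonic oscillator standing in for a fast plasma oscillation (not a plasma model; parameters illustrative):

```python
def subcycled_oscillator(omega, dt_slow, nsub, nsteps):
    # Velocity-Verlet on x'' = -omega**2 * x, taking nsub substeps of
    # dt_slow/nsub per slow step: the fast oscillation stays resolved
    # while slow physics (e.g. a field solve) could run on dt_slow.
    dt = dt_slow / nsub
    x, v = 1.0, 0.0
    for _ in range(nsteps * nsub):
        v -= 0.5 * dt * omega**2 * x
        x += dt * v
        v -= 0.5 * dt * omega**2 * x
    return x, v

# omega*dt_slow = 1.0 would be marginal; 20 substeps keep omega*dt = 0.05.
x, v = subcycled_oscillator(omega=10.0, dt_slow=0.1, nsub=20, nsteps=10)
# exact solution at t = 1.0: x = cos(10) ~= -0.8391, energy = omega**2/2
```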
鄭, 艶花; Zheng, Yanhua
2004-01-01
The purpose of this study is to analyze and clarify the independence consciousness of female university students in China using psychological research methods. In the course of the study, a questionnaire survey was conducted with eighty-three Chinese female university students using scales for the Cinderella complex and social role attitudes. Firstly, the results indicate positive correlations between the independent variable of the "defend-family-traditionalism factor" and three fa...
The Language Teaching Methods Scale: Reliability and Validity Studies
Okmen, Burcu; Kilic, Abdurrahman
2016-01-01
The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…
A comparison of multidimensional scaling methods for perceptual mapping
Bijmolt, T.H.A.; Wedel, M.
Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare
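Classical (Torgerson) scaling, the simplest member of the family of multidimensional scaling methods compared in such studies, recovers a perceptual map from a dissimilarity matrix by double centering and an eigendecomposition. A sketch with four hypothetical "brands" whose pairwise distances are fed in:

```python
import numpy as np

def classical_mds(D, dim=2):
    # Torgerson's classical scaling: double-center the squared
    # dissimilarities, then embed with the top eigenvectors.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

# Four hypothetical brands at known 2-D perceptual coordinates;
# recover a map from their pairwise distances alone.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

For exact Euclidean input the map is recovered up to rotation and reflection; maximum-likelihood MDS methods of the kind the article compares instead model the dissimilarity judgments as noisy observations.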
Correlates of the Rosenberg Self-Esteem Scale Method Effects
Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan
2006-01-01
Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power...
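The general framework described — simulate many samples from the assumed mediational model, test each, and count rejections — can be sketched for the simple X → M → Y model with a Sobel z-test; the effect sizes, sample size, and test choice below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def slope_and_se(x, y):
    # OLS slope of y on x (with intercept) and its standard error.
    xc = x - x.mean()
    slope = (xc @ y) / (xc @ xc)
    resid = y - y.mean() - slope * xc
    se = np.sqrt(resid @ resid / (len(x) - 2) / (xc @ xc))
    return slope, se

def mediation_power(n, a=0.3, b=0.3, reps=2000, z_crit=1.96):
    # Monte Carlo power for the indirect effect a*b in X -> M -> Y.
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        a_hat, se_a = slope_and_se(x, m)
        b_hat, se_b = slope_and_se(m, y)
        # Sobel z-statistic for the product of coefficients.
        z = (a_hat * b_hat) / np.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)
        hits += abs(z) > z_crit
    return hits / reps

power = mediation_power(n=100)
```

The same loop generalizes to latent variable or growth-curve mediation by swapping the data-generating step and the fitted model; the power estimate is simply the rejection rate across replications.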
Learning with Generalization Capability by Kernel Methods of Bounded Complexity
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra; Sanguineti, M.
2005-01-01
Roč. 21, č. 3 (2005), s. 350-367 ISSN 0885-064X R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : supervised learning * generalization * model complexity * kernel methods * minimization of regularized empirical errors * upper bounds on rates of approximate optimization Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2005
Comparison of topotactic fluorination methods for complex oxide films
Moon, E. J.; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; Barbash, D.; May, S. J.
2015-06-01
We have investigated the synthesis of SrFeO3-αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.
Comparison of topotactic fluorination methods for complex oxide films
Energy Technology Data Exchange (ETDEWEB)
Moon, E. J., E-mail: em582@drexel.edu; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; May, S. J., E-mail: smay@coe.drexel.edu [Department of Materials Science and Engineering, Drexel University, Philadelphia, Pennsylvania 19104 (United States); Barbash, D. [Centralized Research Facilities, Drexel University, Philadelphia, Pennsylvania 19104 (United States)
2015-06-01
We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici
Comparison of topotactic fluorination methods for complex oxide films
Directory of Open Access Journals (Sweden)
E. J. Moon
2015-06-01
We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.
Method for analysis the complex grounding cables system
International Nuclear Information System (INIS)
Ackovski, R.; Acevski, N.
2002-01-01
A new iterative method for the analysis of the performances of complex grounding systems (GS) in underground cable power networks with coated and/or uncoated metal sheathed cables is proposed in this paper. The analyzed grounding system consists of the grounding grid of a high voltage (HV) supplying transformer station (TS), middle voltage/low voltage (MV/LV) consumer TSs and an arbitrary number of power cables connecting them. The derived method takes into consideration the voltage drops in the cable sheaths and the mutual influence among all earthing electrodes, due to the resistive coupling through the soil. By means of the presented method it is possible to calculate the main grounding system performances, such as earth electrode potentials under short circuit fault to ground conditions, earth fault current distribution in the whole complex grounding system, step and touch voltages in the vicinity of the earthing electrodes dissipating the fault current in the earth, impedances (resistances) to ground of all possible fault locations, apparent shield impedances to ground of all power cables, etc. The proposed method is based on the admittance summation method [1] and is appropriately extended, so that it takes into account resistive coupling between the elements that constitute the GS. (Author)
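The nodal-admittance view underlying such grounding-system calculations can be sketched on a tiny three-electrode network; the topology and admittance values below are invented for illustration and are not from the paper:

```python
import numpy as np

# Three electrodes (nodes) tied together by cable-sheath admittances,
# each with its own admittance to remote earth; an earth fault injects
# current at node 0. All values are illustrative.
y_earth = np.array([0.5, 0.2, 0.1])        # electrode-to-earth, siemens
y_sheath = {(0, 1): 2.0, (1, 2): 1.5}      # sheath links, siemens

n = 3
Y = np.diag(y_earth)                        # nodal admittance matrix
for (i, j), y in y_sheath.items():
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y

I = np.array([1000.0, 0.0, 0.0])            # 1 kA earth fault at node 0
V = np.linalg.solve(Y, I)                    # node potential rises
I_earth = y_earth * V                        # current dissipated per electrode
```

Kirchhoff's current law guarantees the per-electrode earth currents sum to the injected fault current; the paper's admittance summation method exploits the network structure to avoid assembling and solving the full system directly.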
Schmengler, A. C.; Vlek, P. L. G.
2012-04-01
Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP)-model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary from 5 to 50 t ha-1 yr-1 depending on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be either due to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha-1 yr-1 by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha-1 yr-1 are significantly lower than results obtained at hillslope scale confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The
A dissipative particle dynamics method for arbitrarily complex geometries
Li, Zhen; Bian, Xin; Tang, Yu-Hang; Karniadakis, George Em
2018-02-01
Dissipative particle dynamics (DPD) is an effective Lagrangian method for modeling complex fluids in the mesoscale regime, but so far it has been limited to relatively simple geometries. Here, we formulate a local detection method for DPD involving arbitrarily shaped three-dimensional geometric domains. By introducing an indicator variable of boundary volume fraction (BVF) for each fluid particle, the boundary of arbitrary-shape objects is detected on-the-fly for the moving fluid particles using only the local particle configuration. Therefore, this approach eliminates the need for an analytical description of the boundary and geometry of objects in DPD simulations and makes it possible to load the geometry of a system directly from experimental images or computer-aided designs/drawings. More specifically, the BVF of a fluid particle is defined by the weighted summation over its neighboring particles within a cutoff distance. Wall penetration is inferred from the value of the BVF and prevented by a predictor-corrector algorithm. The no-slip boundary condition is achieved by employing effective dissipative coefficients for liquid-solid interactions. Quantitative evaluations of the new method are performed for the plane Poiseuille flow, the plane Couette flow and the Wannier flow in a cylindrical domain, and compared with their corresponding analytical solutions and (high-order) spectral element solutions of the Navier-Stokes equations. We verify that the proposed method yields correct no-slip boundary conditions for velocity and generates negligible fluctuations of density and temperature in the vicinity of the wall surface. Moreover, we construct a very complex 3D geometry - the "Brown Pacman" microfluidic device - to explicitly demonstrate how to construct a DPD system with complex geometry directly from loading a graphical image. Subsequently, we simulate the flow of a surfactant solution through this complex microfluidic device using the new method.
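The boundary-volume-fraction idea, a weighted summation over neighboring particles within a cutoff, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the weight kernel, the normalization by total neighbor weight, and the cutoff are all assumptions.

```python
import numpy as np

def boundary_volume_fraction(i, pos, is_solid, rc):
    """Toy BVF indicator for particle i: the weighted fraction of *solid*
    neighbours within cutoff rc. Kernel and normalisation are assumptions,
    not the paper's exact definitions. Yields ~0 in bulk fluid, ~0.5 at the
    wall surface and ~1 deep inside the solid."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    mask = (d > 0.0) & (d < rc)
    w = (1.0 - d[mask] / rc) ** 2          # assumed quadratic weight kernel
    total = w.sum()
    if total == 0.0:
        return 0.0
    return w[is_solid[mask]].sum() / total
```

A fluid particle whose BVF rises above a threshold (say 0.5) would be flagged as penetrating the wall, which is where a predictor-corrector step could push it back into the fluid domain.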
Directed forgetting of complex pictures in an item method paradigm.
Hauswald, Anne; Kissler, Johanna
2008-11-01
An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task where all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect--that is, better recognition of "to-be-remembered" than of "to-be-forgotten" pictures--was observed, although its size was smaller than previously reported for words or line drawings. The magnitude of the directed forgetting effect correlated negatively with participants' depression and dissociation scores. The results indicate that, at least in an item method, directed forgetting occurs for complex pictures as well as words and simple line drawings. Furthermore, people with higher levels of dissociative or depressive symptoms exhibit altered memory encoding patterns.
Equivalence of the generalized and complex Kohn variational methods
Energy Technology Data Exchange (ETDEWEB)
Cooper, J N; Armour, E A G [School of Mathematical Sciences, University Park, Nottingham NG7 2RD (United Kingdom); Plummer, M, E-mail: pmxjnc@googlemail.co [STFC Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)
2010-04-30
For Kohn variational calculations on low energy (e⁺ - H₂) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.
Global Stability of Complex-Valued Genetic Regulatory Networks with Delays on Time Scales
Directory of Open Access Journals (Sweden)
Wang Yajing
2016-01-01
Full Text Available In this paper, the global exponential stability of complex-valued genetic regulatory networks with delays is investigated. Besides presenting conditions guaranteeing the existence of a unique equilibrium pattern, its global exponential stability is discussed. Some numerical examples are given for different time scales to illustrate the results.
Software quality assurance: in large scale and complex software-intensive systems
Mistrik, I.; Soley, R.; Ali, N.; Grundy, J.; Tekinerdogan, B.
2015-01-01
Software Quality Assurance in Large Scale and Complex Software-intensive Systems presents novel, high-quality research approaches that relate the quality of software architecture to system requirements, system architecture, enterprise architecture, and software testing.
Maxwell iteration for the lattice Boltzmann method with diffusive scaling
Zhao, Weifeng; Yong, Wen-An
2017-03-01
In this work, we present an alternative derivation of the Navier-Stokes equations from Bhatnagar-Gross-Krook models of the lattice Boltzmann method with diffusive scaling. This derivation is based on the Maxwell iteration and can expose certain important features of the lattice Boltzmann solutions. Moreover, it will be seen to be much more straightforward and logically clearer than the existing approaches including the Chapman-Enskog expansion.
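Derivations such as the Maxwell iteration start from the discrete BGK equilibrium distribution; as a minimal sanity sketch (standard D2Q9 lattice, not code from the paper), one can verify numerically that the equilibrium's zeroth and first moments reproduce density and momentum exactly:

```python
import numpy as np

# Standard D2Q9 lattice: discrete velocities and weights (lattice units, c_s^2 = 1/3)
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def feq(rho, u):
    """Second-order BGK equilibrium distribution for density rho and velocity u."""
    cu = C @ u
    return W * rho * (1.0 + 3.0 * cu + 4.5 * cu**2 - 1.5 * (u @ u))

# Example fields: the moments below recover them exactly by lattice symmetry
rho, u = 1.2, np.array([0.05, -0.02])
f = feq(rho, u)
```

By the lattice symmetries (unit weight sum, isotropic second moment), the density sum and momentum sum of `feq` match `rho` and `rho*u` to machine precision, which is the starting point any asymptotic derivation, Chapman-Enskog or Maxwell iteration, builds on.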
Variable scaling method and Stark effect in hydrogen atom
International Nuclear Information System (INIS)
Choudhury, R.K.R.; Ghosh, B.
1983-09-01
By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique after Armstrong, and then the variable scaling method has been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)
Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG
Directory of Open Access Journals (Sweden)
Isabella Palamara
2012-07-01
Full Text Available An original multivariate multi-scale methodology for assessing the complexity of physiological signals is proposed. The technique is able to incorporate the simultaneous analysis of multi-channel data as a unique block within a multi-scale framework. The basic complexity measure is computed using Permutation Entropy, a methodology for time series processing based on ordinal analysis. Permutation Entropy is conceptually simple, structurally robust to noise and artifacts, and computationally very fast, which is relevant for designing portable diagnostics. Since time series derived from biological systems show structures on multiple spatial-temporal scales, the proposed technique can be useful for other types of biomedical signal analysis. In this work, the possibility of distinguishing the brain states of Alzheimer’s disease patients and subjects with Mild Cognitive Impairment from those of normal healthy elderly subjects is checked on a real, although quite limited, experimental database.
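The building block of the method above is ordinary permutation entropy; the paper's contribution is its multivariate multi-scale extension. A minimal single-channel, single-scale sketch (delay and embedding defaults are illustrative, not the paper's settings):

```python
import math
from collections import Counter

def permutation_entropy(x, m=3, tau=1):
    """Normalised permutation entropy of a 1-D series: Shannon entropy of
    ordinal patterns of length m (delay tau), divided by log(m!).
    Single-channel and single-scale only -- the building block, not the
    full multivariate multi-scale method of the paper."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k * tau]))
        for i in range(len(x) - (m - 1) * tau)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(m))
```

A perfectly monotonic series produces a single ordinal pattern and therefore zero entropy, while an irregular series approaches 1; the multi-scale variant applies this measure to progressively coarse-grained (here, MEMD-aligned) versions of the signal.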
The linearly scaling 3D fragment method for large scale electronic structure calculations
Energy Technology Data Exchange (ETDEWEB)
Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)
2009-07-01
The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
Hexographic Method of Complex Town-Planning Terrain Estimate
Khudyakov, A. Ju
2017-11-01
The article deals with the vital problem of complex town-planning analysis based on the “hexographic” graphic-analytic method, makes a comparison with conventional terrain estimate methods and contains examples of the method's application. It discloses a procedure for the author's estimate of restrictions and the building of a mathematical model which reflects not only conventional town-planning restrictions, but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory's potential, and an unlimited number of estimated factors can be used. The method can be applied to the integrated assessment of urban areas, as well as to preliminary evaluation of a territory's commercial attractiveness in the preparation of investment projects. The technique results in simple, informative graphics whose interpretation is straightforward for experts, while the results are also readily perceived by readers without professional preparation. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographic analysis in urban and architectural practice.
Methods of Scientific Research: Teaching Scientific Creativity at Scale
Robbins, Dennis; Ford, K. E. Saavik
2016-01-01
We present a scaling-up plan for AstroComNYC's Methods of Scientific Research (MSR), a course designed to improve undergraduate students' understanding of science practices. The course format and goals, notably the open-ended, hands-on, investigative nature of the curriculum, are reviewed. We discuss how the course's interactive pedagogical techniques empower students to learn creativity within the context of experimental design and control-of-variables thinking. To date the course has been offered to a limited number of students in specific programs. The goal of broadly implementing MSR is to reach more students early in their education, with the specific purpose of supporting and improving retention of students pursuing STEM careers. However, we also discuss challenges in preserving the effectiveness of the teaching and learning experience at scale.
Multi-scale seismic tomography of the Merapi-Merbabu volcanic complex, Indonesia
Mujid Abdullah, Nur; Valette, Bernard; Potin, Bertrand; Ramdhan, Mohamad
2017-04-01
The Merapi-Merbabu volcanic complex is the most active volcano located on Java Island, Indonesia, where the Indian plate subducts beneath the Eurasian plate. We present a preliminary multi-scale seismic tomography study of the substructures of the volcanic complex. The main objective of our study is to image the feeding paths of the volcanic complex at an intermediate scale by using the data from the dense network (about 5 km spacing) constituted by 53 stations of the French-Indonesian DOMERAPI experiment, complemented by the data of the German-Indonesian MERAMEX project (134 stations) and of the Indonesia Tsunami Early Warning System (InaTEWS) stations located in the vicinity of the complex. The inversion was performed using the INSIGHT algorithm, which follows a non-linear least-squares approach based on a stochastic description of data and model. In total, 1883 events and 41846 phases (26647 P and 15199 S) have been processed, and a two-scale approach was adopted. The model obtained at regional scale is consistent with previous studies. We selected the most reliable regional model as a prior model for the local tomography performed with a variant of the INSIGHT code. The algorithm of this code is based on the fact that inverting differences of data while transporting the errors in probability is equivalent to inverting the initial data while introducing specific correlation terms in the data covariance matrix. The local tomography provides images of the substructure of the volcanic complex with sufficiently good resolution to allow identification of a probable magma chamber at a depth of about 20 km.
BOX-COX REGRESSION METHOD IN TIME SCALING
Directory of Open Access Journals (Sweden)
ATİLLA GÖKTAŞ
2013-06-01
Full Text Available The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and the error term of the linear regression model do not satisfy the continuity and normality assumptions. We discuss how to obtain the smallest mean square error by selecting the optimum power transformation λj, j = 1, 2, ..., k, of Y. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of using the Box-Cox regression method are discussed for differentiation and differential analysis of the time-scale concept.
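The λ-selection step can be sketched numerically. This is an illustrative grid search, not the paper's procedure: the geometric-mean normalization (the standard Jacobian correction, so residual errors are comparable across λ), the simple-linear-fit criterion and the λ grid are all assumptions.

```python
import numpy as np

def boxcox_norm(y, lam):
    """Geometric-mean-normalised Box-Cox transform, so that mean square
    errors of fits are comparable across different lambda values."""
    gm = np.exp(np.log(y).mean())
    if lam == 0.0:
        return gm * np.log(y)
    return (y**lam - 1.0) / (lam * gm**(lam - 1.0))

def best_lambda(x, y, grid=(-1.0, -0.5, 0.0, 0.5, 1.0, 2.0)):
    """Pick the lambda whose transformed response yields the smallest
    mean square error in a simple linear fit (illustrative criterion)."""
    def mse(lam):
        z = boxcox_norm(y, lam)
        resid = z - np.polyval(np.polyfit(x, z, 1), x)
        return np.mean(resid**2)
    return min(grid, key=mse)
```

For a response that is exactly log-linear in the regressor, the search correctly lands on the logarithmic case λ = 0.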
International Nuclear Information System (INIS)
Sig Drellack, Lance Prothro
2007-01-01
simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions
International Nuclear Information System (INIS)
Seguin, B.; Courault, D.; Guerif, M.
1994-01-01
Remotely sensed surface temperatures have proven useful for monitoring evapotranspiration (ET) rates and crop water use because of their direct relationship with sensible and latent energy exchange processes. Procedures for using the thermal infrared (IR) obtained with hand-held radiometers deployed at ground level are now well established and even routine for many agricultural research and management purposes. The availability of IR from meteorological satellites at scales from 1 km (NOAA-AVHRR) to 5 km (METEOSAT) permits extension of local, ground-based approaches to larger scale crop monitoring programs. Regional observations of surface minus air temperature (i.e., the stress degree day) and remote estimates of daily ET were derived from satellite data over sites in France, the Sahel, and North Africa and summarized here. Results confirm that similar approaches can be applied at local and regional scales despite differences in pixel size and heterogeneity. This article analyzes methods for obtaining these data and outlines the potential utility of satellite data for operational use at the regional scale. (author)
Optimization of a method for preparing solid complexes of essential clove oil with β-cyclodextrins.
Hernández-Sánchez, Pilar; López-Miranda, Santiago; Guardiola, Lucía; Serrano-Martínez, Ana; Gabaldón, José Antonio; Nuñez-Delicado, Estrella
2017-01-01
Clove oil (CO) is an aromatic oily liquid used in the food, cosmetics and pharmaceutical industries for its functional properties. However, its disadvantages of pungent taste, volatility, light sensitivity and poor water solubility can be solved by applying microencapsulation or complexation techniques. Essential CO was successfully solubilized in aqueous solution by forming inclusion complexes with β-cyclodextrins (β-CDs). Moreover, phase solubility studies demonstrated that essential CO also forms insoluble complexes with β-CDs. Based on these results, essential CO-β-CD solid complexes were prepared by the novel approach of microwave irradiation (MWI), followed by three different drying methods: vacuum oven drying (VO), freeze-drying (FD) or spray-drying (SD). FD was the best option for drying the CO-β-CD solid complexes, followed by VO and SD. MWI can be used efficiently to prepare essential CO-β-CD complexes with good yield on an industrial scale. © 2016 Society of Chemical Industry.
Analysis and application of classification methods of complex carbonate reservoirs
Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei
2018-06-01
There are abundant carbonate reservoirs from the Cenozoic to Mesozoic era in the Middle East. Due to variations in the sedimentary environment and diagenetic processes of carbonate reservoirs, several porosity types coexist in them. As a result of the complex lithologies and pore types, as well as the impact of microfractures, the pore structure is very complicated and it is difficult to accurately calculate the reservoir parameters. In order to accurately evaluate carbonate reservoirs, classification methods based on capillary pressure curves and on flow units are analyzed, building on the pore structure evaluation of carbonate reservoirs. Although carbonate reservoirs can be classified based on capillary pressure curves, the resulting relationship between porosity and permeability after classification is not ideal. On the basis of flow units, a high-precision functional relationship between porosity and permeability after classification can be established, so the carbonate reservoirs can be quantitatively evaluated based on the classification of flow units. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD. Similarly, the average absolute error of calculated permeability of limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately. Therefore, characterizing pore structures and classifying reservoir types is very important for the accurate evaluation of complex carbonate reservoirs in the Middle East.
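Flow-unit classification is commonly based on the flow zone indicator (FZI) of Amaefule et al.; the sketch below shows that standard formula as one plausible realization (the authors' exact classification procedure may differ).

```python
import numpy as np

def flow_zone_indicator(k_md, phi):
    """Flow zone indicator (in microns) from permeability in mD and
    fractional porosity: FZI = RQI / phi_z, where
    RQI = 0.0314 * sqrt(k / phi) is the reservoir quality index and
    phi_z = phi / (1 - phi) is the normalised porosity."""
    rqi = 0.0314 * np.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z
```

Samples with similar FZI are grouped into one hydraulic flow unit, and within each unit a tight porosity-permeability regression can then be fitted, which is how classification by flow units reduces the permeability prediction error quoted above.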
Experimental methods for laboratory-scale ensilage of lignocellulosic biomass
International Nuclear Information System (INIS)
Tanjore, Deepti; Richard, Tom L.; Marshall, Megan N.
2012-01-01
Anaerobic fermentation is a potential storage method for lignocellulosic biomass in biofuel production processes. Since biomass is seasonally harvested, stocks are often dried or frozen at laboratory scale prior to fermentation experiments. Such treatments prior to fermentation studies cause irreversible changes in the plant cells, influencing the initial state of biomass and thereby the progression of the fermentation process itself. This study investigated the effects of drying, refrigeration, and freezing relative to freshly harvested corn stover in lab-scale ensilage studies. Particle sizes, as well as post-ensilage drying temperatures for compositional analysis, were tested to identify the appropriate sample processing methods. After 21 days of ensilage, the lowest pH value (3.73 ± 0.03), lowest dry matter loss (4.28 ± 0.26 g·100 g⁻¹ DM), and highest water-soluble carbohydrate (WSC) concentrations (7.73 ± 0.26 g·100 g⁻¹ DM) were observed in the control biomass (stover ensiled within 12 h of harvest without any treatments). The WSC concentration was significantly reduced in samples refrigerated for 7 days prior to ensilage (3.86 ± 0.49 g·100 g⁻¹ DM). However, biomass frozen prior to ensilage produced statistically similar results to the fresh biomass control, especially in treatments with cell wall degrading enzymes. Grinding to decrease particle size reduced the variance amongst replicates for pH values of individual reactors to a minor extent. Drying biomass prior to extraction of WSCs resulted in degradation of the carbohydrates and a reduced estimate of their concentrations. The methods developed in this study can be used to improve ensilage experiments and thereby help in developing ensilage as a storage method for biofuel production. -- Highlights: ► Laboratory-scale methods to assess the influence of ensilage on biofuel production. ► Drying, freezing, and refrigeration of biomass influenced microbial fermentation. ► Freshly ensiled stover exhibited
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark with a fixed number of iterations (iterative) and the operator-splitting (non-iterative) method, is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalance forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
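For a linear SDOF substructure, the implicit Newmark scheme discussed above (average acceleration, γ = 1/2 and β = 1/4) can be written in the standard incremental textbook form. This is a generic sketch, not the study's hybrid-simulation code; a nonlinear hybrid test would wrap the stiffness update inside the fixed-number iteration loop.

```python
import numpy as np

def newmark_sdof(m, c, k, p, u0, v0, dt, gamma=0.5, beta=0.25):
    """Implicit Newmark time stepping (average acceleration for the default
    gamma = 1/2, beta = 1/4) for a *linear* SDOF system m*u'' + c*u' + k*u = p(t),
    in incremental form. Returns displacement, velocity and acceleration histories."""
    n = len(p)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (p[0] - c * v0 - k * u0) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)   # effective stiffness
    for i in range(n - 1):
        dp_eff = ((p[i + 1] - p[i])
                  + (m / (beta * dt) + gamma * c / beta) * v[i]
                  + (m / (2 * beta) + dt * c * (gamma / (2 * beta) - 1.0)) * a[i])
        du = dp_eff / keff
        dv = (gamma / (beta * dt)) * du - (gamma / beta) * v[i] \
             + dt * (1.0 - gamma / (2 * beta)) * a[i]
        da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a
```

With γ = 1/2 the scheme introduces no numerical damping, so an undamped free vibration returns to its initial displacement after one period up to a small period-elongation error.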
Interpreting Popov criteria in Lur'e systems with complex scaling stability analysis
Zhou, J.
2018-06-01
The paper presents a novel frequency-domain interpretation of Popov criteria for absolute stability in Lur'e systems by means of what we call complex scaling stability analysis. The complex scaling technique is developed for exponential/asymptotic stability in LTI feedback systems and dispenses with open-loop pole distribution analysis, contour/locus orientation and prior frequency sweeping. Exploiting the technique to alternatively reveal positive realness of transfer functions, the re-interpretation of Popov criteria is explicated. More specifically, the suggested frequency-domain stability conditions are conformable in both the scalar and multivariable cases, and can be implemented either graphically with locus plotting or numerically without it; in particular, the latter is suitable as a design tool with auxiliary parameter freedom. The interpretation also reveals further frequency-domain facts about Lur'e systems. Numerical examples are included to illustrate the main results.
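The classical Popov condition that the paper reinterprets, the existence of q ≥ 0 such that Re[(1 + jωq)G(jω)] + 1/k > 0 for all ω, can be checked numerically on a frequency grid. The transfer function, sector bound and grid below are illustrative assumptions, and a sampled check is of course a numerical indication rather than a proof.

```python
import numpy as np

def popov_holds(num, den, k, q, omegas):
    """Sample the Popov inequality Re[(1 + j*w*q) * G(jw)] + 1/k > 0 on a
    frequency grid. G(s) = num(s)/den(s) must be open-loop stable for the
    classical criterion to apply; num/den are polynomial coefficient lists."""
    s = 1j * omegas
    G = np.polyval(num, s) / np.polyval(den, s)
    return bool(np.all(((1.0 + s * q) * G).real + 1.0 / k > 0.0))
```

For the lightly damped example G(s) = 1/(s² + 0.1s + 1) in sector [0, 10], the Popov slope q = 10 satisfies the inequality everywhere, whereas q = 0 (the circle-criterion special case) fails near the resonance, illustrating the extra freedom the Popov multiplier provides.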
Studies on combined model based on functional objectives of large scale complex engineering
Yuting, Wang; Jingchun, Feng; Jiabao, Sun
2018-03-01
Since large-scale complex engineering includes various functions, each of which is realized through the completion of one or more projects, the combined projects affecting each function should be identified. Based on the types of project portfolio, the relationship between projects and their functional objectives was analyzed. On that premise, portfolio techniques based on the functional objectives of projects were introduced, and the principles of such portfolio techniques were studied and formulated. In addition, the processes of combining projects were constructed. With the help of portfolio techniques based on the functional objectives of projects, our research findings lay a good foundation for the portfolio management of large-scale complex engineering.
Calibration of a complex activated sludge model for the full-scale wastewater treatment plant
Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw
2011-01-01
In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In the steady-state and dynamic calibrations, successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that u...
Multi-scale complexity analysis of muscle coactivation during gait in children with cerebral palsy
Directory of Open Access Journals (Sweden)
Wen eTao
2015-07-01
Full Text Available The objective of this study is to characterize the complexity of lower-extremity muscle coactivation and coordination during gait in children with cerebral palsy (CP), children with typical development (TD) and healthy adults, by applying recently developed multivariate multi-scale entropy (MMSE) analysis to surface EMG signals. Eleven CP children (CP group), eight TD children and seven healthy adults (considered together as the control group) were asked to walk while surface EMG signals were collected from 5 thigh muscles and 3 lower-leg muscles on each leg (16 EMG channels in total). The 16-channel surface EMG data, recorded during a series of consecutive gait cycles, were simultaneously processed by multivariate empirical mode decomposition (MEMD) to generate fully aligned data scales for subsequent MMSE analysis. In order to conduct an extensive examination of muscle coactivation complexity using the MEMD-enhanced MMSE, 14 data analysis schemes were designed by varying partial muscle combinations and time durations of data segments. Both TD children and healthy adults showed almost consistent MMSE curves over multiple scales for all 14 schemes, without any significant difference (p > 0.09). However, considerable diversity in the MMSE curves was observed in the CP group when compared with the control group. There appear to be diverse neuropathological processes in CP that may affect the dynamical complexity of muscle coactivation and coordination during gait. The abnormal complexity patterns emerging in the CP group can be attributed to different factors such as motor control impairments, loss of muscle couplings, and spasticity or paralysis in individual muscles. All these findings expand our knowledge of the neuropathology of CP from a novel point of view of muscle coactivation complexity, also indicating the potential to derive a quantitative index for assessing muscle activation characteristics as well as motor function in CP.
Parameter and State Estimation of Large-Scale Complex Systems Using Python Tools
Directory of Open Access Journals (Sweden)
M. Anushka S. Perera
2015-07-01
Full Text Available This paper discusses topics related to automating parameter, disturbance and state estimation analysis of large-scale complex nonlinear dynamic systems using free programming tools. For large-scale complex systems, before implementing any state estimator, the system should be analyzed for structural observability, and the structural observability analysis can be automated using Modelica and Python. As a result of structural observability analysis, the system may be decomposed into subsystems where some of them may be observable --- with respect to parameters, disturbances, and states --- while some may not. The state estimation process is carried out for the observable subsystems, and the optimum number of additional measurements is prescribed for unobservable subsystems to make them observable. In this paper, an industrial case study is considered: the copper production process at Glencore Nikkelverk, Kristiansand, Norway. The copper production process is a large-scale complex system. It is shown how to implement various state estimators, in Python, to estimate parameters and disturbances, in addition to states, based on available measurements.
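The paper's structural observability analysis works on the system's graph structure via Modelica and Python; for a fixed numerical realization, the closely related classical Kalman rank test is a compact sketch of what "observable with respect to states" means (toy matrices below, not the copper-plant model):

```python
import numpy as np

def is_observable(A, C):
    """Kalman rank test: the pair (A, C) is observable iff the
    observability matrix [C; C@A; ...; C@A^(n-1)] has full rank n."""
    A, C = np.atleast_2d(A), np.atleast_2d(C)
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.linalg.matrix_rank(np.vstack(blocks)) == n
```

For the double-integrator-like pair below, measuring the first state sees the second through the coupling, while measuring only the second state leaves the first unobservable, which is exactly the situation where the paper prescribes an additional measurement.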
Optimization of large-scale industrial systems : an emerging method
Energy Technology Data Exchange (ETDEWEB)
Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre
2006-07-01
This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.
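A minimal sketch of the evolutionary-algorithm core such methodologies build on. This is a toy (mu + lambda) evolution strategy minimizing a sphere function, not the paper's E³-ISO methodology with decomposition and constraint handling; population size, mutation width and the objective are all illustrative choices.

```python
import random

def evolve(fitness, dim, pop_size=20, generations=100, sigma=0.3, seed=1):
    """Minimal (mu + lambda) evolution strategy: Gaussian mutation of every
    parent, then truncation selection over parents plus offspring."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [[x + rng.gauss(0, sigma) for x in parent]
                     for parent in pop]
        # Elitist selection: keep the best pop_size individuals overall
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return pop[0]

# Sphere function as a toy objective; real E3-ISO objectives are
# multi-objective (energy, economy, ecology) and constrained.
best = evolve(lambda x: sum(v * v for v in x), dim=3)
```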
Complex Quantum Network Manifolds in Dimension d > 2 are Scale-Free
Bianconi, Ginestra; Rahmede, Christoph
2015-09-01
In quantum gravity, several approaches have been proposed for the quantum description of discrete geometries. These theoretical frameworks include loop quantum gravity, causal dynamical triangulations, causal sets, quantum graphity, and energetic spin networks. Most of these approaches describe discrete spaces as homogeneous network manifolds. Here we define Complex Quantum Network Manifolds (CQNM) describing the evolution of quantum network states, constructed from growing simplicial complexes of dimension d. We show that in d = 2 CQNM are homogeneous networks, while for d > 2 they are scale-free, i.e. they are characterized by large inhomogeneities of degree like most complex networks. From the self-organized evolution of CQNM, quantum statistics emerge spontaneously. We define the generalized degrees associated with the δ-faces of the d-dimensional CQNM, and we show that the statistics of these generalized degrees can follow either Fermi-Dirac, Boltzmann or Bose-Einstein distributions depending on the dimension of the δ-faces.
Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems
Razzak, M. A.; Alam, M. Z.; Sharif, M. N.
2018-03-01
In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the solution procedure are easy and straightforward. The classical multiple time scale method (MS) and the multiple scales Lindstedt-Poincare method (MSLP) do not give the desired results for forced vibration systems with strong damping. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and improve on other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is a surprising 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.
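The "numerical solution (considered to be exact)" used as the benchmark in such comparisons is typically obtained by direct integration. A sketch, assuming a damped forced Duffing-type oscillator x'' + 2μx' + x + εx³ = F cos(Ωt) with purely illustrative coefficients (the paper's actual example equations and parameter values are not reproduced here):

```python
import math

def rk4_duffing(mu=0.05, eps=1.0, F=0.5, omega=1.2, x0=1.5, v0=0.0,
                dt=0.001, t_end=50.0):
    """Classical fourth-order Runge-Kutta integration of the damped,
    forced Duffing equation x'' + 2*mu*x' + x + eps*x**3 = F*cos(omega*t).
    Returns the sampled displacement history."""
    def f(t, x, v):
        # First-order form: x' = v, v' = forcing - damping - stiffness
        return v, F * math.cos(omega * t) - 2 * mu * v - x - eps * x ** 3

    t, x, v, xs = 0.0, x0, v0, [x0]
    while t < t_end:
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = f(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = f(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        xs.append(x)
    return xs

trajectory = rk4_duffing()
```

An approximate analytical frequency from MTS or MSLP would then be validated against the dominant period of such a trajectory.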
A Porosity Method to Describe Complex 3D-Structures Theory and Application to an Explosion
Directory of Open Access Journals (Sweden)
M.-F. Robbe
2006-01-01
Full Text Available A theoretical method was developed to describe the influence of structures of complex shape on a transient fluid flow without meshing the structures. Structures are considered as solid pores inside the fluid and act as obstacles for the flow. The method was specifically adapted to fast transient cases. The porosity method was applied to the simulation of a Hypothetical Core Disruptive Accident in a small-scale replica of a Liquid Metal Fast Breeder Reactor. A 2D-axisymmetrical simulation of the MARS test was performed with the EUROPLEXUS code. Whereas the central internal structures of the mock-up could be described with a classical shell model, the influence of the 3D peripheral structures was taken into account with the porosity method.
Complex transformation method and resonances in one-body quantum systems
International Nuclear Information System (INIS)
Sigal, I.M.
1984-01-01
We develop a new spectral deformation method to treat the resonance problem in one-body systems. Our result on the meromorphic continuation of matrix elements of the resolvent across the continuous spectrum overlaps considerably with an earlier result of E. Balslev [B], but our method is much simpler and, we believe, more convenient in applications. It is inspired by the local distortion technique of Nuttall-Thomas-Babbitt-Balslev, further developed in [B], but patterned on the complex scaling method of Combes and Balslev. The method is applicable to multicenter problems in which each potential can be represented, roughly speaking, as a sum of exponentially decaying and dilation-analytic, spherically symmetric parts
Sensitivity Analysis of a Severe Downslope Windstorm in Complex Terrain: Implications for Forecast Predictability Scales and Targeted Observing Networks
2013-09-01
...observations, linear regression finds the straight line that explains the linear relationship of the sample. This line is given by the equation y = mx + b...
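The least-squares line mentioned in the fragment can be computed in closed form; a minimal sketch with invented sample data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + b (closed-form solution)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y over variance of x
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Points lying exactly on y = 2x + 1
m, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(m, b)  # 2.0 1.0
```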
Complexity methods applied to turbulence in plasma astrophysics
Vlahos, L.; Isliker, H.
2016-09-01
In this review, many of the well-known tools for the analysis of complex systems are used to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organised characteristics of the explosive magnetic energy release, and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models, we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Non-Linear Force-Free (NLFF) magnetic extrapolation numerical code, we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a Cellular Automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
Cerbino, Roberto; Cicuta, Pietro
2017-09-01
Differential dynamic microscopy (DDM) is a technique that exploits optical microscopy to obtain local, multi-scale quantitative information about dynamic samples, in most cases without user intervention. It is proving extremely useful in understanding dynamics in liquid suspensions, soft materials, cells, and tissues. In DDM, image sequences are analyzed via a combination of image differences and spatial Fourier transforms to obtain information equivalent to that obtained by means of light scattering techniques. Compared to light scattering, DDM offers obvious advantages, principally (a) the simplicity of the setup; (b) the possibility of removing static contributions along the optical path; (c) the power of using different microscopy contrast mechanisms simultaneously; and (d) the flexibility of choosing an analysis region, analogous to a scattering volume. For many questions, DDM also has advantages compared to segmentation/tracking approaches and to correlation techniques like particle image velocimetry. The very straightforward DDM approach, originally demonstrated with bright-field microscopy of aqueous colloids, has lately been used to probe a variety of other complex fluids and biological systems with many different imaging methods, including dark-field, differential interference contrast, wide-field, light-sheet, and confocal microscopy. The number of adopting groups is rapidly increasing, and so are the applications. Here, we briefly recall the working principles of DDM, highlight its advantages and limitations, outline recent experimental breakthroughs, and provide a perspective on future challenges and directions. DDM can become a standard primary tool in every laboratory equipped with a microscope, at the very least as a first bias-free automated evaluation of the dynamics in a system.
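The core DDM computation described above (image differences followed by spatial Fourier transforms, averaged over start times) can be sketched as follows. To stay self-contained this toy works on 1D "frames" with a naive DFT; real DDM operates on 2D images with an FFT, and the drifting-sinusoid test data is invented for the demo.

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform (fine for the tiny demo below)."""
    n = len(signal)
    return [sum(signal[k] * cmath.exp(-2j * cmath.pi * q * k / n)
                for k in range(n)) for q in range(n)]

def ddm_structure(frames, tau):
    """Image structure function D(q, tau) = <|FT(I(t+tau) - I(t))|^2>_t,
    here for 1D frames to keep the sketch short."""
    diffs = [[a - b for a, b in zip(frames[t + tau], frames[t])]
             for t in range(len(frames) - tau)]
    n = len(frames[0])
    acc = [0.0] * n
    for diff in diffs:
        for q, c in enumerate(dft(diff)):
            acc[q] += abs(c) ** 2
    return [a / len(diffs) for a in acc]

# Toy data: a sinusoidal pattern drifting one pixel per frame
n = 16
frames = [[math.sin(2 * math.pi * (k - t) / n) for k in range(n)]
          for t in range(20)]
d = ddm_structure(frames, tau=4)
```

For this drifting single-mode pattern, the structure function concentrates at the pattern's spatial frequency (q = 1 and its mirror), which is the signature DDM fits to extract dynamics.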
Complexities and uncertainties in transitioning small-scale coral reef fisheries
Directory of Open Access Journals (Sweden)
Pierre eLeenhardt
2016-05-01
Full Text Available Coral reef fisheries support the development of local and national economies and are the basis of important cultural practices and worldviews. Transitioning economies, human development and environmental stress can harm this livelihood. Here we focus on a transitioning social-ecological system as a case study (Moorea, French Polynesia). We review fishing practices and three decades of effort and landing estimates with the broader goal of informing management. Fishery activities in Moorea are quite challenging to quantify because of the diversity of gears used, the lack of centralized access points or markets, the high participation rate of the population in the fishery, and the overlapping cultural and economic motivations to catch fish. Compounding this challenging diversity, we lack a basic understanding of the complex interplay between the cultural, subsistence, and commercial uses of Moorea's reefs. In Moorea, we found an order-of-magnitude gap between estimates of fishery yield produced by catch monitoring methods (~2 t km⁻² year⁻¹) and estimates produced using consumption or participatory socioeconomic consumer surveys (~24 t km⁻² year⁻¹). Several lines of evidence suggest reef resources may be overexploited, and stakeholders have a diversity of opinions as to whether trends in the stocks are a cause for concern. The reefs, however, remain ecologically resilient. The relative health of the reef is striking given the socio-economic context. Moorea has a relatively high population density, a modern economic system linked into global flows of trade and travel, and a fishery with little remaining traditional or customary management. Other islands in the Pacific that continue to develop economically may have small-scale fisheries that increasingly resemble Moorea's. Therefore, understanding Moorea's reef fisheries may provide insight into their future.
Microscopic methods for the interactions between complex nuclei
International Nuclear Information System (INIS)
Ikeda, Kiyomi; Tamagaki, Ryozo; Saito, Sakae; Horiuchi, Hisashi; Tohsaki-Suzuki, Akihiro.
1978-01-01
Microscopic studies of composite-particle interactions performed in Japan are described in this paper. In chapter 1, a brief historical description of the study is presented. In chapter 2, the theory of the resonating group method (RGM) for describing microscopically the interaction between nuclei (clusters) is reviewed, and its formulation is presented. It is shown that the generator coordinate method (GCM) is useful for the description of the interaction between shell-model clusters, and that the kernels of the RGM are easily obtained from those of the GCM. The inter-cluster interaction can be well described by the orthogonality condition model (OCM). In chapter 3, the calculational procedures for the kernels of the GCM, RGM and OCM and some properties related to their calculation are discussed. The GCM kernels for various types of systems are treated. The RGM kernels are evaluated by the integral transformation of GCM kernels. Problems related to the RGM norm kernel (RGM-NK) are discussed. The projection operator onto the Pauli-allowed states in the OCM is obtained directly from the solution of the eigenvalue problem of the RGM-NK. In chapter 4, the exchange kernels due to antisymmetrization are derived analytically, with symbolic use of computer memory, taking the α + 16O system as a typical example. New algorithms for deriving the generator coordinate (GCM) kernels analytically are presented. In chapter 5, a precise generalization of the Kohn-Hulthén-Kato variational method for the scattering matrix is made for the purpose of the microscopic study of reactions between complex nuclei with many coupled channels. (Kato, T.)
Deep graphs—A general framework to represent and analyze heterogeneous complex systems across scales
Traxl, Dominik; Boers, Niklas; Kurths, Jürgen
2016-06-01
Network theory has proven to be a powerful tool for describing and analyzing systems by modelling the relations between their constituent objects. Particularly in recent years, great progress has been made by augmenting "traditional" network theory to account for the multiplex nature of many networks, multiple types of connections between objects, the time-evolution of networks, networks of networks, and other intricacies. However, existing network representations still lack crucial features needed to serve as a general data analysis tool. These include, most importantly, an explicit association of information with possibly heterogeneous types of objects and relations, and a conclusive representation of the properties of groups of nodes as well as the interactions between such groups on different scales. In this paper, we introduce a collection of definitions resulting in a framework that, on the one hand, entails and unifies existing network representations (e.g., networks of networks and multilayer networks), and on the other hand, generalizes and extends them by incorporating the above features. To implement these features, we first specify the nodes and edges of a finite graph as sets of properties (which are permitted to be arbitrary mathematical objects). Second, the mathematical concept of partition lattices is transferred to network theory in order to demonstrate how partitioning the node and edge set of a graph into supernodes and superedges allows us to aggregate, compute, and allocate information on and between arbitrary groups of nodes. The derived partition lattice of a graph, which we denote a deep graph, constitutes a concise, yet comprehensive representation that enables the expression and analysis of heterogeneous properties, relations, and interactions on all scales of a complex system in a self-contained manner. Furthermore, to be able to utilize existing network-based methods and models, we derive different representations of
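One level of the partition-lattice idea (grouping property-carrying nodes into supernodes and aggregating the edges between groups into superedges) can be sketched as follows. The node properties, grouping key and toy data are all hypothetical, not the paper's framework or API:

```python
from collections import defaultdict

def partition_graph(nodes, edges, key):
    """Aggregate a property graph into supernodes and superedges by
    grouping nodes on the property `key`; superedge weights count the
    underlying edges between groups."""
    supernodes = defaultdict(list)
    for name, props in nodes.items():
        supernodes[props[key]].append(name)
    node_group = {name: props[key] for name, props in nodes.items()}
    superedges = defaultdict(int)
    for u, v in edges:
        superedges[(node_group[u], node_group[v])] += 1
    return dict(supernodes), dict(superedges)

# Hypothetical toy data: stations grouped by region
nodes = {"a": {"region": "north"}, "b": {"region": "north"},
         "c": {"region": "south"}}
edges = [("a", "b"), ("a", "c"), ("b", "c")]
groups, flows = partition_graph(nodes, edges, "region")
print(flows)  # {('north', 'north'): 1, ('north', 'south'): 2}
```

Repeating the grouping at coarser or finer keys yields the different levels of the partition lattice.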
Formal methods applied to industrial complex systems implementation of the B method
Boulanger, Jean-Louis
2014-01-01
This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from
New Models and Methods for the Electroweak Scale
Energy Technology Data Exchange (ETDEWEB)
Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space. Accomplishments include creating new tools for the analysis of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new targeted analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac
Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph
2012-06-22
Filamentous fungi are versatile cell factories widely used for the large-scale production of antibiotics, organic acids, enzymes and other industrially relevant compounds. Industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, the considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but also unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for the characterization of filamentous fungal strains on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates, and fungal strains were revealed. A 2-fold increase in the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two industrial Penicillium chrysogenum candidate strains on complex media, based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods made it possible to maximize the overall industrial objectives of increasing both method throughput and scientific process understanding.
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2017-12-01
Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated due to frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally-increased soil moisture, thus increasing plant growth that in turn subsequently impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long term process studies and the current cold regions literature allows for comparison of process representations and importantly, their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully derive important insights and aid in development of improved process representations.
Psychometric validation of the Italian Rehabilitation Complexity Scale-Extended version 13
Agosti, Maurizio; Merlo, Andrea; Maini, Maurizio; Lombardi, Francesco; Tedeschi, Claudio; Benedetti, Maria Grazia; Basaglia, Nino; Contini, Mara; Nicolotti, Domenico; Brianti, Rodolfo
2017-01-01
In Italy, at present, a well-known problem is the inhomogeneous provision of rehabilitative services, as stressed by the Ministry of Health, which requires appropriate criteria and parameters to plan rehabilitation actions. According to the Italian National Rehabilitation Plan, comorbidity, disability and clinical complexity should be assessed to define the patient’s real needs. However, to date, clinical complexity is still difficult to measure with shared and validated tools. The study aims to psychometrically validate the Italian Rehabilitation Complexity Scale-Extended v13 (RCS-E v13), in order to meet the guidelines' requirements. An observational multicentre prospective cohort study was carried out, involving 8 intensive rehabilitation facilities of the Emilia-Romagna Region and 1712 in-patients [823 male (48%) and 889 female (52%), mean age 68.34 years (95% CI 67.69–69.00 years)] with neurological, orthopaedic and cardiological problems. The construct and concurrent validity of the RCS-E v13 were confirmed through its correlation to the Barthel Index (disability) and the Cumulative Illness Rating Scale (comorbidity), and to appropriate admission criteria (not yet published), respectively. Furthermore, the factor analysis indicated two different components (“Basic Care or Risk—Equipment” and “Medical—Nursing Needs and Therapy Disciplines”) of the RCS-E v13. In conclusion, the Italian RCS-E v13 appears to be a useful tool to assess clinical complexity in the Italian rehabilitation case-mix, and its psychometric validation may have an important clinical impact by allowing the assessment of rehabilitation needs across all three dimensions (disability, comorbidity and clinical complexity) required by the guidelines, thereby reducing the inhomogeneity of provision. PMID:29045409
Etoile Project : Social Intelligent ICT-System for very large scale education in complex systems
Bourgine, P.; Johnson, J.
2009-04-01
The project will devise new theory and implement new ICT-based methods of delivering high-quality, low-cost postgraduate education to many thousands of people in a scalable way, with the cost of each extra student being negligible: a Socially Intelligent Resource Mining system to gather large volumes of high-quality educational resources from the internet; new methods to deconstruct these to produce a semantically tagged Learning Object Database; a Living Course Ecology to support the creation and maintenance of evolving course materials; systems to deliver courses; and a ‘socially intelligent assessment system’. The system will be tested on one to ten thousand postgraduate students in Europe working towards the Complex Systems Society's title of European PhD in Complex Systems. Étoile will have a very high impact both scientifically and socially through (i) the provision of new scalable ICT-based methods for providing very low cost scientific education, (ii) the creation of new mathematical and statistical theory for the multiscale dynamics of complex systems, (iii) the provision of a working example of adaptation and emergence in complex socio-technical systems, and (iv) a major educational contribution to European complex systems science and its applications.
Complex of radioanalytical methods for radioecological study of STS
International Nuclear Information System (INIS)
Artemev, O.I.; Larin, V.N.; Ptitskaya, L.D.; Smagulova, G.S.
1998-01-01
Today the main task of the Institute of Radiation Safety and Ecology is the assessment of the radioecological situation in the areas of nuclear testing on the territory of the former Semipalatinsk Test Site (STS). The radioecological study begins with field radiometry and environmental sampling, followed by coordinate fixation. This work is performed by the staff of the Radioecology Laboratory, equipped with state-of-the-art dosimetry and radiometry devices. All the devices annually undergo the State Check by the RK Gosstandard Centre in Almaty. Air samples are also collected for the determination of radon content. Environmental samples are measured for total gamma activity in order to dispatch and discard samples with an insufficient level of homogenization. Samples are measured with a gamma radiometry installation containing a NaI(Tl) scintillation detector. The installation background is measured several times every day. The duration of a measurement depends on sample activity. Samples are then measured with alpha and beta radiometers for total alpha and beta activity, which characterizes the radioactive contamination of the sampling locations. Apart from the Radiometry Laboratory, the analytical complex includes the Radiochemistry and Gamma Spectrometry Laboratories. The direct gamma spectral (instrumental) methods in most cases allow sufficiently rapid information to be obtained about the radionuclides present in a sample. The state-of-the-art equipment together with computer technology provides high quantitative and qualitative precision as well as high productivity. One advantage of the method is that samples retain their state after measurement and can be used for repeated measurements or radiochemical reanalyses. The Gamma Spectrometry Laboratory has three state-of-the-art gamma spectral installations consisting of high-resolution semiconductor detectors and equipped with
Petascale Many Body Methods for Complex Correlated Systems
Pruschke, Thomas
2012-02-01
Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and of many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in the efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to turn them into working tools for the growing community.
Number theoretic methods in cryptography complexity lower bounds
Shparlinski, Igor
1999-01-01
The book introduces new techniques which imply rigorous lower bounds on the complexity of some number theoretic and cryptographic problems. These methods and techniques are based on bounds of character sums and on the numbers of solutions of some polynomial equations over finite fields and residue rings. It also contains a number of open problems and proposals for further research. We obtain several lower bounds, exponential in terms of log p, on the degrees and orders of polynomials, algebraic functions, Boolean functions, and linear recurring sequences coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as p^{1/2+ε}). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case of d = 2 is of special interest since it corresponds to the representation of the rightmost bit of the discrete logarithm and defines whether the argument is a quadratic...
Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity
Zhang, Jihui; Xu, Junqin
Supply chains are a special kind of complex network. Their complexity and uncertainty make them very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain’s complexity and its efficiency, supply chain complexity management becomes a major challenge of today’s business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a ‘Supply Chain Network Analysis’ (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and subsequently investigated by network analysis. The result of this study shows that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.
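As a minimal illustration of entropy-based complexity quantification of flows, the sketch below computes the plain Shannon entropy of a flow distribution. The paper's fuzzy entropy measure is more elaborate; this is only the crisp core idea, with invented flow values:

```python
import math

def flow_entropy(flows):
    """Shannon entropy (in bits) of a distribution of inter-sector flows:
    evenly spread flows score high, concentrated flows score low."""
    total = sum(flows)
    probs = [f / total for f in flows if f > 0]
    return -sum(p * math.log2(p) for p in probs)

# Evenly spread flows read as maximally 'complex' ...
print(flow_entropy([25, 25, 25, 25]))  # 2.0
# ... while flows dominated by one link do not (about 0.24 bits)
print(flow_entropy([97, 1, 1, 1]))
```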
French, Jon; Payo, Andres; Murray, Brad; Orford, Julian; Eliot, Matt; Cowell, Peter
2016-03-01
Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called 'reduced complexity' models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes
Directory of Open Access Journals (Sweden)
Samyadip Chakraborty
2015-12-01
Concepts like supply chain network complexity, interdependence and risk assessment have been prominently discussed, directly and indirectly, in the management literature over the past decades, and plenty of frameworks and conceptual prescriptive research works have been published contributing towards building the body of knowledge. However, previous studies often lacked quantification of the findings. Consequently, the need for suitable scales becomes prominent for measuring those constructs to empirically support the conceptualized relationships. This paper expands the understanding of supply chain network complexity (SCNC) and also highlights its implications on interdependence (ID) between the actors and risk assessment (RAS) in transaction relationships. In doing so, SCNC and RAS are operationalized to understand how SCNC affects interdependence and risk assessment between the actors in the supply chain network. The contribution of this study lies in developing and validating multi-item scales for these constructs and empirically establishing the hypothesized relationships in the Indian context, based on firm data collected using a survey-based questionnaire. The methodology followed included structural equation modeling. The study findings indicate that SCNC had a significant relationship with interdependence, which in turn significantly affected risk assessment. This study carries both academic and managerial implications and provides an empirically supported framework linking network complexity with the two key variables (ID and RAS) playing crucial roles in managerial decision making. This study contributes to the body of knowledge and aims at guiding managers in better understanding transaction relationships.
Simulating Engineering Flows through Complex Porous Media via the Lattice Boltzmann Method
Directory of Open Access Journals (Sweden)
Vesselin Krassimirov Krastev
2018-03-01
In this paper, recent achievements in the application of the lattice Boltzmann method (LBM) to complex fluid flows are reported. More specifically, we focus on flows through reactive porous media, such as the flow through the substrate of a selective catalytic reactor (SCR) for the reduction of gaseous pollutants in the automotive field; pulsed-flow analysis through heterogeneous catalyst architectures; and transport and electro-chemical phenomena in microbial fuel cells (MFC) for novel waste-to-energy applications. To the authors’ knowledge, this is the first known application of LBM modeling to the study of MFCs, which represents by itself a highly innovative and challenging research area. The results discussed here essentially confirm the capabilities of the LBM approach as a flexible and accurate computational tool for the simulation of complex multi-physics phenomena of scientific and technological interest, across physical scales.
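The collide-and-stream cycle at the heart of the LBM can be shown in a few lines. The sketch below is a one-dimensional D1Q3 BGK scheme for pure diffusion, a deliberately minimal toy, not the reactive porous-media solver discussed in the paper; lattice weights, relaxation time and the initial pulse are illustrative choices.

```python
import numpy as np

w = np.array([2/3, 1/6, 1/6])      # D1Q3 lattice weights
c = np.array([0, 1, -1])           # lattice velocities
tau = 0.8                          # BGK relaxation time

n = 100
rho = np.ones(n)
rho[n // 2] = 2.0                  # density field with an initial pulse
f = w[:, None] * rho[None, :]      # start from local equilibrium

for _ in range(200):
    feq = w[:, None] * rho[None, :]
    f += (feq - f) / tau                   # collision step
    for i, ci in enumerate(c):             # streaming step (periodic)
        f[i] = np.roll(f[i], ci)
    rho = f.sum(axis=0)

print(round(rho.sum(), 6))  # total mass is conserved by both steps
```

Because the BGK collision conserves the zeroth moment and streaming merely shifts populations, the total density stays exactly at its initial value while the pulse diffuses, which is a standard first sanity check for any LBM implementation.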
Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration
Gasda, S. E.
2009-04-23
Large-scale implementation of geological CO2 sequestration requires quantification of risk and leakage potential. One potentially important leakage pathway for the injected CO2 involves existing oil and gas wells. Wells are particularly important in North America, where more than a century of drilling has created millions of oil and gas wells. Models of CO2 injection and leakage will involve large uncertainties in parameters associated with wells, and therefore a probabilistic framework is required. These models must be able to capture both the large-scale CO2 plume associated with the injection and the small-scale leakage problem associated with localized flow along wells. Within a typical simulation domain, many hundreds of wells may exist. One effective modeling strategy combines both numerical and analytical models with a specific set of simplifying assumptions to produce an efficient numerical-analytical hybrid model. The model solves a set of governing equations derived by vertical averaging with assumptions of a macroscopic sharp interface and vertical equilibrium. These equations are solved numerically on a relatively coarse grid, with an analytical model embedded to solve for wellbore flow occurring at the sub-gridblock scale. This vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid refinement for sub-scale features. Through a series of benchmark problems, we show that VESA compares well with traditional numerical simulations and with a semi-analytical model which applies to appropriately simple systems. We believe that the VESA model provides the necessary accuracy and efficiency for applications of risk analysis in many CO2 sequestration problems. © 2009 Springer Science+Business Media B.V.
Knowledge based method for solving complexity in design problems
Vermeulen, B.
2007-01-01
The process of designing aircraft systems is becoming more and more complex, due to an increasing number of requirements. Moreover, the knowledge on how to solve these complex design problems is becoming less readily available, because of a decrease in availability of intellectual resources and reduced
Hoffman, Karen; West, Anita; Nott, Philippa; Cole, Elaine; Playford, Diane; Liu, Clarence; Brohi, Karim
2013-01-01
Injury severity, disability and care dependency are frequently used as surrogate measures for rehabilitation requirements following trauma. The true rehabilitation needs of patients may be different, but there are no validated tools for the measurement of rehabilitation complexity in acute trauma care. The aim of the study was to evaluate the potential utility of the Rehabilitation Complexity Scale (RCS) version 2 in measuring acute rehabilitation needs in trauma patients. A prospective observational study of 103 patients with traumatic injuries in a Major Trauma Centre. Rehabilitation complexity was measured using the RCS and disability was measured using the Barthel Index. Demographic information and injury characteristics were obtained from the trauma database. The RCS was closely correlated with injury severity (r=0.69, p<0.001) and the Barthel Index (r=0.91, p<0.001). However, the Barthel Index was poor at discriminating between patients' rehabilitation needs, especially for patients with higher injury severities. Of 58 patients classified as 'very dependent' by the Barthel Index, 21 (36%) had low or moderate rehabilitation complexity. The RCS correlated with acute hospital length of stay (r=0.64, p<0.001), and patients with a low RCS were more likely to be discharged home. The Barthel Index had a flooring effect (56% of patients classified as very dependent were discharged home) and lacked discrimination despite close statistical correlation. The RCS outperformed the ISS and the Barthel Index in its ability to identify rehabilitation requirements in relation to injury severity, rehabilitation complexity, length of stay and discharge destination. The RCS is potentially a feasible and useful tool for the assessment of rehabilitation complexity in acute trauma care by providing specific measurement of patients' rehabilitation requirements. A larger longitudinal study is needed to evaluate the RCS in the assessment of patient need, service provision and trauma system performance.
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
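The Volterra functional power series that underlies the IO model can be sketched at second order in a few lines. The kernels below are invented for illustration; the paper's model identifies its kernels (up to third order) from the detailed mechanistic synapse simulation.

```python
import numpy as np

def volterra2(x, k1, k2):
    """Discrete second-order Volterra series:
    y(n) = sum_i k1[i] x(n-i) + sum_{i,j} k2[i,j] x(n-i) x(n-j)."""
    M = len(k1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        # vector of past inputs x(n), x(n-1), ..., zero before t=0
        past = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])
        y[n] = k1 @ past + past @ k2 @ past
    return y

k1 = np.array([0.5, 0.25])                 # linear (first-order) kernel
k2 = np.array([[0.1, 0.0], [0.0, -0.05]])  # quadratic kernel (toy values)
x = np.array([1.0, 0.0, 1.0])              # input spike train amplitudes
print(volterra2(x, k1, k2))                # → [0.6 0.2 0.6]
```

The quadratic kernel is what lets such a model express nonlinear interactions between pairs of past inputs, which simple exponential synapse representations cannot capture.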
Pavlov, A. N.; Pavlova, O. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Semyachkina-Glushkovskaya, O. V.; Kurths, J.
2018-01-01
The scaling properties of complex processes may be highly influenced by the presence of various artifacts in experimental recordings. Their removal produces changes in the singularity spectra and the Hölder exponents as compared with the original artifact-free data, and these changes are significantly different for positively correlated and anti-correlated signals. While signals with power-law correlations are nearly insensitive to the loss of significant parts of data, the removal of fragments of anti-correlated signals is more crucial for further data analysis. In this work, we study the ability to characterize scaling features of chaotic and stochastic processes with distinct correlation properties using a wavelet-based multifractal analysis, and discuss differences between the effect of missed data for synchronous and asynchronous oscillatory regimes. We show that even an extreme data loss allows characterizing physiological processes such as cerebral blood flow dynamics.
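The distinction between positively correlated and anti-correlated signals is usually quantified by a scaling exponent. The sketch below estimates it with first-order detrended fluctuation analysis (DFA-1), a simpler monofractal cousin of the wavelet-based multifractal formalism the paper actually uses; scales and the test signal are illustrative.

```python
import numpy as np

def dfa(x, scales):
    """DFA-1 scaling exponent alpha: alpha < 0.5 indicates
    anti-correlations, alpha = 0.5 uncorrelated noise, alpha > 0.5
    persistent power-law correlations."""
    y = np.cumsum(x - np.mean(x))              # profile of the signal
    F = []
    for s in scales:
        nseg = len(y) // s
        sq = []
        for k in range(nseg):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # local linear detrending
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    # scaling exponent from log F(s) ~ alpha * log s
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
alpha = dfa(rng.standard_normal(4096), [8, 16, 32, 64, 128, 256])
print(round(alpha, 2))   # close to 0.5 for uncorrelated noise
```

Cutting fragments out of the series before the profile is computed shifts this exponent, which is precisely why artifact removal is more damaging for anti-correlated data, as the abstract notes.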
Complexity Index as Applied to Magnetic Resonance: Study Based on a Scale of Relative Units
International Nuclear Information System (INIS)
Capelastegui, A.; Villanua, J.
2003-01-01
To analyze the merit and repercussions of measuring magnetic resonance (MR) activity in units of radiological activity, and of using the complexity index (CI) as an activity indicator. We studied the MR activity of Osatek, Inc. during an 8-year period (1994-2001). We measured this activity both in number of MR procedures performed and in units of radiological activity, such units being based on the scale of relative units published in the Radiological Services Administration Guidelines of the Spanish Society of Medical Radiology. We calculated the annual complexity index, this being a quotient between the number of MR procedures performed and the corresponding value in units of radiological activity. We also analyzed factors that can have an impact on the CI: type of exploration and strength of the equipment's magnetic field. The CI stayed practically stable during the first 4 years of the study, while it increased during the second 4 years. There is a direct relationship between this increase and the percentage of explorations that we term complex (basically, body- and angio-MR). The increasing complexity of MR studies in recent years is evident from a consideration of the CI. MR productivity is more realistically expressed in units of radiological activity than in the number of procedures performed by any one center; this also allows for making external comparisons. The CI is a useful indicator that can be utilized as an administrative tool. (Author) 13 refs
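The arithmetic behind such an index is straightforward. The toy below scores a caseload both as a raw procedure count and in relative activity units and takes their quotient; since the abstract reports the CI rising as the case mix shifts toward complex explorations, the index is computed here as relative units per procedure. The unit weights and caseloads are invented for illustration, not taken from the SERAM relative-unit scale.

```python
# Hypothetical relative-activity weights per exploration type
UNIT_WEIGHTS = {"brain": 1.0, "spine": 1.1, "body": 1.8, "angio": 2.0}

def complexity_index(caseload):
    """caseload: mapping of exploration type -> number of procedures.
    Returns relative activity units per procedure performed."""
    procedures = sum(caseload.values())
    activity_units = sum(UNIT_WEIGHTS[k] * n for k, n in caseload.items())
    return activity_units / procedures

year1 = {"brain": 700, "spine": 200, "body": 50, "angio": 50}
year2 = {"brain": 500, "spine": 200, "body": 150, "angio": 150}
print(round(complexity_index(year1), 2), round(complexity_index(year2), 2))
# → 1.11 1.29: same procedure count, higher index for the complex mix
```

The example makes the abstract's point concrete: two years with identical procedure counts can represent very different workloads, which only the activity-unit measure reveals.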
Control protocol: large scale implementation at the CERN PS complex - a first assessment
International Nuclear Information System (INIS)
Abie, H.; Benincasa, G.; Coudert, G.; Davydenko, Y.; Dehavay, C.; Gavaggio, R.; Gelato, G.; Heinze, W.; Legras, M.; Lustig, H.; Merard, L.; Pearson, T.; Strubin, P.; Tedesco, J.
1994-01-01
The Control Protocol is a model-based, uniform access procedure from a control system to accelerator equipment. It was proposed at CERN about 5 years ago and prototypes were developed in the following years. More recently, this procedure has been finalized and implemented at a large scale in the PS Complex. More than 300 pieces of equipment are now using this protocol in normal operation and another 300 are under implementation. These include power converters, vacuum systems, beam instrumentation devices, RF equipment, etc. This paper describes how the single general procedure is applied to the different kinds of equipment. The advantages obtained are also discussed. ((orig.))
Sonnentag, O.; Helbig, M.; Connon, R.; Hould Gosselin, G.; Ryu, Y.; Karoline, W.; Hanisch, J.; Moore, T. R.; Quinton, W. L.
2017-12-01
The permafrost region of the Northern Hemisphere has been experiencing twice the rate of climate warming compared to the rest of the Earth, resulting in the degradation of the cryosphere. A large portion of the high-latitude boreal forests of northwestern Canada grows on low-lying organic-rich lands with relatively warm and thin isolated, sporadic and discontinuous permafrost. Along this southern limit of permafrost, increasingly warmer temperatures have caused widespread permafrost thaw leading to land cover changes at unprecedented rates. A prominent change includes wetland expansion at the expense of Picea mariana (black spruce)-dominated forest due to ground surface subsidence caused by the thawing of ice-rich permafrost leading to collapsing peat plateaus. Recent conceptual advances have provided important new insights into high-latitude boreal forest hydrology. However, refined quantitative understanding of the mechanisms behind water storage and movement at subcatchment and catchment scales is needed from a water resources management perspective. Here we combine multi-year daily runoff measurements with spatially explicit estimates of evapotranspiration, modelled with the Breathing Earth System Simulator, to characterize the monthly growing season catchment-scale (~150 km2) hydrological response of a boreal headwater peatland complex with sporadic permafrost in the southern Northwest Territories. The corresponding water budget components at subcatchment scale (~0.1 km2) were obtained from concurrent cutthroat flume runoff and eddy covariance evapotranspiration measurements. The highly significant linear relationships for runoff (r2=0.64) and evapotranspiration (r2=0.75) between subcatchment and catchment scales suggest that the mineral upland-dominated downstream portion of the catchment acts hydrologically similar to the headwater portion dominated by boreal peatland complexes. Breakpoint analysis in combination with moving window statistics on multi
Inhibitory effect of glutamic acid on the scale formation process using electrochemical methods.
Karar, A; Naamoune, F; Kahoul, A; Belattar, N
2016-08-01
The formation of calcium carbonate (CaCO3) in water has some important implications in geoscience research, ocean chemistry studies, CO2 emission issues and biology. In industry, the scaling phenomenon may cause technical problems, such as reduction in heat transfer efficiency in cooling systems and obstruction of pipes. This paper focuses on the study of glutamic acid (GA) for reducing CaCO3 scale formation on metallic surfaces in the water of the Bir Aissa region. The anti-scaling properties of glutamic acid (GA), used as a complexing agent of Ca(2+) ions, have been evaluated by chronoamperometry and electrochemical impedance spectroscopy methods in conjunction with a microscopic examination. A chemical and electrochemical study of this water shows a high calcium concentration. Characterization using X-ray diffraction reveals that while the CaCO3 scale formed chemically is a mixture of calcite, aragonite and vaterite, the one deposited electrochemically is pure calcite. The effect of temperature on the efficiency of the inhibitor was investigated. At 30 and 40°C, complete scaling inhibition was obtained at a GA concentration of 18 mg/L, with a 90.2% efficiency rate. However, the efficiency of GA decreased at 50 and 60°C.
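Inhibition efficiencies of the kind quoted above are conventionally computed by comparing the scale deposited (or the residual chronoamperometric current) with and without inhibitor. The formula below is the generic one; the input numbers are illustrative, not the authors' measurements.

```python
def inhibition_efficiency(blank, inhibited):
    """Percent scaling-inhibition efficiency from a scaling indicator
    (deposit mass or limiting current) measured without ("blank") and
    with the inhibitor present."""
    return 100.0 * (blank - inhibited) / blank

# Hypothetical deposit masses (mg/cm^2) without and with 18 mg/L GA
print(round(inhibition_efficiency(10.2, 1.0), 1))  # → 90.2
```

The same quotient applies whichever scaling indicator is measured, which is why chronoamperometry and gravimetric methods can report directly comparable efficiency figures.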
Developing integrated methods to address complex resource and environmental issues
Smith, Kathleen S.; Phillips, Jeffrey D.; McCafferty, Anne E.; Clark, Roger N.
2016-02-08
applications of project products and research findings are included in this circular. The work helped support the USGS mission to “provide reliable scientific information to describe and understand the Earth; minimize loss of life and property from natural disasters; manage water, biological, energy, and mineral resources; and enhance and protect our quality of life.” Activities within the project: spanned scales from microscopic to planetary; demonstrated broad applications across disciplines; included life-cycle studies of mineral resources; incorporated specialized areas of expertise in applied geochemistry including mineralogy, hydrogeology, analytical chemistry, aqueous geochemistry, biogeochemistry, microbiology, aquatic toxicology, and public health; and incorporated specialized areas of expertise in geophysics including magnetics, gravity, radiometrics, electromagnetics, seismic, ground-penetrating radar, borehole radar, and imaging spectroscopy. This circular consists of eight sections that contain summaries of various activities under the project: Laboratory Facilities and Capabilities, which includes brief descriptions of the various types of laboratories and capabilities used for the project; Method and Software Development, which includes summaries of remote-sensing, geophysical, and mineralogical methods developed or enhanced by the project; Instrument Development, which includes descriptions of geophysical instruments developed under the project; Minerals, Energy, and Climate, which includes summaries of research that applies to mineral or energy resources, environmental processes and monitoring, and carbon sequestration by earth materials; Element Cycling, Toxicity, and Health, which includes summaries of several process-oriented geochemical and biogeochemical studies and health-related research activities; Hydrogeology and Water Quality, which includes descriptions of innovative geophysical, remote
Harpold, A. A.; Brooks, P. D.; Biederman, J. A.; Swetnam, T.
2011-12-01
Difficulty estimating snowpack variability across complex forested terrain currently hinders the prediction of water resources in the semi-arid Southwestern U.S. Catchment-scale estimates of snowpack variability are necessary for addressing ecological, hydrological, and water resources issues, but are often interpolated from a small number of point-scale observations. In this study, we used LiDAR-derived distributed datasets to investigate how elevation, aspect, topography, and vegetation interact to control catchment-scale snowpack variability. The study area is the Redondo massif in the Valles Caldera National Preserve, NM, a resurgent dome that varies from 2500 to 3430 m and drains from all aspects. Mean LiDAR-derived snow depths from four catchments (2.2 to 3.4 km^2) draining different aspects of the Redondo massif varied by 30%, despite similar mean elevations and mixed conifer forest cover. To better quantify this variability in snow depths we performed a multiple linear regression (MLR) over a 7.3 by 7.3 km study area (5 × 10^6 snow depth measurements) comprising the four catchments. The MLR showed that elevation explained 45% of the variability in snow depths across the study area, aspect explained 18% (dominated by N-S aspect), and vegetation 2% (canopy density and height). This linear relationship was not transferable to the catchment scale, however, where additional MLR analyses showed that the influence of aspect and elevation differed between the catchments. The strong influence of north-south aspect in most catchments indicated that solar radiation is an important control on snow depth variability. To explore the role of solar radiation, a model was used to generate winter solar forcing index (SFI) values based on the local and remote topography. The SFI was able to explain a large amount of snow depth variability in areas with similar elevation and aspect. Finally, the SFI was modified to include the effects of shading from vegetation (in and out of
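The MLR decomposition described above can be sketched with ordinary least squares: regress snow depth on elevation and a north-south aspect index and report the variance explained. The synthetic data below are illustrative stand-ins, not the LiDAR measurements from the Redondo massif.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
elev = rng.uniform(2500, 3430, n)        # elevation (m), study-area range
aspect_ns = rng.uniform(-1, 1, n)        # cos(aspect): 1 = due north
# hypothetical depth model: elevation + aspect effects + noise (m)
depth = 0.002 * (elev - 2500) + 0.3 * aspect_ns + rng.normal(0, 0.2, n)

# ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), elev, aspect_ns])
beta, *_ = np.linalg.lstsq(X, depth, rcond=None)
resid = depth - X @ beta
r2 = 1 - resid.var() / depth.var()
print(round(r2, 2))   # fraction of snow-depth variance explained
```

Comparing R² values of nested models (elevation alone versus elevation plus aspect) is one common way to attribute shares of explained variance to individual predictors, as the study's 45%/18%/2% breakdown does.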
Complex molecular orbital method: open-shell theory
International Nuclear Information System (INIS)
Hendekovic, J.
1976-01-01
A single-determinant open-shell formalism for complex molecular orbitals is developed. An iterative algorithm for solving the resulting secular equations is constructed. It is based on a sequence of similarity transformations and matrix triangularizations.
Uranium complex recycling method of purifying uranium liquors
International Nuclear Information System (INIS)
Elikan, L.; Lyon, W.L.; Sundar, P.S.
1976-01-01
Uranium is separated from contaminating cations in an aqueous liquor containing uranyl ions. The liquor is mixed with sufficient recycled uranium complex to raise the weight ratio of uranium to said cations preferably to at least about three. The liquor is then extracted with at least enough non-interfering, water-immiscible, organic solvent to theoretically extract about all of the uranium in the liquor. The organic solvent contains a reagent which reacts with the uranyl ions to form a complex soluble in the solvent. If the aqueous liquor is acidic, the organic solvent is then scrubbed with water. The organic solvent is stripped with a solution containing at least enough ammonium carbonate to precipitate the uranium complex. A portion of the uranium complex is recycled and the remainder can be collected and calcined to produce U3O8 or UO2
Accessible methods for the dynamic time-scale decomposition of biochemical systems.
Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula
2009-11-01
The growing complexity of biochemical models calls for means to rationally dissect the networks into meaningful and rather independent subnetworks. Such an approach should ensure an understanding of the system without any heuristics employed. Important for the success of such an approach are its accessibility and the clarity of the presentation of the results. In order to achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method, as well as the more classical approach, has been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results, including 3D visualization. The methods are included in COPASI, which is free for academic use and available at www.copasi.org. irina.surovtsova@bioquant.uni-heidelberg.de Supplementary data are available at Bioinformatics online.
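The classical time-scale separation the method builds on starts from the eigenvalues of the system's Jacobian: each characteristic time is the reciprocal of an eigenvalue's real part, and widely separated eigenvalues mark fast modes that can be relaxed away. The 2×2 Jacobian below is a toy stiff system, not one of the COPASI examples.

```python
import numpy as np

# Jacobian of a hypothetical two-species network at steady state:
# one very fast relaxation mode and one slow one
J = np.array([[-1000.0,  1.0],
              [    1.0, -0.1]])

lams = np.linalg.eigvals(J)
taus = np.sort(1.0 / np.abs(lams.real))   # characteristic time scales
print(taus)   # one fast (~1e-3) and one slow (~10) time scale
```

A gap of several orders of magnitude between consecutive time scales, as here, is what licenses treating the fast subnetwork as instantaneously equilibrated when analyzing the slow dynamics.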
Computational methods for criticality safety analysis within the scale system
International Nuclear Information System (INIS)
Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.
1986-01-01
The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs
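The quantity all of these sequences ultimately deliver is the multiplication factor, the dominant eigenvalue of the fission source iteration. KENO estimates it by Monte Carlo and XSDRNPM by deterministic transport; the sketch below shows only the underlying power iteration on a tiny invented fission matrix, not any SCALE algorithm.

```python
import numpy as np

# Illustrative 2x2 fission matrix (entry (i, j): neutrons produced in
# region i per fission neutron born in region j) -- toy numbers only
A = np.array([[0.9, 0.3],
              [0.2, 0.5]])

phi = np.ones(2)                     # initial fission source guess
for _ in range(200):
    psi = A @ phi
    k = np.linalg.norm(psi) / np.linalg.norm(phi)   # eigenvalue estimate
    phi = psi / np.linalg.norm(psi)                 # renormalized source
print(round(k, 4))   # dominant eigenvalue, i.e. the multiplication factor
```

A converged k above 1 indicates a supercritical configuration; criticality safety analyses verify that k stays safely below 1 under all credible conditions.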
Concussion As a Multi-Scale Complex System: An Interdisciplinary Synthesis of Current Knowledge
Directory of Open Access Journals (Sweden)
Erin S. Kenzie
2017-09-01
Traumatic brain injury (TBI) has been called “the most complicated disease of the most complex organ of the body” and is an increasingly high-profile public health issue. Many patients report long-term impairments following even “mild” injuries, but reliable criteria for diagnosis and prognosis are lacking. Every clinical trial for TBI treatment to date has failed to demonstrate reliable and safe improvement in outcomes, and the existing body of literature is insufficient to support the creation of a new classification system. Concussion, or mild TBI, is a highly heterogeneous phenomenon, and numerous factors interact dynamically to influence an individual’s recovery trajectory. Many of the obstacles faced in research and clinical practice related to TBI and concussion, including observed heterogeneity, arguably stem from the complexity of the condition itself. To improve understanding of this complexity, we review the current state of research through the lens provided by the interdisciplinary field of systems science, which has been increasingly applied to biomedical issues. The review was conducted iteratively, through multiple phases of literature review, expert interviews, and systems diagramming, and represents the first phase in an effort to develop systems models of concussion. The primary focus of this work was to examine concepts and ways of thinking about concussion that currently impede research design and block advancements in care of TBI. Results are presented in the form of a multi-scale conceptual framework intended to synthesize knowledge across disciplines, improve research design, and provide a broader, multi-scale model for understanding concussion pathophysiology, classification, and treatment.
McGowan, Anna-Maria R.; Seifert, Colleen M.; Papalambros, Panos Y.
2012-01-01
The design of large-scale complex engineered systems (LaCES) such as an aircraft is inherently interdisciplinary. Multiple engineering disciplines, drawing from a team of hundreds to thousands of engineers and scientists, are woven together throughout the research, development, and systems engineering processes to realize one system. Though research and development (R&D) is typically focused in single disciplines, the interdependencies involved in LaCES require interdisciplinary R&D efforts. This study investigates the interdisciplinary interactions that take place during the R&D and early conceptual design phases in the design of LaCES. Our theoretical framework is informed by both engineering practices and social science research on complex organizations. This paper provides a preliminary perspective on some of the organizational influences on interdisciplinary interactions based on organization theory (specifically sensemaking), data from a survey of LaCES experts, and the authors' experience in research and design. The analysis reveals couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their significant benefit to the engineered system, such as innovation and problem mitigation. Substantial obstacles to interdisciplinarity are uncovered beyond engineering, including communication and organizational challenges. Addressing these challenges may ultimately foster greater efficiencies in the design and development of LaCES and improved system performance by assisting with the collective integration of interdependent knowledge bases early in the R&D effort. This research suggests that organizational and human dynamics heavily influence and even constrain the engineering effort for large-scale complex systems.
Directory of Open Access Journals (Sweden)
Richard J. Siegert
2018-03-01
Objective: To investigate the scaling properties of the Patient Categorisation Tool (PCAT) as an instrument to measure complexity of rehabilitation needs. Design: Psychometric analysis in a multicentre cohort from the UK national clinical database. Patients: A total of 8,222 patients admitted for specialist inpatient rehabilitation following acquired brain injury. Methods: Dimensionality was explored using principal components analysis with Varimax rotation, followed by Rasch analysis on a random sample of n = 500. Results: Principal components analysis identified 3 components explaining 50% of variance. The partial credit Rasch model was applied to the 17-item PCAT scale using a “super-items” methodology based on the principal components analysis results. Two out of 5 initially created super-items displayed signs of local dependency, which significantly affected the estimates. They were combined into a single super-item, resulting in satisfactory model fit and unidimensionality. Differential item functioning (DIF) of 2 super-items was addressed by splitting between age groups (<65 and ≥65 years) to produce the best model fit (χ2/df = 54.72, p = 0.235) and reliability (Person Separation Index (PSI) = 0.79). Ordinal-to-interval conversion tables were produced. Conclusion: The PCAT has satisfied expectations of the unidimensional Rasch model in the current sample after minor modifications, and demonstrated acceptable reliability for individual assessment of rehabilitation complexity.
An applet for the Gabor similarity scaling of the differences between complex stimuli.
Margalit, Eshed; Biederman, Irving; Herald, Sarah B; Yue, Xiaomin; von der Malsburg, Christoph
2016-11-01
It is widely accepted that after the first cortical visual area, V1, a series of stages achieves a representation of complex shapes, such as faces and objects, so that they can be understood and recognized. A major challenge for the study of complex shape perception has been the lack of a principled basis for scaling the physical differences between stimuli so that their similarity can be specified, unconfounded by early-stage differences. Without the specification of such similarities, it is difficult to make sound inferences about the contributions of later stages to neural activity or psychophysical performance. A Web-based app is described that is based on the Malsburg Gabor-jet model (Lades et al., 1993), which allows easy specification of the V1 similarity of pairs of stimuli, no matter how intricate. The model predicts the psychophysical discriminability of metrically varying faces and complex blobs almost perfectly (Yue, Biederman, Mangini, von der Malsburg, & Amir, 2012), and serves as the input stage of a large family of contemporary neurocomputational models of vision.
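The Gabor-jet idea can be sketched compactly: filter each image with a small bank of oriented Gabor kernels, collect the response magnitudes into a "jet" vector, and compare jets by cosine similarity. This is a bare-bones illustration in the spirit of the Lades et al. model; the kernel parameters and single sampling point are simplifying assumptions, not the app's actual filter bank, which samples jets over a grid of locations.

```python
import numpy as np

def gabor(size, theta, freq):
    """Complex Gabor kernel: Gaussian envelope times a plane wave."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * (size / 4) ** 2))
    return env * np.exp(2j * np.pi * freq * xr)

def jet(img, size=9):
    """Magnitudes of responses to 4 orientations x 2 spatial frequencies,
    sampled at a single location (the image centre region)."""
    mags = []
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        for freq in (0.1, 0.25):
            mags.append(np.abs(np.sum(img * gabor(size, theta, freq))))
    return np.array(mags)

def similarity(a, b):
    ja, jb = jet(a), jet(b)
    return float(ja @ jb / (np.linalg.norm(ja) * np.linalg.norm(jb)))

rng = np.random.default_rng(2)
img = rng.random((9, 9))
print(round(similarity(img, img), 6))  # identical images → 1.0
```

Because the jet discards phase and keeps only magnitudes, the similarity is robust to small spatial shifts, one reason this front end predicts metric discriminability so well.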
Directory of Open Access Journals (Sweden)
Matthias Dehmer
Full Text Available This paper aims to investigate information-theoretic network complexity measures which have already been used intensely in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far, but many of them lack a meaningful interpretation, e.g., it is unclear which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large-scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes
Energy Technology Data Exchange (ETDEWEB)
Mohseni, M. [Google Research, Venice, California 90291 (United States); Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Shabani, A. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States); Department of Chemistry, University of California at Berkeley, Berkeley, California 94720 (United States); Lloyd, S. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Rabitz, H. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States)
2014-01-21
Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio of k_BλT/ℏγg as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.
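The key ratio the abstract identifies can be evaluated directly once all energies are expressed in wavenumbers, since ℏγ is then simply γ in cm⁻¹. The FMO-like parameter values below are order-of-magnitude assumptions for illustration, not values taken from the paper.

```python
# Illustrative estimate of the dimensionless ratio k_B*lambda*T / (hbar*gamma*g)
# identified in the abstract as governing quantum transport efficiency.
KB_CM1_PER_K = 0.6950  # Boltzmann constant in cm^-1 per kelvin

def transport_ratio(lam_cm1, T_K, gamma_cm1, gap_cm1):
    """k_B*lambda*T / (hbar*gamma*g), with every energy in wavenumbers."""
    return lam_cm1 * (KB_CM1_PER_K * T_K) / (gamma_cm1 * gap_cm1)

# Assumed, order-of-magnitude FMO-like parameters (not the paper's values):
r = transport_ratio(lam_cm1=35.0, T_K=300.0, gamma_cm1=106.0, gap_cm1=100.0)
print(f"ratio ~ {r:.2f}")  # close to unity for these assumed values
```

A ratio of order one for physiological parameters is consistent with the energy-scales-convergence picture the abstract describes.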
Complexity in the scaling of velocity fluctuations in the high-latitude F-region ionosphere
Directory of Open Access Journals (Sweden)
M. L. Parkinson
2008-09-01
Full Text Available The temporal scaling properties of F-region velocity fluctuations, δv_{los}, were characterised over 17 octaves of temporal scale from τ=1 s to <1 day using a new data base of 1-s time resolution SuperDARN radar measurements. After quality control, 2.9 (1.9) million fluctuations were recorded during 31.5 (40.4) days of discretionary mode soundings using the Tasmanian (New Zealand) radars. If the fluctuations were statistically self-similar, the probability density functions (PDFs) of δv_{los} would collapse onto the same PDF under the rescaling P_{s}(δv_{s}, τ) = τ^{α} P(δv_{los}, τ) and δv_{s} = δv_{los} τ^{−α}, where α is the scaling exponent. The variations in scaling exponents α and multi-fractal behaviour were estimated using peak scaling and generalised structure function (GSF) analyses, and a new method based upon minimising the differences between re-scaled probability density functions (PDFs). The efficiency of this method enabled calculation of "α spectra", the temporal spectra of scaling exponents from τ=1 s to ~2048 s. The large number of samples enabled calculation of α spectra for data separated into 2-h bins of MLT as well as two main physical regimes: Population A echoes with Doppler spectral width <75 m s^{−1} concentrated on closed field lines, and Population B echoes with spectral width >150 m s^{−1} concentrated on open field lines. For all data there was a scaling break at τ~10 s, and the similarity of the fluctuations beneath this scale may be related to the large spatial averaging (~100 km × 45 km) employed by SuperDARN radars. For Tasmania Population B, the velocity fluctuations exhibited approximately monofractal power-law scaling between τ~8 s and 2048 s (34 min), and probably up to several hours. The scaling exponents were generally less than that expected for basic MHD
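The self-similarity test described above can be sketched with a synthetic signal: for a statistically self-similar process, increments at lag τ rescaled by τ^{−α} share a single distribution. A Brownian walk (α = 0.5) stands in for the radar velocity series here; the choice of signal and lags is purely illustrative.

```python
import numpy as np

# For a self-similar signal, increments at lag tau rescaled by tau**(-alpha)
# collapse onto one distribution. A Brownian walk has alpha = 0.5 exactly.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=2**20))

alpha = 0.5
stds = []
for tau in (1, 4, 16, 64, 256):
    dv = walk[tau:] - walk[:-tau]    # increments delta-v at lag tau
    dv_s = dv * tau**(-alpha)        # rescaled increments delta-v_s
    stds.append(float(dv_s.std()))

# The rescaled widths agree across four octaves of scale (PDF collapse):
print([round(s, 2) for s in stds])
```

When the wrong α is used, the rescaled widths diverge across lags; minimising that divergence is the spirit of the re-scaled-PDF method the abstract describes.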
Directory of Open Access Journals (Sweden)
L. Xiao
2013-04-01
Full Text Available The growing convergence between mobile computing devices and smart sensors is boosting the development of ubiquitous computing and smart spaces, in which localization is an essential enabler. General localization methods based on GPS and cellular techniques are not suitable for tracking numerous small, power-limited objects indoors. In this paper, we propose and demonstrate a new localization method: an easy-to-set-up and cost-effective indoor localization system based on off-the-shelf active RFID technology. Our system is not only compatible with future smart spaces and ubiquitous computing systems, but also suitable for large-scale indoor localization. The use of a low-complexity Gaussian Filter (GF), Wheel Graph Model (WGM) and Probabilistic Localization Algorithm (PLA) makes the proposed algorithm robust against uncertainty, self-adaptive to varying indoor environments, and suitable for large-scale indoor positioning. Using MATLAB simulation, we study the system performance, especially its dependence on a number of system and environment parameters, and their statistical properties. The simulation results show that our proposed system is an accurate and cost-effective candidate for indoor localization.
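The abstract names a low-complexity Gaussian Filter as one processing stage. A common generic form of such a filter for RSSI readings — not necessarily the paper's exact variant — fits a normal distribution to repeated readings of a tag and discards outliers before averaging:

```python
import statistics

def gaussian_filter(rssi_samples, keep=1.0):
    """Generic low-complexity Gaussian filter for RSSI readings: fit a normal
    distribution to repeated samples, keep only those within `keep` standard
    deviations of the mean, then average the survivors. (A sketch of the
    common technique; the paper's exact parameters are not given.)"""
    mu = statistics.fmean(rssi_samples)
    sigma = statistics.pstdev(rssi_samples)
    kept = [r for r in rssi_samples if abs(r - mu) <= keep * sigma] or rssi_samples
    return statistics.fmean(kept)

readings = [-62, -61, -63, -62, -90, -61, -62]  # one multipath outlier at -90 dBm
print(round(gaussian_filter(readings), 1))      # -> -61.8
```

The filtered value then feeds the probabilistic localization stage in place of raw, noisy readings.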
A stochastic immersed boundary method for fluid-structure dynamics at microscopic length scales
International Nuclear Information System (INIS)
Atzberger, Paul J.; Kramer, Peter R.; Peskin, Charles S.
2007-01-01
In modeling many biological systems, it is important to take into account flexible structures which interact with a fluid. At the length scale of cells and cell organelles, thermal fluctuations of the aqueous environment become significant. In this work, it is shown how the immersed boundary method of [C.S. Peskin, The immersed boundary method, Acta Num. 11 (2002) 1-39.] for modeling flexible structures immersed in a fluid can be extended to include thermal fluctuations. A stochastic numerical method is proposed which deals with stiffness in the system of equations by handling systematically the statistical contributions of the fastest dynamics of the fluid and immersed structures over long time steps. An important feature of the numerical method is that time steps can be taken in which the degrees of freedom of the fluid are completely underresolved, partially resolved, or fully resolved while retaining a good level of accuracy. Error estimates in each of these regimes are given for the method. A number of theoretical and numerical checks are furthermore performed to assess its physical fidelity. For a conservative force, the method is found to simulate particles with the correct Boltzmann equilibrium statistics. It is shown in three dimensions that the diffusion of immersed particles simulated with the method has the correct scaling in the physical parameters. The method is also shown to reproduce a well-known hydrodynamic effect of a Brownian particle in which the velocity autocorrelation function exhibits an algebraic (τ^{−3/2}) decay for long times [B.J. Alder, T.E. Wainwright, Decay of the Velocity Autocorrelation Function, Phys. Rev. A 1(1) (1970) 18-21]. A few preliminary results are presented for more complex systems which demonstrate some potential application areas of the method. Specifically, we present simulations of osmotic effects of molecular dimers, worm-like chain polymer knots, and a basic model of a molecular motor immersed in fluid subject to a
Multi-Scale Entropy Analysis as a Method for Time-Series Analysis of Climate Data
Directory of Open Access Journals (Sweden)
Heiko Balzter
2015-03-01
Full Text Available Evidence is mounting that the temporal dynamics of the climate system are changing at the same time as the average global temperature is increasing due to multiple climate forcings. A large number of extreme weather events such as prolonged cold spells, heatwaves, droughts and floods have been recorded around the world in the past 10 years. Such changes in the temporal scaling behaviour of climate time-series data can be difficult to detect. While there are easy and direct ways of analysing climate data by calculating the means and variances for different levels of temporal aggregation, these methods can miss more subtle changes in their dynamics. This paper describes multi-scale entropy (MSE) analysis as a tool to study climate time-series data and to identify temporal scales of variability and their change over time in climate time-series. MSE estimates the sample entropy of the time-series after coarse-graining at different temporal scales. An application of MSE to Central European, variance-adjusted, mean monthly air temperature anomalies (CRUTEM4v) is provided. The results show that the temporal scales of the current climate (1960–2014) are different from the long-term average (1850–1960). For temporal scale factors longer than 12 months, the sample entropy increased markedly compared to the long-term record. Such an increase can be explained, in systems-theoretic terms, by greater complexity in the regional temperature data. From 1961 the patterns of monthly air temperatures are less regular at time-scales greater than 12 months than in the earlier time period. This finding suggests that, at these inter-annual time scales, the temperature variability has become less predictable than in the past. It is possible that climate system feedbacks are expressed in altered temporal scales of the European temperature time-series data. A comparison with the variance and Shannon entropy shows that MSE analysis can provide additional information on the
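The MSE procedure described above has two generic steps — coarse-graining and sample entropy — which can be sketched directly. This is a plain O(n²) textbook implementation, not the paper's code, and the tolerance and template-length parameters are the conventional defaults rather than the paper's choices.

```python
import math
import random

def coarse_grain(x, scale):
    """Non-overlapping means of length `scale` (the MSE coarse-graining step)."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r_frac=0.2):
    """Plain O(n^2) sample entropy: -ln(A/B), where B counts matching template
    pairs of length m and A those of length m+1, with tolerance r = r_frac*std."""
    n = len(x)
    mean = sum(x) / n
    r = r_frac * math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if all(abs(x[i + k] - x[j + k]) <= r for k in range(mm)):
                    c += 1
        return c
    B, A = count(m), count(m + 1)
    return math.inf if A == 0 or B == 0 else -math.log(A / B)

# Toy check: white noise stays irregular under coarse-graining.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(600)]
mse = [sample_entropy(coarse_grain(noise, s)) for s in (1, 2, 4)]
print([round(e, 2) for e in mse])
```

Plotting such entropy values against the scale factor gives the MSE curve whose shift after 1960 the paper reports.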
Efficacy of Two Different Instructional Methods Involving Complex Ecological Content
Randler, Christoph; Bogner, Franz X.
2009-01-01
Teaching and learning approaches in ecology very often follow linear conceptions of ecosystems. Empirical studies with an ecological focus consistent with existing syllabi and focusing on cognitive achievement are scarce. Consequently, we concentrated on a classroom unit that offers learning materials and highlights the existing complexity rather…
Markov Renewal Methods in Restart Problems in Complex Systems
DEFF Research Database (Denmark)
Asmussen, Søren; Lipsky, Lester; Thompson, Stephen
A task with ideal execution time L such as the execution of a computer program or the transmission of a file on a data link may fail, and the task then needs to be restarted. The task is handled by a complex system with features similar to the ones in classical reliability: failures may...
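Although the abstract is truncated, the restart-from-scratch model it describes is classical and easy to simulate: a task of ideal length L restarts whenever a failure arrives, and for Poisson failures of rate λ the mean completion time is (e^{λL} − 1)/λ. A quick numerical check of that formula, as a generic illustration rather than the paper's Markov renewal analysis:

```python
import math
import random

def total_time_with_restarts(L, rate, rng):
    """Time to complete a task of ideal length L when failures arrive at
    Poisson rate `rate` and every failure forces a restart from scratch."""
    t = 0.0
    while True:
        failure = rng.expovariate(rate)
        if failure >= L:      # the task finishes before the next failure
            return t + L
        t += failure          # all work is lost; restart

rng = random.Random(42)
L, rate = 1.0, 1.0
sim = sum(total_time_with_restarts(L, rate, rng) for _ in range(20000)) / 20000
exact = (math.exp(rate * L) - 1) / rate  # classical restart-from-scratch mean
print(round(sim, 2), round(exact, 2))    # the two agree closely
```

The exponential dependence on λL is what makes restart behaviour of long tasks on failure-prone systems so sensitive, and motivates the heavy-tail analyses of this literature.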
Studies of lanthanide complexes by a combination of spectroscopic methods
Czech Academy of Sciences Publication Activity Database
Krupová, Monika; Bouř, Petr; Andrushchenko, Valery
2015-01-01
Roč. 22, č. 1 (2015), s. 44 ISSN 1211-5894. [Discussions in Structural Molecular Biology. Annual Meeting of the Czech Society for Structural Biology /13./. 19.03.2015-21.03.2015, Nové Hrady] Institutional support: RVO:61388963 Keywords : lanthanide complexes * chirality sensing * chirality amplification * spectroscopy Subject RIV: CF - Physical ; Theoretical Chemistry
International Nuclear Information System (INIS)
Zhong, Z.
1985-01-01
A new approach to the solution of certain differential equations, the double complex function method, is developed, combining ordinary complex numbers and hyperbolic complex numbers. This method is applied to the theory of stationary axisymmetric Einstein equations in general relativity. A family of exact double solutions, double transformation groups, and n-soliton double solutions are obtained
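Hyperbolic complex numbers — the second ingredient of the double complex function method — differ from ordinary complex numbers only in that the imaginary unit squares to +1. A minimal arithmetic sketch (an illustration of the number system, not of the paper's solution method):

```python
class SplitComplex:
    """Hyperbolic (split-complex) number a + b*j with j*j = +1.
    Minimal arithmetic only, for illustration."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __add__(self, o):
        return SplitComplex(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a + bj)(c + dj) = (ac + bd) + (ad + bc) j, since j^2 = +1
        return SplitComplex(self.a * o.a + self.b * o.b,
                            self.a * o.b + self.b * o.a)
    def modulus2(self):
        """Indefinite 'norm' a^2 - b^2, preserved by multiplication."""
        return self.a * self.a - self.b * self.b
    def __repr__(self):
        return f"{self.a} + {self.b}j"

x = SplitComplex(3, 2)
y = SplitComplex(1, 4)
# The indefinite norm a^2 - b^2 is multiplicative, mirroring |xy| = |x||y|
# for ordinary complex numbers but with a Minkowski-type signature:
print((x * y).modulus2(), x.modulus2() * y.modulus2())  # both -75
```

That Minkowski-type signature is what makes the hyperbolic unit natural for stationary axisymmetric problems in general relativity.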
Modeling complex biological flows in multi-scale systems using the APDEC framework
Trebotich, David
2006-09-01
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as "bead-rod" polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short-range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-set method within the APDEC Framework for extracting surfaces from volume renderings of medical image data, and used to simulate cardiovascular and pulmonary flows in critical anatomies.
Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems
Sikkandar Basha, Nazareen
The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams, numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interactions are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. The requirements elicitation of most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organization structure, the cost and time overrun can occur at any level and iterate back and forth, thus increasing the cost and time. Past research has shown that rigorous approaches such as value-based design can be used to control such creep, but before these approaches can be applied, the decision maker should have a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when the creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making to minimize the value gap due to requirements creep by eliminating the ambiguity which occurs during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep in terms of cost and time using the Cynefin framework. These
Directory of Open Access Journals (Sweden)
Sungho Won
2015-01-01
Full Text Available Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and such findings have substantially improved our understanding of complex diseases. In spite of these successes, however, most of the genetic effects for many complex diseases were found to be very small, which has been a major hurdle in building disease prediction models. Recently, many statistical methods based on penalized regression have been proposed to tackle the so-called “large P, small N” problem. Penalized regressions, including the least absolute selection and shrinkage operator (LASSO) and ridge regression, limit the parameter space, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and in this report we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
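Ridge regression, one of the penalized methods compared, has a closed form that makes the "large P, small N" point concrete: the penalty keeps the normal equations invertible even with more predictors than samples. The simulated-SNP setup below is an illustrative sketch, not the report's data or tuning; LASSO would need an iterative solver.

```python
import numpy as np

# Ridge regression in closed form: beta = (X'X + alpha*I)^{-1} X'y.
rng = np.random.default_rng(0)
n, p = 50, 200                  # more predictors ("SNPs") than samples
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0             # only 5 predictors carry signal
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def ridge(X, y, alpha):
    """Closed-form ridge estimate; alpha > 0 makes X'X + alpha*I invertible
    even when p >> n, which plain least squares cannot handle."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

beta_hat = ridge(X, y, alpha=10.0)
# Shrinkage keeps the problem well-posed despite p >> n:
print(round(float(np.abs(beta_hat).max()), 2))
```

Even with heavy shrinkage, the signal-carrying coefficients stand out from the null ones on average, which is what a prediction model built on such estimates exploits.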
Method for synthesizing metal bis(borano) hypophosphite complexes
Cordaro, Joseph G.
2013-06-18
The present invention describes the synthesis of a family of metal bis(borano) hypophosphite complexes. One procedure described in detail is the synthesis of complexes beginning from phosphorus trichloride and sodium borohydride. Temperature, solvent, concentration, and atmosphere are all critical to ensure product formation. In the case of sodium bis(borano) hypophosphite, hydrogen gas was evolved upon heating at temperatures above 150 °C. Included in this family of materials are the salts of the alkali metals Li, Na and K, and those of the alkaline earth metals Mg and Ca. Hydrogen storage materials are possible. In particular the lithium salt, Li[PH₂(BH₃)₂], theoretically would contain nearly 12 wt % hydrogen. Analytical data for product characterization and thermal properties are given.
Determinantal method for complex angular momenta in potential scattering
Energy Technology Data Exchange (ETDEWEB)
Lee, B. W. [University of Pennsylvania, Philadelphia, PA (United States)
1963-01-15
In this paper I would like to describe a formulation of complex angular momenta in potential scattering based on the Lippmann-Schwinger integral equation rather than on the Schrödinger differential equation. This is intended as a preliminary to the paper by SAWYER on Regge poles and high-energy limits in field theory (Bethe-Salpeter amplitudes), where the integral formulation is definitely more advantageous than the differential formulation.
Directed forgetting of complex pictures in an item method paradigm
Hauswald, Anne; Kissler, Johanna
2008-01-01
An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task in which all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect, that is, better recogni...
A new entropy based method for computing software structural complexity
International Nuclear Information System (INIS)
Roca, Jose L.
2002-01-01
In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy evaluation of the random uniform response function associated with the so-called software characteristic function (SCF). The behavior of the SCF with the different software structures and their relationship with the number of inherent errors is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters and algorithms that allow this evaluation to be carried out are also introduced. This analytic phase is followed by an experimental phase, verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is in direct relationship with the number of inherent software errors, and it implies a basic hazard failure rate, so that a minimal structure assures a certain stability and maturity of the program. This metric can be used either to evaluate the product or the process of software development, as a development tool, or for monitoring the stability and the quality of the final product. (author)
Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J
2012-01-01
In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales, such as the tendon network of the human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to the finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the latex networks, models with low training set error […] functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse yet informative interrogation of biological specimens holds significant computational advantages for accurate and efficient inference over random testing, or over assuming model topology and inferring only parameter values. These findings also hold clues to both our evolutionary history and the development of versatile machines.
Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods
Wang, Cheng
2018-05-17
Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C and single-cell Hi-C data without ad hoc parameters. We also designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform those in the literature.
Identifying influential spreaders in complex networks based on kshell hybrid method
Namtirtha, Amrita; Dutta, Animesh; Dutta, Biswanath
2018-06-01
Influential spreaders are the key players in maximizing or controlling spreading in a complex network. Identifying influential spreaders using the kshell decomposition method has become very popular in recent times. In the literature, the core nodes, i.e., those with the largest kshell index of a network, are considered the most influential spreaders. We have studied the kshell method and the spreading dynamics of nodes using the Susceptible-Infected-Recovered (SIR) epidemic model to understand the behavior of influential spreaders in terms of their topological location in the network. From the study, we have found that not every node in the core area is a most influential spreader; even a strategically placed lower-shell node can be a most influential spreader. Moreover, the core area can also be situated at the periphery of the network. The existing indexing methods are only designed to identify the most influential spreaders from core nodes and not from lower shells. In this work, we propose a kshell hybrid method to identify highly influential spreaders not only from the core but also from lower shells. The proposed method comprises parameters such as kshell power, node degree, contact distance, and many levels of neighbors' influence potential. The proposed method is evaluated using nine real-world network datasets. In terms of spreading dynamics, the experimental results show the superiority of the proposed method over other existing indexing methods such as the kshell method, the neighborhood coreness centrality, and the mixed degree decomposition. Furthermore, the proposed method can also be applied to large-scale networks by considering three levels of neighbors' influence potential.
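The plain kshell decomposition that the hybrid method builds on can be sketched by repeated pruning; the hybrid refinements (kshell power, contact distance, neighbour influence levels) are not reproduced here.

```python
def kshell_indices(adj):
    """k-shell decomposition by repeated pruning: strip all nodes of degree
    <= k, assign them shell index k, then increase k. `adj` maps each node
    to a set of neighbours. (Plain kshell only, not the hybrid method.)"""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    shell, k = {}, 0
    while adj:
        while True:
            peel = [u for u, vs in adj.items() if len(vs) <= k]
            if not peel:
                break
            for u in peel:
                shell[u] = k
                for v in adj.pop(u):
                    if v in adj:
                        adj[v].discard(u)
        k += 1
    return shell

# Toy graph: a triangle core (a, b, c) with a pendant node d attached to a.
g = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(kshell_indices(g))  # d lands in shell 1, the triangle in shell 2
```

The abstract's point is that this index alone over-credits core membership; the hybrid method re-ranks nodes within and below the core using the additional parameters listed above.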
Energy Technology Data Exchange (ETDEWEB)
Avlyanov, Zh K; Kabanov, N M; Zezin, A B
1985-01-01
A polarographic investigation of the cadmium complex with polyacrylate anion in aqueous KCl solution is carried out. It is shown that the polarographic method allows one to determine equilibrium constants of polymer metallic complex (PMC) formation even when the current magnitudes are governed by the kinetics of the PMC dissociation reaction. The obtained equilibrium constants of stepwise complexation yield a mean coordination number for the PAAxCd complex of approximately 1.5, which coincides with the value obtained by the potentiometric method.
Critical initial-slip scaling for the noisy complex Ginzburg–Landau equation
International Nuclear Information System (INIS)
Liu, Weigang; Täuber, Uwe C
2016-01-01
We employ the perturbative field-theoretic renormalization group method to investigate the universal critical behavior near the continuous non-equilibrium phase transition in the complex Ginzburg–Landau equation with additive white noise. This stochastic partial differential equation describes a remarkably wide range of physical systems: coupled nonlinear oscillators subject to external noise near a Hopf bifurcation instability; spontaneous structure formation in non-equilibrium systems, e.g., in cyclically competing populations; and driven-dissipative Bose–Einstein condensation, realized in open systems on the interface of quantum optics and many-body physics, such as cold atomic gases and exciton-polaritons in pumped semiconductor quantum wells in optical cavities. Our starting point is a noisy, dissipative Gross–Pitaevskii or nonlinear Schrödinger equation, or equivalently purely relaxational kinetics originating from a complex-valued Landau–Ginzburg functional, which generalizes the standard equilibrium model A critical dynamics of a non-conserved complex order parameter field. We study the universal critical behavior of this system in the early stages of its relaxation from a Gaussian-weighted fully randomized initial state. In this critical aging regime, time translation invariance is broken, and the dynamics is characterized by the stationary static and dynamic critical exponents, as well as an independent ‘initial-slip’ exponent. We show that to first order in the dimensional expansion about the upper critical dimension, this initial-slip exponent in the complex Ginzburg–Landau equation is identical to its equilibrium model A counterpart. We furthermore employ the renormalization group flow equations as well as construct a suitable complex spherical model extension to argue that this conclusion likely remains true to all orders in the perturbation expansion. (paper)
Complexity in built environment, health, and destination walking: a neighborhood-scale analysis.
Carlson, Cynthia; Aytur, Semra; Gardner, Kevin; Rogers, Shannon
2012-04-01
This study investigates the relationships between the built environment, the physical attributes of the neighborhood, and the residents' perceptions of those attributes. It focuses on destination walking and self-reported health, and does so at the neighborhood scale. The built environment, in particular sidewalks, road connectivity, and proximity of local destinations, correlates with destination walking, and similarly destination walking correlates with physical health. It was found, however, that the built environment and health metrics may not be simply, directly correlated but rather may be correlated through a series of feedback loops that may regulate risk in different ways in different contexts. In particular, evidence for a feedback loop between physical health and destination walking is observed, as well as separate feedback loops between destination walking and objective metrics of the built environment, and destination walking and perception of the built environment. These feedback loops affect the ability to observe how the built environment correlates with residents' physical health. Previous studies have investigated pieces of these associations, but are potentially missing the more complex relationships present. This study proposes a conceptual model describing complex feedback relationships between destination walking and public health, with the built environment expected to increase or decrease the strength of the feedback loop. Evidence supporting these feedback relationships is presented.
S-curve networks and an approximate method for estimating degree distributions of complex networks
International Nuclear Information System (INIS)
Guo Jin-Li
2010-01-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics for China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve), and the growing trend of IPv4 addresses in China is forecast. The results provide reference values for optimizing the distribution of IPv4 address resources and the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási–Albert method) is not suitable for this network. An approximate method is therefore developed to predict the growth dynamics of the individual nodes, and it is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. The method overcomes a shortcoming of the Barabási–Albert method commonly used in current network research. (general)
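The S curve at the heart of the forecasting model is the logistic function; a minimal sketch of its saturating-growth behavior follows (the parameter names and values are ours, purely for illustration, not the paper's fitted values):

```python
import math

def logistic(t, K, r, t0):
    """S-curve (logistic) growth: the size rises from near zero,
    passes K/2 at t = t0, and saturates at the carrying capacity K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Illustrative parameters: capacity K, growth rate r, midpoint t0.
K, r, t0 = 1000.0, 0.8, 10.0
sizes = [logistic(t, K, r, t0) for t in range(0, 31)]
```

Fitting K, r and t0 to observed address counts gives both the forecast trend and the finite growth limit that distinguishes an S-curve network from infinitely growing models.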
Level III Reliability methods feasible for complex structures
Waarts, P.H.; Boer, A. de
2001-01-01
The paper describes a comparison between three types of reliability methods: the code-type level I method used by a designer, a full level I method, and a level III method. Two cases that are typical for civil engineering practice, a cable-stayed bridge subjected to traffic load and the installation of a soil retaining sheet
Adaptive calibration method with on-line growing complexity
Directory of Open Access Journals (Sweden)
Šika Z.
2011-12-01
Full Text Available This paper describes a modified variant of a kinematic calibration algorithm. First, a brief review of the calibration algorithm and a simple modification of it are given. Because the described calibration modification borrows ideas from the Lolimot algorithm, that algorithm is also described and explained. The main topic of the paper is the synthesis of a Lolimot-based calibration that leads to an adaptive algorithm with on-line growing complexity. The paper includes a comparison of results on simple examples, a discussion, and a note on future research topics.
Method and program for complex calculation of heterogeneous reactor
International Nuclear Information System (INIS)
Kalashnikov, A.G.; Glebov, A.P.; Elovskaya, L.F.; Kuznetsova, L.I.
1988-01-01
An algorithm and the GITA program for complex one-dimensional calculation of a heterogeneous reactor, which permit calculations for the reactor and its cell to be conducted simultaneously using the same algorithm, are described. Multigroup macro cross sections for reactor zones in the thermal energy range are determined according to the technique for calculating a cell with a complicated structure, and then the multigroup calculation of the reactor in the thermal energy range and in the neutron thermalization range is performed. The kinetic equation is solved using the P1 and DSn approximations. [fr]
A Scale Development for Teacher Competencies on Cooperative Learning Method
Kocabas, Ayfer; Erbil, Deniz Gokce
2017-01-01
Cooperative learning is an active learning method that has been studied for many years, both in Turkey and around the world. Although the cooperative learning method takes place in training programs, it cannot be implemented completely in line with its principles. The results of the research point out that teachers have problems with…
Directory of Open Access Journals (Sweden)
B. Y. Qu
2017-01-01
Full Text Available Portfolio optimization problems involve selecting different assets to invest in so as to maximize the overall return and simultaneously minimize the overall risk. The complexity of the optimal asset allocation problem increases with the number of assets available to select from, and the optimization becomes computationally challenging when there are more than a few hundred assets to choose among. To reduce the complexity of large-scale portfolio optimization, this paper proposes two asset preselection procedures that consider the return and risk of individual assets and pairwise correlations to remove assets that are unlikely to be selected into any portfolio. With these asset preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings are carried out. The experimental results show that with the proposed methods the simulation time is reduced while return-risk trade-off performances are significantly improved. Meanwhile, NMOEA/D outperforms the other compared algorithms on all experiments according to the comparative analysis.
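The idea of preselecting assets by individual return/risk and pairwise correlation can be sketched as follows. This is our own simplification for illustration, not the paper's exact procedures: first drop assets that are dominated in both return and risk by some other asset, then drop one asset from each highly correlated pair, keeping the higher-return member.

```python
# Hedged sketch of return/risk-based asset preselection (illustrative only).

def preselect(returns, risks, corr, corr_limit=0.95):
    """returns/risks: per-asset lists; corr: full correlation matrix.
    Returns the sorted indices of the assets kept."""
    n = len(returns)
    # 1) remove assets dominated by another asset (>= return at < risk,
    #    or > return at <= risk)
    kept = [i for i in range(n)
            if not any((returns[j] >= returns[i] and risks[j] < risks[i]) or
                       (returns[j] > returns[i] and risks[j] <= risks[i])
                       for j in range(n) if j != i)]
    # 2) scan survivors from highest return down; keep an asset only if it
    #    is not highly correlated with one already selected
    selected = []
    for i in sorted(kept, key=lambda k: -returns[k]):
        if all(abs(corr[i][j]) < corr_limit for j in selected):
            selected.append(i)
    return sorted(selected)

returns = [0.10, 0.08, 0.12]
risks   = [0.18, 0.25, 0.22]
corr = [[1.00, 0.10, 0.99],
        [0.10, 1.00, 0.20],
        [0.99, 0.20, 1.00]]
# Asset 1 is dominated by asset 0; asset 0 is too correlated with asset 2.
picked = preselect(returns, risks, corr)
```

With thousands of assets, a prefilter of this kind shrinks the search space before the evolutionary optimizer runs, which is the complexity reduction the abstract describes.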
Ecosystem assessment methods for cumulative effects at the regional scale
International Nuclear Information System (INIS)
Hunsaker, C.T.
1989-01-01
Environmental issues such as nonpoint-source pollution, acid rain, reduced biodiversity, land use change, and climate change have widespread ecological impacts and require an integrated assessment approach. Since 1978, the implementing regulations for the National Environmental Policy Act (NEPA) have required assessment of potential cumulative environmental impacts. Current environmental issues have encouraged ecologists to improve their understanding of ecosystem process and function at several spatial scales. However, management activities usually occur at the local scale, and there is little consideration of the potential impacts to the environmental quality of a region. This paper proposes that regional ecological risk assessment provides a useful approach for assisting scientists in accomplishing the task of assessing cumulative impacts. Critical issues such as spatial heterogeneity, boundary definition, and data aggregation are discussed. Examples from an assessment of acidic deposition effects on fish in Adirondack lakes illustrate the importance of integrated data bases, associated modeling efforts, and boundary definition at the regional scale
International Nuclear Information System (INIS)
Lee, Sang Il
1992-02-01
A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: a depressurization phase and a pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase, the highly important phenomena influencing the critical parameters are identified, and the scaling parameters governing those phenomena are generated by the present method. To validate the models used, the Marviken CFT and a 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but they agree at least qualitatively with the experimental results. To examine whether the scaled-down model represents the important phenomena well, the nondimensional pressure response of a cold-leg 4-inch break transient is simulated for AP-600 and for the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR
Distributed Cooperation Solution Method of Complex System Based on MAS
Weijin, Jiang; Yuhui, Xu
To adapt fault-diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of a complex system, this paper introduces multi-agent technology into complicated fault diagnosis and studies an integrated intelligent control system. Based on a hierarchical structure of diagnostic decisions in modeling and a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge-representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed; the organization and evolution of agents in the system are proposed; and the corresponding conflict-resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault-diagnosis problem of a complex plant and has particular advantages in the distributed domain.
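The layered-blackboard pattern underlying such an architecture can be sketched in a few lines. Everything below (class names, layer names, the "pump-wear?" hypothesis) is a hypothetical illustration of the general pattern, not the paper's implementation: agents post partial diagnoses to a shared layered store, and a decision agent reads and resolves them.

```python
class Blackboard:
    """Minimal layered blackboard: each layer maps a topic to entries."""
    def __init__(self, layers):
        self.layers = {name: {} for name in layers}

    def post(self, layer, topic, entry):
        self.layers[layer].setdefault(topic, []).append(entry)

    def read(self, layer, topic):
        return list(self.layers[layer].get(topic, []))

class Agent:
    def __init__(self, name, board):
        self.name, self.board = name, board

class DiagnosisAgent(Agent):
    def run(self, symptom):
        # post a candidate fault hypothesis to the diagnosis layer
        self.board.post("diagnosis", symptom, (self.name, "pump-wear?"))

class DecisionAgent(Agent):
    def decide(self, symptom):
        # trivial conflict resolution: take the first posted hypothesis
        candidates = self.board.read("diagnosis", symptom)
        return candidates[0][1] if candidates else None

board = Blackboard(["management", "diagnosis", "decision"])
DiagnosisAgent("d1", board).run("low-pressure")
verdict = DecisionAgent("dec", board).decide("low-pressure")
```

A real federation would replace the trivial first-wins rule with the paper's conflict-resolution algorithm and run agents concurrently, but the post/read contract stays the same.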
Kernel methods and flexible inference for complex stochastic dynamics
Capobianco, Enrico
2008-07-01
Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.
Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis
2018-02-01
We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.
Comparing methods of determining Legionella spp. in complex water matrices.
Díaz-Flores, Álvaro; Montero, Juan Carlos; Castro, Francisco Javier; Alejandres, Eva María; Bayón, Carmen; Solís, Inmaculada; Fernández-Lafuente, Roberto; Rodríguez, Guillermo
2015-04-29
Legionella testing conducted at environmental laboratories plays an essential role in assessing the risk of disease transmission associated with water systems. However, drawbacks of the culture-based methodology used for Legionella enumeration can have a great impact on the results and their interpretation, which together can lead to underestimation of the actual risk. Up to 20% of the samples analysed by these laboratories produce inconclusive results, making effective risk management impossible. Overgrowth of competing microbiota has been reported as an important factor in culture failure. For quantitative polymerase chain reaction (qPCR), the interpretation of results from environmental samples still remains a challenge; inhibitors may cause up to 10% of inconclusive results. This study compared a quantitative method based on immunomagnetic separation (IMS method) with culture and qPCR as a new approach to routine monitoring of Legionella. First, pilot studies evaluated the recovery and detectability of Legionella spp. using the IMS method in the presence of microbiota and biocides. The IMS method results were not affected by microbiota, while culture counts were significantly reduced (1.4 log) or negative in the same samples. Damage to viable Legionella by biocides was detected by the IMS method. Secondly, a total of 65 water samples were assayed by all three techniques (culture, qPCR and the IMS method). Of these, 27 (41.5%) were recorded as positive by at least one test. Legionella spp. was detected by culture in 7 (25.9%) of the 27 samples. Eighteen (66.7%) of the 27 samples were positive by the IMS method, thirteen of them reporting counts below 10^3 colony forming units per liter (CFU l^-1); six presented interfering microbiota and three presented PCR inhibition. Of the 65 water samples, 24 presented interfering microbiota by culture and 8 presented partial or complete inhibition of the PCR reaction. So the rate of inconclusive results of culture and PCR was 36
Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu
2017-06-01
Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technology system, there is insufficient understanding of the connotation of scale and scale effect in remote sensing. Thus, this paper first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measurement for analysing pixel-based scale. However, in traditional fractal dimension calculation the impact of spatial resolution is not considered, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes to use spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods (MFBM, the Modified Windowed Fractal Brownian Motion method based on the surface area, and MDBM, the Modified Windowed Double Blanket Method), the existing scale effect analysis method (the information entropy method) is used for evaluation, and six sub-regions of building areas and farmland areas were cut out from QuickBird images to be used as the experimental data. The results of the experiment show that both the fractal dimension and the information entropy present the same trend with the decrease of spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. Therefore, the experimental results indicate that the modified fractal methods are effective in reflecting the pixel-based scale effect existing in remote sensing
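For intuition about what a fractal dimension measures at different scales, the classical box-counting estimator is a useful reference point. The sketch below is the textbook estimator, not the paper's modified windowed MFBM/MDBM methods: count the occupied boxes N(b) at several box sizes b and take the least-squares slope of log N against log(1/b).

```python
import math

def box_count(points, box):
    """Number of box×box grid cells containing at least one point."""
    cells = {(x // box, y // box) for x, y in points}
    return len(cells)

def box_dimension(points, boxes=(1, 2, 4, 8, 16)):
    """Least-squares slope of log N(box) versus log(1/box)."""
    xs = [math.log(1.0 / b) for b in boxes]
    ys = [math.log(box_count(points, b)) for b in boxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# A straight line has box-counting dimension 1; a filled patch has 2.
line = [(i, i) for i in range(64)]
d = box_dimension(line)  # ≈ 1.0
```

The modified methods in the paper effectively make the spatial resolution itself one of the scale parameters in such an estimate, so the dimension-versus-resolution curve can reveal the inflection points discussed above.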
Scaling Methods to Measure Psychopathology in Persons with Intellectual Disabilities
Matson, Johnny L.; Belva, Brian C.; Hattier, Megan A.; Matson, Michael L.
2012-01-01
Psychopathology prior to the last four decades was generally viewed as a set of problems and disorders that did not occur in persons with intellectual disabilities (ID). That notion now seems very antiquated. In no small part, a revolutionary development of scales worldwide has occurred for the assessment of emotional problems in persons with ID.…
The Large-Scale Structure of Scientific Method
Kosso, Peter
2009-01-01
The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…
Newton Methods for Large Scale Problems in Machine Learning
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
Dong, Yadong; Sun, Yongqi; Qin, Chao
2018-01-01
The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
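A score that blends a trained classifier with local structural information can be sketched as below. The linear blend, the weight alpha, and all names are our illustration of the general idea, not the paper's actual score function; local structure is represented here by the simplest choice, induced-subgraph edge density.

```python
# Hedged sketch: combine a classifier's probability with subgraph density.

def density(nodes, edges):
    """Edge density of the induced subgraph: 2m / (n(n-1))."""
    n = len(nodes)
    if n < 2:
        return 0.0
    m = sum(1 for u, v in edges if u in nodes and v in nodes)
    return 2.0 * m / (n * (n - 1))

def combined_score(nodes, edges, model_prob, alpha=0.5):
    """Robust score: alpha * classifier probability + (1 - alpha) * density."""
    return alpha * model_prob + (1 - alpha) * density(nodes, edges)

triangle = {"a", "b", "c"}
ppi = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
s = combined_score(triangle, ppi, model_prob=0.8)  # 0.5*0.8 + 0.5*1.0 = 0.9
```

A forward/backward search can then grow or shrink a candidate node set greedily, accepting a move whenever it raises this combined score; the density term keeps the search sensible even where the classifier is imprecise.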
International Nuclear Information System (INIS)
Zhang, Chunwei; Cui, Guomin; Chen, Shang
2016-01-01
Highlights: • Two dimensionless uniformity factors are presented to heat exchange network. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors to describe the heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping are deduced. Additionally, a novel algorithm that combines deterministic and stochastic optimizations to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, and is named the Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely, the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.
Methods of Complex Data Processing from Technical Means of Monitoring
Directory of Open Access Journals (Sweden)
Serhii Tymchuk
2017-03-01
Full Text Available The problem of processing information from different types of monitoring equipment is examined. As a possible solution, the paper proposes generalized information-processing methods based on clustering of combined territorial information sources for monitoring, together with a frame model of the knowledge base for identification of monitoring objects. The clustering methods are built on the Lance–Williams hierarchical agglomerative procedure with the Ward metric. The frame model of the knowledge base is built using object-oriented modeling tools.
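The Lance–Williams procedure with Ward coefficients mentioned above can be sketched compactly: after each merge, distances from every remaining cluster k to the new cluster i∪j are updated by a fixed recurrence rather than recomputed from the data. This is the standard textbook form under squared Euclidean distances; the paper's variant may differ in detail.

```python
# Hedged sketch of Lance–Williams agglomeration with Ward coefficients:
# d(k, i∪j) = [(n_i+n_k) d(k,i) + (n_j+n_k) d(k,j) - n_k d(i,j)] / (n_i+n_j+n_k)

def ward_agglomerate(points, n_clusters):
    """Merge clusters pairwise until n_clusters remain; returns the
    clusters as sorted lists of point indices."""
    clusters = {i: [i] for i in range(len(points))}
    # initial pairwise squared Euclidean distances
    d = {}
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d[(i, j)] = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))

    def dist(i, j):
        return d[(i, j) if i < j else (j, i)]

    while len(clusters) > n_clusters:
        # find the closest pair of active clusters
        i, j = min(((i, j) for i in clusters for j in clusters if i < j),
                   key=lambda p: dist(*p))
        ni, nj = len(clusters[i]), len(clusters[j])
        # Lance–Williams update with Ward coefficients
        for k in clusters:
            if k in (i, j):
                continue
            nk = len(clusters[k])
            t = ni + nj + nk
            new = ((ni + nk) * dist(k, i) + (nj + nk) * dist(k, j)
                   - nk * dist(i, j)) / t
            d[(i, k) if i < k else (k, i)] = new
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return sorted(sorted(c) for c in clusters.values())

# two well-separated groups on a line collapse into two clusters
groups = ward_agglomerate([(0.0,), (0.1,), (5.0,), (5.1,)], 2)
```

Varying only the coefficients in the update turns the same loop into single, complete or average linkage, which is why the Lance–Williams recurrence is a convenient basis for a configurable monitoring pipeline.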
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
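Monte Carlo power estimation for mediation follows a simple recipe: repeatedly simulate data from an assumed mediation model, test the indirect effect a·b in each replication, and report the fraction of significant results. The sketch below uses the simplest possible setup (simple OLS paths and a Sobel test, with effect sizes chosen by us); it illustrates the approach only, not the latent variable or growth curve models the authors address.

```python
import math
import random

def ols(x, y):
    """Simple OLS: returns the slope and its standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    sse = sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))
    return b, math.sqrt(sse / (n - 2) / sxx)

def mediation_power(a, b, n, reps=1000, seed=1):
    """Fraction of replications in which the Sobel test flags a*b at the
    5% level, for the model X -> M -> Y with unit-variance noise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        m = [a * xi + rng.gauss(0, 1) for xi in x]
        y = [b * mi + rng.gauss(0, 1) for mi in m]
        ah, se_a = ols(x, m)
        bh, se_b = ols(m, y)
        z = ah * bh / math.sqrt(ah ** 2 * se_b ** 2 + bh ** 2 * se_a ** 2)
        hits += abs(z) > 1.96
    return hits / reps

power = mediation_power(a=0.4, b=0.4, n=100)
```

Swapping the data-generating step for a latent variable or growth curve simulator, and the Sobel test for a bootstrap confidence interval, extends the same loop to the complex designs the abstract has in mind.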
Unsteady panel method for complex configurations including wake modeling
CSIR Research Space (South Africa)
Van Zyl, Lourens H
2008-01-01
Full Text Available implementations of the DLM are however not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...
Scale Sensitivity and Question Order in the Contingent Valuation Method
Andersson, Henrik; Svensson, Mikael
2010-01-01
This study examines the effect on respondents' willingness to pay to reduce mortality risk by the order of the questions in a stated preference study. Using answers from an experiment conducted on a Swedish sample where respondents' cognitive ability was measured and where they participated in a contingent valuation survey, it was found that scale sensitivity is strongest when respondents are asked about a smaller risk reduction first ('bottom-up' approach). This contradicts some previous evi...
Managing Small-Scale Fisheries : Alternative Directions and Methods
International Development Research Centre (IDRC) Digital Library (Canada)
Managing Small-scale Fisheries goes beyond the scope of conventional fisheries management to address other concepts, tools, methods and ... Fisheries managers in both the public and private sectors, lecturers and students in fisheries management, organizations and ...
Mitigation of Power frequency Magnetic Fields. Using Scale Invariant and Shape Optimization Methods
Energy Technology Data Exchange (ETDEWEB)
Salinas, Ener; Yueqiang Liu; Daalder, Jaap; Cruz, Pedro; Antunez de Souza, Paulo Roberto Jr; Atalaya, Juan Carlos; Paula Marciano, Fabianna de; Eskinasy, Alexandre
2006-10-15
The present report describes the development and application of two novel methods for implementing mitigation techniques for magnetic fields at power frequencies. The first method makes use of scaling rules for electromagnetic quantities, while the second applies a 2D shape optimization algorithm based on gradient methods. Before this project, the first method had already been successfully applied (by some of the authors of this report) to electromagnetic designs involving purely conductive materials (e.g. copper, aluminium), which implied a linear formulation. Here we went beyond this approach and developed a formulation involving ferromagnetic (i.e. non-linear) materials. Surprisingly, we obtained good equivalent replacements for test transformers by varying the input current. Although the validity of this equivalence is constrained to regions not too close to the source, the results can still be considered useful, as most field mitigation techniques are developed precisely for reducing the magnetic field in regions relatively far from the sources. The shape optimization method was applied in this project to calculate the optimal geometry of a purely conductive plate to mitigate the magnetic field originating from underground cables. The objective function was a weighted combination of the magnetic energy in the region of interest and the heat dissipated in the shielding material. To our surprise, shapes of complex structure, difficult to interpret (and probably even harder to anticipate), resulted from the applied process. However, the practical implementation (using approximations of these shapes) gave excellent experimental mitigation factors.
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large numbers of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales: in hydrosystem modelling, monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e. energy production vs. demand) requires a much finer resolution (e.g. hourly). Another drawback is the increase of control variables, constraints and objectives due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
Method for VAWT Placement on a Complex Building Structure
2013-06-01
Wind turbines are used to power the cooling system. A simulation of Building 216, which is the planned site of the cooling system, was performed. A wind flow analysis found that optimum placement of the wind turbines is at the front of the south end of the building. The method for placing the wind turbines is
DGDFT: A massively parallel method for large scale density functional theory calculations.
Hu, Wei; Lin, Lin; Yang, Chao
2015-09-28
We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.
DGDFT: A massively parallel method for large scale density functional theory calculations
Energy Technology Data Exchange (ETDEWEB)
Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)
2015-09-28
We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.
DGDFT: A massively parallel method for large scale density functional theory calculations
International Nuclear Information System (INIS)
Hu, Wei; Yang, Chao; Lin, Lin
2015-01-01
We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.
Interior Point Methods for Large-Scale Nonlinear Programming
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2005-01-01
Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005
Methods for testing of geometrical down-scaled rotor blades
DEFF Research Database (Denmark)
Branner, Kim; Berring, Peter
further developed since then. Structures in composite materials are generally difficult and time consuming to test for fatigue resistance. Therefore, several methods for testing of blades have been developed and exist today. Those methods are presented in [1]. Current experimental test performed on full...
International Nuclear Information System (INIS)
Liu, D.
2011-01-01
Increasing environmental concerns and discharge limitations have imposed additional challenges in treating process waters. Thus, the concept of 'Green Chemistry' was proposed, and green scale inhibitors became a focus of water treatment technology. Finding economical and environmentally friendly inhibitors is one of the major research focuses nowadays. In this dissertation, the inhibition performance of different phosphonates as CaCO₃ scale inhibitors in simulated cooling water was evaluated. Homo-, co-, and ter-polymers were also investigated for their performance as Ca-phosphonate inhibitors. Addition of polymers as inhibitors together with phosphonates could reduce Ca-phosphonate precipitation and enhance the inhibition efficiency for CaCO₃ scale. The synergistic effect of polyaspartic acid (PASP) and polyepoxysuccinic acid (PESA) on scale inhibition has been studied using both static and dynamic methods. Results showed that the anti-scaling performance of PASP combined with PESA was superior to that of PASP or PESA alone for CaCO₃, CaSO₄ and BaSO₄ scale. The influence of dosage, temperature and Ca²⁺ concentration was also investigated in a simulated cooling water circuit. Moreover, SEM analysis demonstrated the modification of crystalline morphology in the presence of PASP and PESA. In this work, we also investigated the respective inhibition effectiveness of copper and zinc ions for scaling in drinking water by the method of Rapid Controlled Precipitation (RCP). The results indicated that zinc and copper ions were highly efficient inhibitors at low concentrations, and SEM and IR analyses showed that copper and zinc ions could affect calcium carbonate germination and change the crystal morphology. Moreover, the influence of temperature and dissolved CO₂ on the scaling potential of a mineral water (Salvetat) in the presence of copper and zinc ions was studied by laboratory experiments. An ideal scale inhibitor should be a solid form...
Energy Technology Data Exchange (ETDEWEB)
Suga, K, E-mail: suga@me.osakafu-u.ac.jp [Department of Mechanical Engineering, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531 (Japan)
2013-06-15
The extensive evaluation studies of the lattice Boltzmann method for micro-scale flows (μ-flow LBM) by the author's group are summarized. For the two-dimensional test cases, force-driven Poiseuille flows, Couette flows, a combined nanochannel flow, and flows in a nanochannel with a square- or triangular cylinder are discussed. The three-dimensional (3D) test cases are nano-mesh flows and a flow between 3D bumpy walls. The reference data for the complex test flow geometries are from the molecular dynamics simulations of the Lennard-Jones fluid by the author's group. The focused flows are mainly in the slip and a part of the transitional flow regimes at Kn < 1. The evaluated schemes of the μ-flow LBMs are the lattice Bhatnagar-Gross-Krook and the multiple-relaxation time LBMs with several boundary conditions and discrete velocity models. The effects of the discrete velocity models, the wall boundary conditions, the near-wall correction models of the molecular mean free path and the regularization process are discussed to confirm the applicability and the limitations of the μ-flow LBMs for complex flow geometries. (invited review)
International Nuclear Information System (INIS)
Suga, K
2013-01-01
The extensive evaluation studies of the lattice Boltzmann method for micro-scale flows (μ-flow LBM) by the author's group are summarized. For the two-dimensional test cases, force-driven Poiseuille flows, Couette flows, a combined nanochannel flow, and flows in a nanochannel with a square- or triangular cylinder are discussed. The three-dimensional (3D) test cases are nano-mesh flows and a flow between 3D bumpy walls. The reference data for the complex test flow geometries are from the molecular dynamics simulations of the Lennard-Jones fluid by the author's group. The focused flows are mainly in the slip and a part of the transitional flow regimes at Kn < 1. The evaluated schemes of the μ-flow LBMs are the lattice Bhatnagar–Gross–Krook and the multiple-relaxation time LBMs with several boundary conditions and discrete velocity models. The effects of the discrete velocity models, the wall boundary conditions, the near-wall correction models of the molecular mean free path and the regularization process are discussed to confirm the applicability and the limitations of the μ-flow LBMs for complex flow geometries. (invited review)
Laser absorption spectroscopy - Method for monitoring complex trace gas mixtures
Green, B. D.; Steinfeld, J. I.
1976-01-01
A frequency stabilized CO2 laser was used for accurate determinations of the absorption coefficients of various gases in the wavelength region from 9 to 11 microns. The gases investigated were representative of the types of contaminants expected to build up in recycled atmospheres. These absorption coefficients were then used in determining the presence and amount of the gases in prepared mixtures. The effect of interferences on the minimum detectable concentration of the gases was measured. The accuracies of various methods of solution were also evaluated.
The MIMIC Method with Scale Purification for Detecting Differential Item Functioning
Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien
2009-01-01
This study incorporates a scale purification procedure into the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…
The US business cycle: power law scaling for interacting units with complex internal structure
Ormerod, Paul
2002-11-01
In the social sciences, there is increasing evidence of the existence of power law distributions. The distribution of recessions in capitalist economies has recently been shown to follow such a distribution. The preferred explanation for this is self-organised criticality. Gene Stanley and colleagues propose an alternative, namely that power law scaling can arise from the interplay between random multiplicative growth and the complex structure of the units composing the system. This paper offers a parsimonious model of the US business cycle based on similar principles. The business cycle, along with long-term growth, is one of the two features which distinguish capitalism from all previously existing societies. Yet, economics lacks a satisfactory theory of the cycle. The source of cycles is posited in economic theory to be a series of random shocks which are external to the system. In this model, the cycle is an internal feature of the system, arising from the level of industrial concentration of the agents and the interactions between them. The model, in contrast to existing economic theories of the cycle, accounts for the key features of output growth in the US business cycle in the 20th century.
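The mechanism invoked here (random multiplicative growth interacting with a constraint on the units) can be sketched with a minimal Kesten-type simulation; all parameters and the reflecting floor below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Random multiplicative (Gibrat-style) growth with a reflecting floor:
# a Kesten-type process whose stationary distribution has a power-law tail.
rng = np.random.default_rng(3)
n_units, n_steps, floor = 5000, 400, 1.0
x = np.ones(n_units)
for _ in range(n_steps):
    x *= rng.lognormal(mean=-0.01, sigma=0.15, size=n_units)  # random growth shocks
    x = np.maximum(x, floor)                                  # barrier sustains the tail

# For a power law, the log-log rank-size plot is approximately linear;
# its slope estimates (minus) the tail exponent.
xs = np.sort(x)[::-1]
ranks = np.arange(1, n_units + 1)
slope = np.polyfit(np.log(xs[:500]), np.log(ranks[:500]), 1)[0]
print(f"rank-size slope in the tail: {slope:.2f}")
```

Without the floor, the log of each unit's size performs a random walk and no power law emerges; the interplay of multiplicative shocks and the constraint is what produces the heavy tail.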
Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.
Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw
2011-08-01
In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. As part of the calibration of the model, sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In the steady-state and dynamic calibrations, successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis based on the calculation of the normalized sensitivity coefficient (S(i,j)) revealed that 17 (steady state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with the growth and decay of ordinary heterotrophic organisms and phosphorus-accumulating organisms. The rankings of the ten most sensitive parameters, established from calculations of the mean square sensitivity measure (δ(msqr)j), indicate that irrespective of whether the steady-state or dynamic calibration was performed, there is agreement in the sensitivity of the parameters.
LARGE-SCALE CO MAPS OF THE LUPUS MOLECULAR CLOUD COMPLEX
International Nuclear Information System (INIS)
Tothill, N. F. H.; Loehr, A.; Stark, A. A.; Lane, A. P.; Harnett, J. I.; Bourke, T. L.; Myers, P. C.; Parshley, S. C.; Wright, G. A.; Walker, C. K.
2009-01-01
Fully sampled degree-scale maps of the ¹³CO 2-1 and CO 4-3 transitions toward three members of the Lupus Molecular Cloud Complex (Lupus I, III, and IV) trace the column density and temperature of the molecular gas. Comparison with IR extinction maps from the c2d project requires most of the gas to have a temperature of 8-10 K. Estimates of the cloud mass from ¹³CO emission are roughly consistent with most previous estimates, while the line widths are higher, around 2 km s⁻¹. CO 4-3 emission is found throughout Lupus I, indicating widespread dense gas, and toward Lupus III and IV. Enhanced line widths at the NW end and along the edge of the B 228 ridge in Lupus I, and a coherent velocity gradient across the ridge, are consistent with interaction between the molecular cloud and an expanding H I shell from the Upper-Scorpius subgroup of the Sco-Cen OB Association. Lupus III is dominated by the effects of two HAe/Be stars, and shows no sign of external influence. Slightly warmer gas around the core of Lupus IV and a low line width suggest heating by the Upper-Centaurus-Lupus subgroup of Sco-Cen, without the effects of an H I shell.
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.
International Nuclear Information System (INIS)
Zhang Huiqun
2009-01-01
By using some exact solutions of an auxiliary ordinary differential equation, a direct algebraic method is described to construct the exact complex solutions for nonlinear partial differential equations. The method is implemented for the NLS equation, a new Hamiltonian amplitude equation, the coupled Schrodinger-KdV equations and the Hirota-Maccari equations. New exact complex solutions are obtained.
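As a concrete illustration of what an exact complex solution looks like (a textbook plane wave, not one of the new solutions obtained in the paper), the focusing NLS equation i u_t + u_xx + 2|u|²u = 0 is solved by u = a·exp(i(kx − ωt)) with ω = k² − 2a²; a finite-difference residual check confirms this numerically:

```python
import numpy as np

# Check that the complex plane wave u = a*exp(i(k x - w t)) satisfies the
# focusing NLS  i u_t + u_xx + 2|u|^2 u = 0  when w = k^2 - 2 a^2.
a, k = 1.3, 0.7
w = k**2 - 2 * a**2
u = lambda x, t: a * np.exp(1j * (k * x - w * t))

# Central finite differences for u_t and u_xx at an arbitrary point.
x0, t0, h = 0.4, 0.2, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
residual = 1j * u_t + u_xx + 2 * abs(u(x0, t0))**2 * u(x0, t0)
print(abs(residual) < 1e-4)  # True: the PDE is satisfied to discretization error
```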
International Nuclear Information System (INIS)
Ogino, Masao
2016-01-01
Actual problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and finite element analysis has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for the linear finite element equations suffer from slow or no convergence. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. Some research has addressed this issue, but the existing remedies are not suitable for complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-material problems, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and performs well compared with the original balancing preconditioner. (author)
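The diagonal-scaling ingredient can be illustrated on a toy multi-material system; this sketch shows only symmetric Jacobi scaling on a 1D jump-coefficient matrix (an illustrative stand-in, not the balancing domain decomposition machinery of the Scaled-BDD method):

```python
import numpy as np

# 1D diffusion stiffness matrix with a 1e6 jump in the material
# coefficient, mimicking a multi-material finite element system.
n = 50
k = np.ones(n + 1)
k[n // 2:] = 1e6                          # piecewise-constant coefficient
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = k[i] + k[i + 1]
    if i + 1 < n:
        A[i, i + 1] = A[i + 1, i] = -k[i + 1]

# Symmetric diagonal (Jacobi) scaling: D^(-1/2) A D^(-1/2).
d = np.sqrt(np.diag(A))
As = A / np.outer(d, d)

print(f"cond(A) = {np.linalg.cond(A):.1e}, cond(scaled) = {np.linalg.cond(As):.1e}")
```

The material jump inflates the condition number of A by the coefficient contrast; scaling the diagonal to unity removes that contrast, which is the effect the combined preconditioner exploits.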
Mathematical programming methods for large-scale topology optimization problems
DEFF Research Database (Denmark)
Rojas Labanda, Susana
for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs..., and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have almost not been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...
Laboratory-scale evaluations of alternative plutonium precipitation methods
International Nuclear Information System (INIS)
Martella, L.L.; Saba, M.T.; Campbell, G.K.
1984-01-01
Plutonium(III), (IV), and (VI) carbonate; plutonium(III) fluoride; plutonium(III) and (IV) oxalate; and plutonium(IV) and (VI) hydroxide precipitation methods were evaluated for conversion of plutonium nitrate anion-exchange eluate to a solid, and compared with the current plutonium peroxide precipitation method used at Rocky Flats. Plutonium(III) and (IV) oxalate, plutonium(III) fluoride, and plutonium(IV) hydroxide precipitations were the most effective of the alternative conversion methods tested because of the larger particle-size formation, faster filtration rates, and the low plutonium loss to the filtrate. These were found to be as efficient as, and in some cases more efficient than, the peroxide method. 18 references, 14 figures, 3 tables
SCALE--A Conceptual and Transactional Method of Legal Study.
Johnson, Darrell B.
1985-01-01
Southwestern University School of Law's two-year, intensive, year-round program, the Southwestern Conceptual Approach to Legal Education, which emphasizes hypothetical problems as teaching tools rather than the case-book method, is described. (MSE)
Modelling of complex heat transfer systems by the coupling method
Energy Technology Data Exchange (ETDEWEB)
Bacot, P.; Bonfils, R.; Neveu, A.; Ribuot, J. (Centre d' Energetique de l' Ecole des Mines de Paris, 75 (France))
1985-04-01
The coupling method proposed here is designed to reduce the size of the matrices which appear in the modelling of heat transfer systems. It consists of isolating the elements that can be modelled separately and, among the input variables of a component, identifying those which couple it to another component. By grouping these types of variables, one can identify a so-called coupling matrix of reduced size and relate it to the overall system. This matrix allows the calculation of the coupling temperatures as a function of external stresses and of the state of the overall system at the previous instant. The internal temperatures of the components are then determined from the previous ones. Two examples of application are presented, one concerning a dwelling unit and the second a solar water heater.
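The reduction to a small coupling matrix can be sketched as a static condensation (Schur complement) on a block system; the random system and block sizes below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Block system [[Aii, Aic], [Aci, Acc]] [xi; xc] = [bi; bc]: the internal
# unknowns xi of a component are eliminated, leaving a small "coupling
# matrix" S that acts only on the coupling variables xc.
rng = np.random.default_rng(0)
n_i, n_c = 8, 2
M = rng.standard_normal((n_i + n_c, n_i + n_c))
A = M @ M.T + (n_i + n_c) * np.eye(n_i + n_c)   # SPD, well conditioned
b = rng.standard_normal(n_i + n_c)

Aii, Aic = A[:n_i, :n_i], A[:n_i, n_i:]
Aci, Acc = A[n_i:, :n_i], A[n_i:, n_i:]
bi, bc = b[:n_i], b[n_i:]

S = Acc - Aci @ np.linalg.solve(Aii, Aic)                    # coupling (Schur) matrix
xc = np.linalg.solve(S, bc - Aci @ np.linalg.solve(Aii, bi)) # coupling unknowns
xi = np.linalg.solve(Aii, bi - Aic @ xc)                     # recover internal unknowns

print(np.allclose(np.concatenate([xi, xc]), np.linalg.solve(A, b)))  # True
```

Only the 2x2 matrix S couples the component to the rest of the system, which is the size reduction the method aims for.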
Rotating Turbulent Flow Simulation with LES and Vreman Subgrid-Scale Models in Complex Geometries
Directory of Open Access Journals (Sweden)
Tao Guo
2014-07-01
The large eddy simulation (LES) method based on the Vreman subgrid-scale model and the SIMPLEC algorithm was applied to accurately capture the flow character in a Francis turbine passage under the small-opening condition. The proposed methodology is effective for understanding the flow structure, and it overcomes the limitation of the eddy-viscosity model, which is excessively dissipative. Distributions of pressure, velocity, and vorticity, as well as some special flow structures in the guide vane near-wall zones and the blade passage, were obtained. The results show that the tangential velocity component of the fluid is dominant under the small-opening condition. This situation aggravates the impact between the wake vortices shed from the guide vanes. The critical influence of the spiral vortex in the blade passage and of the nonuniform flow around the guide vanes on the balance of the unit, combined with the transmission of stress waves, has been confirmed.
DEFF Research Database (Denmark)
Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger
2014-01-01
The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on Wi...
Magnetic storm generation by large-scale complex structure Sheath/ICME
Grigorenko, E. E.; Yermolaev, Y. I.; Lodkina, I. G.; Yermolaev, M. Y.; Riazantseva, M.; Borodkova, N. L.
2017-12-01
We study temporal profiles of interplanetary plasma and magnetic field parameters as well as magnetospheric indices. We use our catalog of large-scale solar wind phenomena for the 1976-2000 interval (see the catalog for 1976-2016 on the website ftp://ftp.iki.rssi.ru/pub/omni/, prepared on the basis of the OMNI database (Yermolaev et al., 2009)) and the double superposed epoch analysis method (Yermolaev et al., 2010). Our analysis showed (Yermolaev et al., 2015) that the average profiles of the Dst and Dst* indices decrease in the Sheath interval (magnetic storm activity increases) and increase in the ICME interval. This profile coincides with the inverted distribution of storm numbers in both intervals (Yermolaev et al., 2017). This behavior is explained by the following reasons. (1) The IMF magnitude in the Sheath is higher than in the Ejecta and close to the value in the MC. (2) The Sheath has a 1.5 times higher efficiency of storm generation than the ICME (Nikolaeva et al., 2015). Most so-called CME-induced storms are really Sheath-induced storms, and this fact should be taken into account in Space Weather prediction. The work was in part supported by the Russian Science Foundation, grant 16-12-10062. References. 1. Nikolaeva N.S., Y. I. Yermolaev and I. G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Res., 53(2), 119-127. 2. Yermolaev Yu. I., N. S. Nikolaeva, I. G. Lodkina and M. Yu. Yermolaev (2009), Catalog of Large-Scale Solar Wind Phenomena during 1976-2000, Cosmic Res., 47(2), 81-94. 3. Yermolaev, Y. I., N. S. Nikolaeva, I. G. Lodkina, and M. Y. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, 2177-2186. 4. Yermolaev Yu. I., I. G. Lodkina, N. S. Nikolaeva and M. Yu. Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch
Thinking Inside the Box: Simple Methods to Evaluate Complex Treatments
Directory of Open Access Journals (Sweden)
J. Michael Menke
2011-10-01
We risk ignoring cheaper and safer medical treatments because they cannot be patented, lack profit potential, require too much patient-contact time, or do not have scientific results. Novel medical treatments may be difficult to evaluate for a variety of reasons, such as patient selection bias, the effect of the package of care, or the lack of identification of the active elements of treatment. Whole Systems Research (WSR) is an approach designed to assess the performance of complete packages of clinical management. While the WSR method is compelling, there is no standard procedure for WSR, and its implementation may be intimidating. The truth is that WSR methodological tools are neither new nor complicated. There are two sequential steps, or boxes, that guide WSR methodology: establishing system predictability, followed by an audit of system element effectiveness. We describe the implementation of WSR with particular attention to threats to validity (Shadish, Cook, & Campbell, 2002; Shadish & Heinsman, 1997). DOI: 10.2458/azu_jmmss.v2i1.12365
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
Freund, Roland
1989-01-01
We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
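The "equivalent real linear systems" approach mentioned at the end can be made concrete in a few lines (the paper's point is that iterating on the complex symmetric form is preferable; this sketch shows only the real reformulation, on an illustrative random system):

```python
import numpy as np

# A complex symmetric (not Hermitian) system Ax = b can be rewritten as a
# real system of twice the size for (Re x, Im x):
#   [ Re A  -Im A ] [Re x]   [Re b]
#   [ Im A   Re A ] [Im x] = [Im b]
rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M + M.T                 # complex symmetric: A == A.T but A != A.conj().T
A += 4 * np.eye(n)          # keep the example well conditioned
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

R = np.block([[A.real, -A.imag], [A.imag, A.real]])
y = np.linalg.solve(R, np.concatenate([b.real, b.imag]))
x = y[:n] + 1j * y[n:]

print(np.allclose(A @ x, b))  # True
```

The doubled real system is what the abstract calls the "obvious approach"; note that R is real nonsymmetric even though A is complex symmetric, which is one reason the structure-preserving Lanczos variant is attractive.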
Diego A. Riveros-Iregui; Brian L. McGlynn
2009-01-01
We investigated the spatial and temporal variability of soil CO₂ efflux across 62 sites of a 393-ha complex watershed of the northern Rocky Mountains. Growing season (83 day) cumulative soil CO₂ efflux varied from ~300 to ~2000 g CO₂ m⁻², depending upon landscape position, with a median of 879.8 g CO₂ m⁻². Our findings revealed that the highest soil CO₂ efflux rates were...
International Nuclear Information System (INIS)
Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David
2006-01-01
We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
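The surrogate data machinery can be sketched generically; the Lempel-Ziv phrase count and Fourier-phase surrogates below are standard ingredients of this kind of test, while the paper's actual complexity estimator and noise models are more elaborate:

```python
import numpy as np

def lz_complexity(bits):
    """Lempel-Ziv (1976) phrase count of a binary string."""
    i, c, n = 0, 0, len(bits)
    while i < n:
        l = 1
        # grow the phrase while it already occurs earlier in the string
        while i + l <= n and bits[i:i + l] in bits[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def phase_surrogate(x, rng):
    """Surrogate with the same amplitude spectrum but randomized phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = phases[-1] = 0.0          # keep DC and Nyquist bins real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def to_bits(v):
    m = np.median(v)
    return ''.join('1' if e > m else '0' for e in v)

rng = np.random.default_rng(4)
x = np.sin(0.2 * np.arange(1024)) + 0.1 * rng.standard_normal(1024)  # "signal" + noise
s = phase_surrogate(x, rng)

# The surrogate preserves the power spectrum but destroys any determinism,
# so a complexity statistic computed on data vs. surrogates can serve as a
# discriminating statistic.
same_spectrum = np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s)))
print(same_spectrum, lz_complexity(to_bits(x)), lz_complexity(to_bits(s)))
```

In a full test one would generate an ensemble of surrogates and reject the null hypothesis of linear stochastic dynamics only if the data's statistic falls outside the surrogate distribution.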
Investigating salt frost scaling by using statistical methods
DEFF Research Database (Denmark)
Hasholt, Marianne Tange; Clemmensen, Line Katrine Harder
2010-01-01
A large data set comprising data for 118 concrete mixes on mix design, air void structure, and the outcome of freeze/thaw testing according to SS 13 72 44 has been analysed by use of statistical methods. The results show that with regard to mix composition, the most important parameter...
Modeling Hydrodynamics on the Wave Group Scale in Topographically Complex Reef Environments
Reyns, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.
2016-02-01
The knowledge of the characteristics of waves and the associated wave-driven currents is important for sediment transport and morphodynamics, nutrient dynamics and larval dispersion within coral reef ecosystems. Reef-lined coasts differ from sandy beaches in that they have a steep offshore slope, that the non-sandy bottom topography is very rough, and that the distance between the point of maximum short wave dissipation and the actual coastline is usually large. At this short wave breakpoint, long waves are released, and these infragravity (IG) scale motions account for the bulk of the water level variance on the reef flat, the lagoon and, eventually, the sandy beaches fronting the coast through run-up. These IG-dominated water level motions are reinforced during extreme events such as cyclones or swells through larger incident-band wave heights and low frequency wave resonance on the reef. Recently, a number of hydro(-morpho)dynamic models that have the capability to model these IG waves have successfully been applied to morphologically differing reef environments. One of these models is the XBeach model, which is curvilinear in nature. This poses serious problems when trying to model an entire atoll, for example, as it is extremely difficult to build curvilinear grids that are optimal for the simulation of hydrodynamic processes while maintaining the topology in the grid. One solution to remedy this problem of grid connectivity is the use of unstructured grids. We present an implementation of the wave action balance on the wave group scale with feedback to the flow momentum balance, which is the foundation of XBeach, within the framework of the unstructured Delft3D Flexible Mesh model. The model can be run in stationary as well as in instationary mode, and it can be forced by regular waves, time series or wave spectra. We show how the code is capable of modeling the wave-generated flow at a number of topographically complex reef sites and for a number of
Energy Technology Data Exchange (ETDEWEB)
Chen, Li; He, Ya-Ling [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Kang, Qinjun [Computational Earth Science Group (EES-16), Los Alamos National Laboratory, Los Alamos, NM (United States); Tao, Wen-Quan, E-mail: wqtao@mail.xjtu.edu.cn [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)
2013-12-15
A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of a multi-scale problem is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macroscopic scalar, whose governing equation is the convection–diffusion equation. The CFVLBM and the RO are validated on several typical physicochemical problems and then applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domain interface. •Coupled multi-scale multiple physicochemical processes in a micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.
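The reconstruction operator itself is not reproduced in the abstract, but its zeroth-order ingredient, building lattice distribution functions whose moments match a given macroscopic scalar, can be sketched for a D2Q9 convection-diffusion scalar. This is only the equilibrium part; the paper's full RO also carries non-equilibrium (gradient) information:

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities, lattice sound speed cs^2 = 1/3
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)

def equilibrium_scalar(C, u, cs2=1/3):
    """Equilibrium distributions g_i^eq = w_i * C * (1 + e_i.u / cs2) for a
    convection-diffusion scalar C advected with velocity u."""
    return w * C * (1 + (e @ u) / cs2)

# example: concentration 0.7 advected with a small velocity
g = equilibrium_scalar(0.7, np.array([0.1, -0.05]))
```

By construction sum_i g_i = C and sum_i g_i e_i = C·u, which is exactly the consistency condition any reconstruction at the FVM/LBM interface must satisfy.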
LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM
Higgins, G.H.; Crane, W.W.T.
1959-05-19
A large-scale process for production and purification of Cm/sup 242/ is described. Aluminum slugs containing Am are irradiated and declad in a NaOH--NaNO/sub 3/ solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH/sub 4/OH, and H/sub 2/O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)/sub 3/ precipitation. The precipitates are then combined and dissolved in HCl, and refractory oxides are centrifuged out. These oxides are then fused with Na/sub 2/CO/sub 3/ and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl--HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)
Using mixed methods to develop and evaluate complex interventions in palliative care research.
Farquhar, Morag C; Ewing, Gail; Booth, Sara
2011-12-01
There is increasing interest in combining qualitative and quantitative research methods to provide comprehensiveness and greater knowledge yield. Mixed methods are valuable in the development and evaluation of complex interventions. They are therefore particularly valuable in palliative care research, where the majority of interventions are complex and the identification of outcomes particularly challenging. This paper aims to introduce the role of mixed methods in the development and evaluation of complex interventions in palliative care, and how they may be used in palliative care research. The paper defines mixed methods and outlines why and how mixed methods are used to develop and evaluate complex interventions, with a pragmatic focus on design, data collection issues and data analysis. Useful texts are signposted and illustrative examples provided of mixed method studies in palliative care, including a detailed worked example of the development and evaluation of a complex intervention in palliative care for breathlessness. Key challenges to conducting mixed methods in palliative care research are identified in relation to data collection, data integration in analysis, costs and dissemination, and how these might be addressed. The development and evaluation of complex interventions in palliative care benefit from the application of mixed methods. Mixed methods enable better understanding of whether and how an intervention works (or does not work) and inform the design of subsequent studies. However, they can be challenging: mixed method studies in palliative care will benefit from working with agreed protocols, multidisciplinary teams and engaging staff with appropriate skill sets.
From fuel cells to batteries: Synergies, scales and simulation methods
Bessler, Wolfgang G.
2011-01-01
The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...
The Needs and Provision Complexity Scale: a first psychometric analysis using multicentre data.
Siegert, Richard J; Jackson, Diana M; Turner-Stokes, Lynne
2014-07-01
A psychometric evaluation of the Needs and Provision Complexity Scale (NPCS). The NPCS is designed to evaluate both needs for health and social support (NPCS-Needs) and services provided to meet those needs (NPCS-Gets). A consecutive cohort of patients was recruited from nine specialist neurorehabilitation units in London. Four hundred and twenty-eight patients were assessed at discharge (63.1% males; mean age 49 years), of whom 73.6% had acquired brain injury (49.5% stroke/subarachnoid, 14.7% traumatic brain injury, 9.3% 'other acquired brain injury'), 8.9% spinal cord injury, 6.1% peripheral neuropathy, 4.9% progressive neurological and 6.3% other neurological conditions. The NPCS-Needs was completed by the clinical team at discharge and 212 patients reported NPCS-Gets after six months. NPCS-Gets repeatability was tested in a subsample (n = 60). Factor analysis identified two principal domains ('Health and personal care' and 'Social care and support') accounting for 66% of variance, and suggested a large general factor underpinning the NPCS. Internal consistency was high (alpha = 0.94) and repeatability acceptable. Intraclass correlation coefficients for domain scores were healthcare 0.67 (95% confidence interval (CI) 0.48-0.80); personal care 0.83 (0.73-0.90); rehabilitation 0.65 (0.45-0.78); social/family support 0.66 (0.46-0.79) and environment 0.84 (0.74-0.90). Linear-weighted kappas for item-by-item agreement ranged from 0.42 to 0.83. Concurrent validity was demonstrated through correlations with measures of dependency and community integration. Notwithstanding a 50% response rate after six months, the NPCS has good internal consistency, a robust two-factor structure, acceptable test-retest reliability and initial evidence of concurrent validity. © The Author(s) 2014.
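The internal-consistency figure quoted above (alpha = 0.94) is Cronbach's alpha, which is simple enough to sketch on toy data; the respondent scores below are invented for illustration and are not NPCS data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons x items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# four hypothetical respondents, three strongly correlated items
toy = np.array([[2, 3, 3], [4, 4, 5], [1, 2, 2], [5, 4, 5]], dtype=float)
alpha = cronbach_alpha(toy)
```

High alpha here simply reflects that the toy items rise and fall together across respondents, the same property the NPCS items were tested for.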
Directory of Open Access Journals (Sweden)
Chiara Biscarini
2013-01-01
Full Text Available The numerical simulation of fast-moving fronts originating from dam or levee breaches is a challenging task for small scale engineering projects. In this work, the use of fully three-dimensional Navier-Stokes (NS equations and lattice Boltzmann method (LBM is proposed for testing the validity of, respectively, macroscopic and mesoscopic mathematical models. Macroscopic simulations are performed employing an open-source computational fluid dynamics (CFD code that solves the NS combined with the volume of fluid (VOF multiphase method to represent free-surface flows. The mesoscopic model is a front-tracking experimental variant of the LBM. In the proposed LBM the liquid-gas interface is represented as a surface with zero thickness that handles the passage of the density field from the light to the dense phase and vice versa. A single set of LBM equations represents the liquid phase, while the free surface is characterized by an additional variable, the liquid volume fraction. Case studies show advantages and disadvantages of the proposed LBM and NS with specific regard to the computational efficiency and accuracy in dealing with the simulation of flows through complex geometries. In particular, the validation of the model application is developed by simulating the flow propagating through a synthetic urban setting and comparing results with analytical and experimental laboratory measurements.
Directory of Open Access Journals (Sweden)
Michael J Emslie
Full Text Available High biodiversity ecosystems are commonly associated with complex habitats. Coral reefs are highly diverse ecosystems, but are under increasing pressure from numerous stressors, many of which reduce live coral cover and habitat complexity with concomitant effects on other organisms such as reef fishes. While previous studies have highlighted the importance of habitat complexity in structuring reef fish communities, they employed gradient or meta-analyses which lacked a controlled experimental design over broad spatial scales to explicitly separate the influence of live coral cover from overall habitat complexity. Here a natural experiment using a long-term (20 year), spatially extensive (∼115,000 km²) dataset from the Great Barrier Reef revealed the fundamental importance of overall habitat complexity for reef fishes. Reductions of both live coral cover and habitat complexity had substantial impacts on fish communities compared to relatively minor impacts after major reductions in coral cover but not habitat complexity. Where habitat complexity was substantially reduced, species abundances broadly declined and a far greater number of fish species were locally extirpated, including economically important fishes. This resulted in decreased species richness and a loss of diversity within functional groups. Our results suggest that the retention of habitat complexity following disturbances can ameliorate the impacts of coral declines on reef fishes, so preserving their capacity to perform important functional roles essential to reef resilience. These results add to a growing body of evidence about the importance of habitat complexity for reef fishes, and represent the first large-scale examination of this question on the Great Barrier Reef.
Vnukov, A. A.; Shershnev, M. B.
2018-01-01
The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
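The abstract does not name the three interpolation methods that were implemented; as one representative example, bilinear interpolation, a common choice for image scaling, can be sketched with corner-aligned coordinate mapping (this is an illustrative implementation, not the authors' code):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Scale a 2D grayscale image to (out_h, out_w) by bilinear interpolation.
    Output corners are aligned with input corners."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)          # source row coordinate per output row
    xs = np.linspace(0, in_w - 1, out_w)          # source column coordinate per output column
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    tl, tr = img[np.ix_(y0, x0)], img[np.ix_(y0, x1)]
    bl, br = img[np.ix_(y1, x0)], img[np.ix_(y1, x1)]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)    # a 4x4 linear ramp
up = bilinear_resize(img, 7, 7)
```

Each output row depends only on the input, so a parallel implementation of the kind studied in the paper would distribute the (here vectorized) row loop across threads.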
The architecture of ArgR-DNA complexes at the genome-scale in Escherichia coli
DEFF Research Database (Denmark)
Cho, Suhyung; Cho, Yoo-Bok; Kang, Taek Jin
2015-01-01
DNA-binding motifs that are recognized by transcription factors (TFs) have been well studied; however, challenges remain in determining the in vivo architecture of TF-DNA complexes on a genome-scale. Here, we determined the in vivo architecture of Escherichia coli arginine repressor (ArgR)-DNA co...
Friedrich, T.; Timmermann, A.; Menviel, L.; Elison Timm, O.; Mouchet, A.; Roche, D.M.V.A.P.
2010-01-01
The mechanism triggering centennial-to-millennial-scale variability of the Atlantic Meridional Overturning Circulation (AMOC) in the earth system model of intermediate complexity LOVECLIM is investigated. It is found that for several climate boundary conditions such as low obliquity values (∼22.1°)
Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers
Sifaou, Houssem; Kammoun, Abla; Sanguinetti, Luca; Debbah, Merouane; Alouini, Mohamed-Slim
2016-01-01
This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS) endowed with M antennas communicates with K single-antenna user equipments (UEs). Particularly, we aim at reducing the complexity
Czech Academy of Sciences Publication Activity Database
Landau, A.; Haritan, I.; Kaprálová-Žďánská, Petra Ruth; Moiseyev, N.
2015-01-01
Roč. 113, 19-20 (2015), s. 3141-3146 ISSN 0026-8976 R&D Projects: GA MŠk(CZ) LG13029 Institutional support: RVO:68378271 Keywords : resonance * complex scaling * non-Hermitian * ab-initio Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.837, year: 2015
International Nuclear Information System (INIS)
Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.
1999-01-01
A spectrophotometric method for the determination of stability constants making use of Job's curves has been developed. Using this method, stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For the sake of comparison, the stability constants were also determined using Harvey and Manning's method. The values obtained by the two methods compare well. This new method has been named Purohit's method. (author)
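The abstract gives no formulas, but the arithmetic behind a continuous-variation (Job) estimate for a 1:1 complex is short: with A the absorbance at the Job maximum, A_ext the extrapolated absorbance for complete complexation and C_T the total analytical concentration, the fraction complexed is alpha = A/A_ext and K follows from mass balance. The numbers below are illustrative, not taken from the paper:

```python
def job_stability_constant(A, A_ext, C_T):
    """K for a 1:1 complex from the Job-plot maximum (equimolar point):
    [ML] = alpha*C_T/2, [M] = [L] = (1 - alpha)*C_T/2, K = [ML] / ([M][L])."""
    alpha = A / A_ext
    return 2 * alpha / ((1 - alpha) ** 2 * C_T)

# hypothetical readings: measured 0.40, extrapolated 0.50, total conc 1 mM
K = job_stability_constant(A=0.40, A_ext=0.50, C_T=1e-3)
```

With alpha = 0.8 and C_T = 1 mM this gives K = 4 × 10⁴, i.e. the closer the Job maximum approaches the extrapolated intersection, the larger the stability constant.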
International Nuclear Information System (INIS)
Ishihara, Kenichi; Hamada, Takeshi; Meshii, Toshiyuki
2017-01-01
In this paper, a new method for scaling the crack tip stress distribution under the small scale yielding condition was proposed and named the T-scaling method. This method makes it possible to identify the different stress distributions for materials with different tensile properties but an identical load in terms of K or J. Then, by assuming that the temperature dependence of a material is represented by the temperature dependence of its stress-strain relationship, a method to predict the fracture load at an arbitrary temperature from the already known fracture load at a reference temperature was proposed. This method combined the T-scaling method with the knowledge that “fracture stress for slip-induced cleavage fracture is temperature independent.” Once the fracture load is predicted, the fracture toughness J c at the temperature under consideration can be evaluated by running an elastic-plastic finite element analysis. Finally, the above-mentioned framework for predicting the J c temperature dependence of a material in the ductile-to-brittle transition temperature region was validated for the 0.55% carbon steel JIS S55C. The proposed framework seems to offer a possibility of solving the problem that the master curve faces in the relatively higher temperature region, by requiring only tensile tests. (author)
Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.
Echinaka, Yuki; Ozeki, Yukiyasu
2016-10-01
The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.
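The essence of the parameter-free fit described above is that the scaling function is estimated nonparametrically from the data themselves. A minimal sketch with a Nadaraya-Watson (Gaussian-kernel) smoother and synthetic data, rather than the Bayesian machinery of the paper, is shown below; the collapse exponent is recovered by minimizing the scatter of one "system size" around the master curve estimated from the other:

```python
import numpy as np

def kernel_smooth(u, y, u_eval, bw=0.05):
    """Nadaraya-Watson estimate of the master curve at the points u_eval."""
    W = np.exp(-0.5 * ((u_eval[:, None] - u[None, :]) / bw) ** 2)
    return (W @ y) / W.sum(axis=1)

# synthetic data obeying y = F(x / L) with F(u) = exp(-u): true exponent a = 1
x = np.linspace(1.0, 4.0, 30)
y_a, y_b = np.exp(-x / 2.0), np.exp(-x / 4.0)    # "system sizes" L = 2 and L = 4

def collapse_cost(a):
    """Distance of branch b from the master curve estimated from branch a."""
    u_a, u_b = x / 2.0 ** a, x / 4.0 ** a
    inside = (u_b > u_a.min() + 0.15) & (u_b < u_a.max() - 0.15)
    if inside.sum() < 5:                          # not enough overlap to judge
        return np.inf
    pred = kernel_smooth(u_a, y_a, u_b[inside])
    return float(np.sum((y_b[inside] - pred) ** 2))

grid = np.linspace(0.5, 1.5, 21)
best_a = grid[int(np.argmin([collapse_cost(a) for a in grid]))]
```

Because no parametric form for F is ever assumed, the same code works whatever the shape of the scaling function, which is the point of the kernel-method approach.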
Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.
2017-12-01
A better understanding of dynamics in complex systems, such as the Earth's climate, is one of the key challenges for contemporary science and society. A large amount of experimental data requires new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for a better understanding of underlying mechanisms and a key to modeling and prediction of such systems. This study introduces and applies information-theoretic diagnostics to phase and amplitude time series of different wavelet components of the observed data that characterize El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical, as well as statistical, climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to an improved prediction. Moreover, the statistical framework applied here is suitable for directly inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.
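The cross-scale diagnostics described above start from wavelet phases and a mutual-information estimate. A minimal numpy version on synthetic phase-locked versus incommensurate oscillators, which is not the study's data or its surrogate-tested estimator, looks like this:

```python
import numpy as np

def wavelet_phase(x, period, w0=6.0):
    """Instantaneous phase from a complex Morlet wavelet tuned to the given period."""
    s = period * w0 / (2 * np.pi)                 # scale matching the center frequency
    t = np.arange(-int(4 * s), int(4 * s) + 1)
    psi = np.exp(1j * w0 * t / s) * np.exp(-t ** 2 / (2 * s ** 2))
    return np.angle(np.convolve(x, psi, mode="same"))

def mutual_information(a, b, bins=8):
    """Plug-in (binned) mutual information estimate in nats."""
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

t = np.arange(2048)
p1 = wavelet_phase(np.sin(2 * np.pi * t / 64), 64)[400:-400]        # reference oscillation
p2 = wavelet_phase(np.sin(2 * np.pi * t / 64 + 0.5), 64)[400:-400]  # phase-locked to p1
p3 = wavelet_phase(np.sin(2 * np.pi * t / 97.3), 64)[400:-400]      # incommensurate period
mi_locked = mutual_information(p1, p2)
mi_indep = mutual_information(p1, p3)
```

The locked pair shares a deterministic phase relation and yields high mutual information, while the incommensurate pair spreads nearly uniformly over the joint phase histogram; in practice the significance of such differences is assessed against surrogate data.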
Complex data modeling and computationally intensive methods for estimation and prediction
Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics
2015-01-01
The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...
The geological processes time scale of the Ingozersky block TTG complex (Kola Peninsula)
Nitkina, Elena
2013-04-01
Ingozersky block, located in the Tersky Terrane of the Kola Peninsula, is composed of Archean gneisses and granitoids [1; 5; 8]. On the regional geological maps the Archaean basement complexes have been called tonalite-trondhjemite-gneiss (TTG) complexes [6]. In previous studies [1; 3; 4; 5; 7] the following rock types were established within the Ingozersky block: biotite, biotite-amphibole and amphibole-biotite gneisses, granites, granodiorites and pegmatites [2]. In the rocks of the complex the following sequence of endogenous processes is observed (based on [5]): stage 1 - formation of the biotite gneisses; stage 2 - intrusion of dikes of basic rocks; stage 3 - deformation and foliation; stage 4 - emplacement of granite bodies and migmatization; stage 5 - emplacement of large pegmatite bodies; stage 6 - formation of thin pegmatite and granite veins, with and without garnet; stage 7 - quartz veins. Previous U-Pb isotopic dating was done on samples of biotite gneisses, amphibole-biotite gneisses and biotite-amphibole gneisses. The Sm-Nd TDM ages are 3613 Ma for the biotite gneisses, 2596 Ma for the amphibole-biotite gneisses and 3493 Ma for the biotite-amphibole gneisses. U-Pb ages of the metamorphic processes in the TTG complex are: 2697±9 Ma for the biotite gneiss, 2725±2 and 2667±7 Ma for the amphibole-biotite gneisses, and 2727±5 Ma for the biotite-amphibole gneisses. The age of about 3149±46 Ma, defined for the biotite gneisses by single zircon dating, corresponds to the time of formation of the gneiss protolith. The purpose of the present studies is to establish the age of emplacement of the granite and pegmatite bodies and to construct a time scale of geological processes for the Ingozersky block. Preliminary U-Pb isotopic dating of zircon and other accessory minerals gave ages for granites - 2615±8 Ma, migmatites - 2549±30 Ma and veined granites - 1644±7 Ma. As a result of the isotope U-Pb dating of the different Ingozersky TTG
Biosensors in the small scale: methods and technology trends.
Senveli, Sukru U; Tigli, Onur
2013-03-01
This study presents a review on biosensors with an emphasis on recent developments in the field. A brief history accompanied by a detailed description of the biosensor concepts is followed by rising trends observed in contemporary micro- and nanoscale biosensors. Performance metrics to quantify and compare different detection mechanisms are presented. A comprehensive analysis on various types and subtypes of biosensors are given. The fields of interest within the scope of this review are label-free electrical, mechanical and optical biosensors as well as other emerging and popular technologies. Especially, the latter half of the last decade is reviewed for the types, methods and results of the most prominently researched detection mechanisms. Tables are provided for comparison of various competing technologies in the literature. The conclusion part summarises the noteworthy advantages and disadvantages of all biosensors reviewed in this study. Furthermore, future directions that the micro- and nanoscale biosensing technologies are expected to take are provided along with the immediate outlook.
Large-scale atomic calculations using variational methods
Energy Technology Data Exchange (ETDEWEB)
Joensson, Per
1995-01-01
Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p{sup 2}P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest 2P states in sodium and silver. 77 refs, 2 figs, 14 tabs.
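The frozen-core propagation described above amounts to solving a one-electron time-dependent Schrödinger equation. A minimal 1D sketch, with an invented soft-core potential and illustrative field parameters rather than the thesis's actual model, uses the unitary Crank-Nicolson scheme:

```python
import numpy as np

# 1D grid and a soft-core model potential (atomic units; illustrative values)
n, L = 201, 20.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
T = -0.5 * (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n)
            + np.diag(np.ones(n - 1), 1)) / dx ** 2       # kinetic energy, 3-point stencil
V0 = np.diag(-1.0 / np.sqrt(x ** 2 + 2.0))                 # soft-core Coulomb well

psi = np.exp(-x ** 2)                                      # initial wave packet
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

dt, E0, w = 0.05, 0.05, 0.057                              # time step, field amplitude, frequency
I = np.eye(n)
for step in range(40):
    # length-gauge laser coupling E(t)*x, evaluated at the midpoint of the step
    H = T + V0 + np.diag(E0 * np.sin(w * (step + 0.5) * dt) * x)
    # Crank-Nicolson: (I + i dt/2 H) psi_new = (I - i dt/2 H) psi, unitary for Hermitian H
    psi = np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

norm = np.sum(np.abs(psi) ** 2) * dx
```

Norm conservation to machine precision is the standard sanity check for such propagators, since the Crank-Nicolson update is exactly unitary for a Hermitian Hamiltonian at any step size.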
Purification of 2-oxo acid dehydrogenase multienzyme complexes from ox heart by a new method.
Stanley, C J; Perham, R N
1980-01-01
A new method is described that allows the parallel purification of the pyruvate dehydrogenase and 2-oxoglutarate dehydrogenase multienzyme complexes from ox heart without the need for prior isolation of mitochondria. All the assayable activity of the 2-oxo acid dehydrogenase complexes in the disrupted tissue is made soluble by the inclusion of non-ionic detergents such as Triton X-100 or Tween-80 in the buffer used for the initial extraction of the enzyme complexes. The yields of the pyruvate...
BRAND program complex for neutron-physical experiment simulation by the Monte-Carlo method
International Nuclear Information System (INIS)
Androsenko, A.A.; Androsenko, P.A.
1984-01-01
Possibilities of the BRAND program complex for simulating neutron and γ-radiation transport by the Monte-Carlo method are briefly described. The complex includes the following modules: a geometry module, a source module, a detector module, and modules that sample the particle direction vector after an interaction and the free path. The complex is written in the FORTRAN language and implemented on the BESM-6 computer
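The free-path and direction modules mentioned above correspond, in any Monte Carlo transport code, to two standard sampling rules: inversion sampling of exponential path lengths, l = -ln(ξ)/Σ_t, and isotropic direction sampling. A generic sketch (not the actual BRAND source, which is FORTRAN for the BESM-6):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_free_paths(sigma_t, n):
    """Inversion sampling of exponential free paths: l = -ln(xi) / Sigma_t."""
    return -np.log(1.0 - rng.random(n)) / sigma_t

def sample_directions(n):
    """Isotropic unit vectors: mu = cos(theta) uniform on [-1, 1], phi uniform on [0, 2pi)."""
    mu = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(1.0 - mu ** 2)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), mu])

paths = sample_free_paths(2.0, 200_000)   # total cross section Sigma_t = 2 (illustrative)
dirs = sample_directions(200_000)
```

The sampled mean free path converges to 1/Σ_t and the direction cosines average to zero, which is how such modules are typically verified.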
Marchán, Daniel F; Fernández, Rosa; de Sosa, Irene; Díaz Cosín, Darío J; Novo, Marta
2017-07-01
Spatial and temporal aspects of the evolution of cryptic species complexes have received less attention than species delimitation within them. The phylogeography of the cryptic complex Hormogaster elisae (Oligochaeta, Hormogastridae) lacks knowledge on several aspects, including the small-scale distribution of its lineages or the palaeogeographic context of their diversification. To shed light on these topics, a dense specimen collection was performed in the center of the Iberian Peninsula - resulting in 28 new H. elisae collecting points, some of them as close as 760 m from each other - for a higher resolution of the distribution of the cryptic lineages and the relationships between the populations. Seven molecular regions were amplified: mitochondrial subunit 1 of cytochrome c oxidase (COI), 16S rRNA and tRNA Leu, Ala, and Ser (16S t-RNAs), one nuclear ribosomal gene (a fragment of 28S rRNA) and one nuclear protein-encoding gene (histone H3) in order to infer their phylogenetic relationships. Different representation methods of the pairwise divergence in the cytochrome oxidase I sequence (heatmap and genetic landscape graphs) were used to visualize the genetic structure of H. elisae. A nested approach sensu Mairal et al. (2015) (connecting the evolutionary rates of two datasets of different taxonomic coverage) was used to obtain one approximation to a time-calibrated phylogenetic tree based on external Clitellata fossils and a wide molecular dataset. Our results indicate that limited active dispersal ability and ecological or biotic barriers could explain the isolation of the different cryptic lineages, which never co-occur. Rare events of long distance dispersal through hydrochory appear as one of the possible causes of range expansion. Copyright © 2017 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Cunningham, J; Gatenby, R
2014-01-01
Purpose: To develop a simulation to catalyze a reevaluation of common assumptions about 3 dimensional diffusive processes and help cell biologists gain a more nuanced, intuitive understanding of the true physical hurdles of protein signaling cascades. Furthermore, to discuss the possibility of intracellular electrodynamics as a critical, unrecognized component of cellular biology and protein dynamics that is necessary for optimal information flow from the cell membrane to the nucleus. Methods: The Unity 3D gaming physics engine was used to build an accurate virtual scale model of the cytoplasm within a few hundred nanometers of the nuclear membrane. A cloud of simulated pERK proteins is controlled by the physics simulation, where diffusion is based on experimentally measured values and the electrodynamics are based on theoretical nano-fluid dynamics. The trajectories of pERK within the cytoplasm and through the 1250 nuclear pores on the nuclear surface is recorded and analyzed. Results: The simulation quickly demonstrates that pERKs moving solely by diffusion will rarely locate and come within capture distance of a nuclear pore. The addition of intracellular electrodynamics between charges on the nuclear pore complexes and on pERKs increases the number of successful translocations by allowing the electro-physical attractive effects to draw in pERKs from the cytoplasm. The effects of changes in intracellular shielding ion concentrations allowed for estimation of the “capture radius” under varying conditions. Conclusion: The simulation allows a shift in perspective that is paramount in attempting to communicate the scale and dynamics of intracellular protein cascade mechanics. This work has allowed researchers to more fully understand the parameters involved in intracellular electrodynamics, such as shielding anion concentration and protein charge. As these effects are still far below the spatial resolution of currently available measurement technology this
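The diffusion-versus-capture question posed above can be made quantitative with a toy Brownian simulation. All parameters below (step size, start height, walker and step counts, capture radii) are invented for illustration and are not taken from the Unity 3D model described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps, step_sd = 500, 400, 0.02   # illustrative, dimensionless units

# Brownian trajectories starting a fixed distance above a "pore" at the origin
steps = rng.normal(0.0, step_sd, size=(n_walkers, n_steps, 3))
paths = np.array([0.0, 0.0, 0.3]) + np.cumsum(steps, axis=1)
closest = np.linalg.norm(paths, axis=2).min(axis=1)   # closest approach per walker

def capture_fraction(radius):
    """Fraction of walkers whose closest approach falls within the capture radius."""
    return float((closest < radius).mean())

f_small, f_large = capture_fraction(0.05), capture_fraction(0.15)
```

Because the trajectories are generated once and then tested against each radius, the captured set for the small radius is a subset of that for the large one, mirroring the abstract's point that enlarging the effective "capture radius" (e.g. by electrostatic attraction) directly raises the translocation yield.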
A large-scale benchmark of gene prioritization methods.
Guala, Dimitri; Sonnhammer, Erik L L
2017-04-21
In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are unbiased with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
Time dependence, complex scaling, and the calculation of resonances in many-electron systems
International Nuclear Information System (INIS)
Nicolaides, C.A.; Beck, D.R.
1978-01-01
The theory deals with certain aspects of the formal properties of atomic and molecular highly excited nonstationary states and the problem of calculating their wave functions, energies, and widths. The conceptual framework is a decay theory based on the consistent definition and calculation of the t = 0 localized state, |ψ_0>. Given this framework, the following topics are treated: The variational calculation of ψ_0 and E_0 using a previously published theory that generalized the projection operator approach to many-electron systems. The exact definition of the resonance energy. The possibility of bound states in the continuum. The relation of ψ_0 to the resonance (Gamow) function ψ and of the Hamiltonian to the rotated Hamiltonian H(θ), based on the notion of perturbation of boundary conditions in the asymptotic region. The variational calculation of real and complex energies employing matrix elements of H and H^2 with square-integrable and resonance functions. The mathematical structure of the time evolution of |ψ_0> and the possibility of observing nonexponential decays in certain autoionizing states that are very close to the ionization threshold. A many-body theory of atomic and molecular resonances that employs the coordinate rotation method. 107 references
International Nuclear Information System (INIS)
Li, Rui; Wang, Jun
2016-01-01
A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of real stock markets and of the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices is performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which demonstrates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate stock market dynamics. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
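For readers unfamiliar with the complexity measure used here, the LZ76 phrase-counting definition underlying Lempel–Ziv complexity can be sketched in a few lines. This is a generic implementation, not the authors' code; the input is a symbol sequence, e.g. price returns binarized by sign.

```python
import numpy as np

def lempel_ziv_complexity(s: str) -> int:
    """Count the number of distinct phrases in the LZ76 parsing of s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Grow the current phrase while it already occurs earlier in the sequence.
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1   # a new phrase ends here
        i += l
    return c

print(lempel_ziv_complexity("0001101001000101"))  # 6 (classic Kaspar-Schuster example)

# Binarize a toy return series by sign, as is commonly done before LZC analysis.
rng = np.random.default_rng(0)
seq = "".join("1" if r > 0 else "0" for r in rng.normal(size=200))
c = lempel_ziv_complexity(seq)
```

Regular sequences yield few phrases, while random ones approach the theoretical maximum of about n/log2(n), which is why the count discriminates market regimes.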
Energy Technology Data Exchange (ETDEWEB)
Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun
2016-01-08
A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of real stock markets and of the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices is performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which demonstrates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate stock market dynamics. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
Indian Academy of Sciences (India)
Rahul Pandit
2008-10-31
Investigations of surface-tension effects due to small-scale complex boundaries
Feng, Jiansheng
these two different types of surfaces differed by about 50°-60°, with the low-adhesion surfaces at about 120°-130° and the high-adhesion surfaces at about 70°-80°. Characterizations of both the microscopic structures and the macroscopic wetting properties of these product surfaces allowed us to pinpoint the structural features responsible for specific wetting properties. It was found that the advancing contact angle was mainly determined by the primary structures, while the receding contact angle was largely affected by the side-wall slope of the secondary features. This study established a platform for further exploration of the structural aspects of surface wettability. In the third and final project (Chapter 4), we demonstrated a new type of microfluidic channel that enables asymmetric wicking of wetting fluids based on structure-induced, direction-dependent surface-tension effects. By decorating the side-walls of open microfluidic channels with tilted fins, we were able to experimentally demonstrate preferential wicking behaviors of various IPA-water mixtures with a range of contact angles in these channels. A simplified 2D model was established to explain the wicking asymmetry, and a complete 3D model was developed to provide more accurate quantitative predictions. The design principles developed in this study provide an additional scheme for controlling the spreading of fluids. The research presented in this dissertation spans a wide range of physical phenomena (wicking, wetting, and capillarity) and involves a number of computational and experimental techniques, yet all of these projects are intrinsically united under a common theme: we want to better understand how simple fluids respond to small-scale complex surface structures as manifestations of surface-tension effects. We hope our findings can serve as building blocks for a larger scale endeavor of scientific research and engineering development. After all, the pursuit of knowledge is most
Baryon asymmetry via leptogenesis in a neutrino mass model with complex scaling
International Nuclear Information System (INIS)
Samanta, Rome; Ghosal, Ambar; Chakraborty, Mainak; Roy, Probir
2017-01-01
Baryogenesis via leptogenesis is investigated in a specific model of light neutrino masses and mixing angles. The latter was proposed on the basis of an assumed complex-extended scaling property of the neutrino Majorana mass matrix M_ν, derived with a type-1 seesaw from a Dirac mass matrix m_D and a heavy singlet neutrino Majorana mass matrix M_R. One of its important features, highlighted here, is that there is a common source of the origin of a nonzero θ_13 and the CP violating lepton asymmetry through the imaginary part of m_D. The model predicted CP violation to be maximal for the Dirac type and vanishing for the Majorana type. We assume strongly hierarchical mass eigenvalues for M_R. The leptonic CP asymmetry parameter ε_1^α with lepton flavor α, originating from the decays of the lightest of the heavy neutrinos N_1 (of mass M_1) at a temperature T ∼ M_1, is what matters here, with the lepton asymmetries originating from the decays of N_2,3 being washed out. The light leptonic and heavy neutrino number densities (normalized to the entropy density) are evolved via Boltzmann equations down to electroweak temperatures to yield a baryon asymmetry through sphaleronic transitions. The effects of flavored vs. unflavored leptogenesis in the three mass regimes (1) M_1 < 10^9 GeV, (2) 10^9 GeV < M_1 < 10^12 GeV and (3) M_1 > 10^12 GeV are numerically worked out for both a normal and an inverted mass ordering of the light neutrinos. Corresponding results on the baryon asymmetry of the universe are obtained, displayed and discussed. For values close to the best-fit points of the input neutrino mass and mixing parameters, obtained from neutrino oscillation experiments, successful baryogenesis is achieved for the mass regime (2) and a normal mass ordering of the light neutrinos with a nonzero θ_13 playing a crucial role. However, the other possibility of an inverted mass ordering for the same mass regime, though disfavored, cannot be excluded. A
Baryon asymmetry via leptogenesis in a neutrino mass model with complex scaling
Energy Technology Data Exchange (ETDEWEB)
Samanta, Rome; Ghosal, Ambar [Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700064 (India); Chakraborty, Mainak [Centre of Excellence in Theoretical and Mathematical Sciences, SOA University, Khandagiri Square, Bhubaneswar 751030 (India); Roy, Probir, E-mail: rome.samanta@saha.ac.in, E-mail: mainak.chakraborty2@gmail.com, E-mail: probirrana@gmail.com, E-mail: ambar.ghosal@saha.ac.in [Center for Astroparticle Physics and Space Science, Bose Institute, Kolkata 700091 (India)
2017-03-01
Baryogenesis via leptogenesis is investigated in a specific model of light neutrino masses and mixing angles. The latter was proposed on the basis of an assumed complex-extended scaling property of the neutrino Majorana mass matrix M_ν, derived with a type-1 seesaw from a Dirac mass matrix m_D and a heavy singlet neutrino Majorana mass matrix M_R. One of its important features, highlighted here, is that there is a common source of the origin of a nonzero θ_13 and the CP violating lepton asymmetry through the imaginary part of m_D. The model predicted CP violation to be maximal for the Dirac type and vanishing for the Majorana type. We assume strongly hierarchical mass eigenvalues for M_R. The leptonic CP asymmetry parameter ε_1^α with lepton flavor α, originating from the decays of the lightest of the heavy neutrinos N_1 (of mass M_1) at a temperature T ∼ M_1, is what matters here, with the lepton asymmetries originating from the decays of N_2,3 being washed out. The light leptonic and heavy neutrino number densities (normalized to the entropy density) are evolved via Boltzmann equations down to electroweak temperatures to yield a baryon asymmetry through sphaleronic transitions. The effects of flavored vs. unflavored leptogenesis in the three mass regimes (1) M_1 < 10^9 GeV, (2) 10^9 GeV < M_1 < 10^12 GeV and (3) M_1 > 10^12 GeV are numerically worked out for both a normal and an inverted mass ordering of the light neutrinos. Corresponding results on the baryon asymmetry of the universe are obtained, displayed and discussed. For values close to the best-fit points of the input neutrino mass and mixing parameters, obtained from neutrino oscillation experiments, successful baryogenesis is achieved for the mass regime (2) and a normal mass ordering of the light neutrinos with a nonzero θ_13 playing a crucial role. However, the other
Large scale IRAM 30 m CO-observations in the giant molecular cloud complex W43
Carlhoff, P.; Nguyen Luong, Q.; Schilke, P.; Motte, F.; Schneider, N.; Beuther, H.; Bontemps, S.; Heitsch, F.; Hill, T.; Kramer, C.; Ossenkopf, V.; Schuller, F.; Simon, R.; Wyrowski, F.
2013-12-01
We aim to fully describe the distribution and location of dense molecular clouds in the giant molecular cloud complex W43. It was previously identified as one of the most massive star-forming regions in our Galaxy. To trace the moderately dense molecular clouds in the W43 region, we initiated W43-HERO, a large program using the IRAM 30 m telescope, which covers a wide dynamic range of scales from 0.3 to 140 pc. We obtained on-the-fly maps in 13CO (2-1) and C18O (2-1) with a high spectral resolution of 0.1 km s^-1 and a spatial resolution of 12''. These maps cover an area of ~1.5 square degrees and include the two main clouds of W43 and the lower density gas surrounding them. A comparison to Galactic models and previous distance calculations confirms the location of W43 near the tangential point of the Scutum arm at approximately 6 kpc from the Sun. The resulting intensity cubes of the observed region are separated into subcubes, which are centered on single clouds and then analyzed in detail. The optical depth, excitation temperature, and H2 column density maps are derived from the 13CO and C18O data. These results are then compared to those derived from Herschel dust maps. The mass of a typical cloud is several 10^4 M⊙ while the total mass in the dense molecular gas (>10^2 cm^-3) in W43 is found to be ~1.9 × 10^6 M⊙. Probability distribution functions obtained from column density maps derived from molecular line data and Herschel imaging show a log-normal distribution for low column densities and a power-law tail for high densities. A flatter slope for the molecular line data probability distribution function may imply that those selectively show the gravitationally collapsing gas. Appendices are available in electronic form at http://www.aanda.org. The final datacubes (13CO and C18O) for the entire survey are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A24
International Nuclear Information System (INIS)
Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George
2016-01-01
A frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively handle a large number of molecular species and reactions, this paper develops a reduction method based on the CLE using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in stochastic chemical kinetics the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
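The full two-scale averaging scheme is beyond a short snippet, but the baseline it reduces, Gillespie's SSA, is compact. The birth-death model and rate constants below are illustrative assumptions, not the paper's gene regulatory network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy birth-death model: a species is produced at constant rate k
# and degraded at rate g per molecule (assumed for illustration).
k, g = 10.0, 1.0
t, t_end, m = 0.0, 50.0, 0
samples = []
while t < t_end:
    a1, a2 = k, g * m                 # reaction propensities
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
    if rng.random() < a1 / a0:        # choose which reaction fires
        m += 1
    else:
        m -= 1
    samples.append(m)

# After burn-in, the mean molecule count should sit near k/g = 10.
mean_count = np.mean(samples[len(samples) // 2:])
```

Each event requires sampling an exponential waiting time from the total propensity, which is exactly why the SSA becomes expensive when fast reactions dominate a0, the situation the averaging principle addresses.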
Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...
GRAPHICS-IMAGE MIXED METHOD FOR LARGE-SCALE BUILDINGS RENDERING
Directory of Open Access Journals (Sweden)
Y. Zhou
2018-05-01
Full Text Available Urban 3D model data is huge and unstructured; LOD and out-of-core algorithms are usually used to reduce the amount of data drawn in each frame to improve rendering efficiency. When the scene is large enough, even complex optimization algorithms struggle to achieve good results. Building on traditional studies, a novel idea was developed. We propose a graphics and image mixed method for large-scale building rendering. Firstly, the view field is divided into several regions; the graphics-image mixed method is used to render the scene on both the screen and an FBO, and the FBO is then blended with the screen. The algorithm is tested on huge CityGML model data for the urban areas of New York, containing 188195 public building models, and compared with the Cesium platform. The experimental results show that the system runs smoothly and confirm that the algorithm can achieve roaming of more massive building scenes under the same hardware conditions, and can render the scene without visual loss.
Second-order wave diffraction by a circular cylinder using scaled boundary finite element method
International Nuclear Information System (INIS)
Song, H; Tao, L
2010-01-01
The scaled boundary finite element method (SBFEM) has achieved remarkable success in structural mechanics and fluid mechanics, combining the advantages of both FEM and BEM. Most previous works focus on linear problems, in which the superposition principle is applicable. However, many physical problems in the real world are nonlinear and are described by nonlinear equations, challenging the application of the existing SBFEM model. A popular idea for solving a nonlinear problem is decomposing the nonlinear equation into a number of linear equations and then solving them individually. In this paper, second-order wave diffraction by a circular cylinder is solved by SBFEM. By splitting the forcing term into two parts, the physical problem is described as two second-order boundary-value problems with different asymptotic behaviour at infinity. Expressing the velocity potentials as a series of depth eigenfunctions, both of the 3D boundary-value problems are decomposed into a number of 2D boundary-value sub-problems, which are solved semi-analytically by SBFEM. Only the cylinder boundary is discretised, with 1D curved finite elements on the circumference of the cylinder, while the radial differential equation is solved completely analytically. The method can be extended to solve more complex wave-structure interaction problems, resulting in direct engineering applications.
Stepwise integral scaling method and its application to severe accident phenomena
International Nuclear Information System (INIS)
Ishii, M.; Zhang, G.
1993-10-01
Severe accidents in light water reactors are characterized by the occurrence of multiphase flow with complicated phase changes, chemical reactions and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of a well-established and reliable scaling method and scaling criteria. In view of this, the stepwise integral scaling method is developed for severe accident analyses. This new scaling method is quite different from the conventional approach. However, its focus on dominant transport mechanisms and its use of the integral response of the system make this method relatively simple to apply to very complicated multiphase flow problems. In order to demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in DCH, (2) corium spreading in the BWR MARK-I containment, and (3) in-core boil-off and heating processes. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not previously been available for severe accident analyses
Das, Subhraseema; Subuddhi, Usharani
2015-11-01
Inclusion complexes of diclofenac sodium (DS) with β-cyclodextrin (β-CD) were prepared in order to improve the solubility, dissolution and oral bioavailability of the poorly water-soluble drug. The effect of the method of preparation of the DS/β-CD inclusion complexes (ICs) was investigated. The ICs were prepared by microwave irradiation and also by the conventional methods of kneading, co-precipitation and freeze drying. Though the freeze-drying method is usually referred to as the gold standard among the conventional methods, its long processing time limits its utility. Microwave irradiation accomplishes the process in a very short span of time and is a more environmentally benign method. Better efficacy of the microwaved inclusion product (MW) was observed in terms of the dissolution, antimicrobial activity and antibiofilm properties of the drug. Thus microwave irradiation can be utilized as an improved, time-saving and cost-effective method for the generation of DS/β-CD inclusion complexes.
Directory of Open Access Journals (Sweden)
Savić Ivan
2009-01-01
Full Text Available The aim of this work was to optimize a GFC method for the analysis of bioactive metal (Cu, Co and Fe) complexes with oligosaccharides (dextran and pullulan). The bioactive metal complexes with oligosaccharides were synthesized by an original procedure. GFC was used to study the molecular weight distribution and polymerization degree of the oligosaccharides and bioactive metal complexes. The metal binding in the complexes depends on the ligand polymerization degree and the presence of OH groups in the coordination sphere of the central metal ion. The interactions between oligosaccharides and metal ions are very important in veterinary medicine, agriculture, pharmacy and medicine.
International Nuclear Information System (INIS)
Dobrynina, N.A.
1992-01-01
The position of bioinorganic chemistry in the system of natural sciences, as well as the relations between bioinorganic and biocoordination chemistry, was considered. The content of chemical elements in the geosphere and biosphere was analyzed. Characteristic features of biometal complexing with bioligands were pointed out. By way of example, complex equilibria in solution were studied by the method of pH-metric titration using mathematical simulation. The advantages of combining these methods when studying biosystems were emphasized
The relationship between the Wigner-Weyl kinetic formalism and the complex geometrical optics method
Maj, Omar
2004-01-01
The relationship between two different asymptotic techniques developed in order to describe the propagation of waves beyond the standard geometrical optics approximation, namely, the Wigner-Weyl kinetic formalism and the complex geometrical optics method, is addressed. More specifically, a solution of the wave kinetic equation, relevant to the Wigner-Weyl formalism, is obtained which yields the same wavefield intensity as the complex geometrical optics method. Such a relationship is also disc...
Directory of Open Access Journals (Sweden)
Abdallah Bengueddoudj
2017-05-01
Full Text Available In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach by considering a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. The robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics establish the accuracy of the proposed fusion method.
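A PCA-based fusion rule of the kind mentioned for the approximation coefficients is commonly implemented by weighting each source band with the normalized components of the principal eigenvector of their joint covariance. The sketch below is a generic version of that idea, not the authors' exact rule.

```python
import numpy as np

def pca_fuse(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fuse two same-size coefficient bands with PCA-derived weights.

    The weights are the (absolute, normalized) components of the principal
    eigenvector of the 2x2 covariance matrix of the flattened bands.
    """
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # principal eigenvector
    w = v / v.sum()                             # normalize to weights
    return w[0] * a + w[1] * b
```

The rule automatically favors the band carrying more variance: a flat (zero-variance) band receives weight zero, while two identical bands are averaged.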
International Nuclear Information System (INIS)
Hey-Suk Kim; Mi-Soo Shin; Dong-Soon Jang; Tae-In Ohm
2004-01-01
Considering the rapid variation of waste composition and the increasingly severe regulation of pollutant emission in this country, the importance of developing a reliable computer program for a full-scale, stoker-type incinerator cannot be emphasized too much, especially in view of the proper design and optimal determination of operating conditions for existing and future facilities. To this end, a comprehensive numerical model of the waste off-gas combustion process, for a capacity of 200 tons/day, has been successfully developed. This includes the development of several phenomenological models, such as the municipal waste off-gas reaction and NO pollutant generation and destruction in a turbulence-dominated environment. In this study a number of sound assumptions have been made for the NO reaction model, the 3-D geometry of the incinerator and the waste-bed model, to achieve efficient incorporation of the empirical models and to enhance the stability of the calculation process. First of all, the turbulence-related, complex combustion chemistry involved in the NO reaction is modeled by the harmonic mean method, which weighs the relative strengths of the rates of chemistry and turbulent mixing. Further, the 3-D rectangular shape of the incinerator is approximated by a 3-D axisymmetric geometry of equivalent area. And the modeling of the complex waste-burning process on the moving grate is described by a pure gaseous combustion process of waste off-gas. The program developed in this study is successfully validated by comparison with experimental data such as temperature and NO concentration profiles in the incinerator located at the 4th industrial complex of Daejeon, S. Korea. Using the program developed, a series of parametric investigations has been made to evaluate the SNCR process and thereby various important design and operating variables. The major parameters considered in this parametric study are heating value of
Energy Technology Data Exchange (ETDEWEB)
Al Mouhamed, Mayez
1977-09-15
In a number of complex physical systems the accessible signals are often characterized by random fluctuations about a mean value. The fluctuations (signature) often transmit information about the state of the system that the mean value cannot predict. This study is undertaken to elaborate statistical methods of anomaly detection on the basis of signature analysis of the noise inherent in the process. The algorithm presented first learns the characteristics of normal operation of a complex process. Then it detects small deviations from the normal behavior. The algorithm can be implemented in a medium-sized computer for on-line application. (author)
Shrekenhamer, Abraham; Gottesman, Stephen R.
2012-10-01
A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale coded apertures, at wavelengths where diffraction effects are significant, has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each line-of-sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for the other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and the incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique to each direction and wavelength. Finally, the set of power patterns is summed to produce the full-waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale coded apertures onto large-scale focal plane arrays, supporting the development and optimization of coded aperture masks and image reconstruction algorithms.
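The shift-and-phase-sum step can be sketched schematically. Here the reference pattern is a random placeholder array rather than a physically computed field, and the hole offsets, grid size, and line-of-sight direction are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                    # focal-plane grid size (illustrative)
wavelength = 1.0          # arbitrary units

# Stand-in for the reference complex field from the central hole,
# computed on a plane twice the focal-array size.
ref = rng.normal(size=(2 * N, 2 * N)) + 1j * rng.normal(size=(2 * N, 2 * N))

holes = [(0, 0), (5, -3), (-8, 2)]   # hole offsets in pixels (assumed)
k = 2 * np.pi / wavelength
los = np.array([0.01, 0.02])         # one line-of-sight direction, radians

field = np.zeros((N, N), dtype=complex)
for dx, dy in holes:
    # Translational shift: crop the oversized reference at the offset position.
    shifted = ref[N // 2 + dy : N // 2 + dy + N, N // 2 + dx : N // 2 + dx + N]
    # Electrical phase shift from the hole offset and incoming direction.
    phase = np.exp(1j * k * (dx * los[0] + dy * los[1]))
    field += phase * shifted         # coherent sum over holes

power = np.abs(field) ** 2           # detector power for this LOS/wavelength
```

The memory saving comes from storing only the single oversized reference per LOS/wavelength and generating every hole's pattern on the fly by cropping and phasing it.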
Random walk-based similarity measure method for patterns in complex object
Directory of Open Access Journals (Sweden)
Liu Shihu
2017-04-01
Full Text Available This paper discusses the similarity of patterns in complex objects. A complex object is composed both of attribute information of patterns and of relational information between patterns. Bearing in mind the specificity of complex objects, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed, and Algorithms 1 and 2 show the calculation procedure. One can find that this method makes full use of both the attribute information and the relational information. Finally, a synthetic example shows that the proposed similarity measurement method is valid.
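The relational half of such a measure is often built from random-walk reachability. The sketch below is our own construction, not the paper's Algorithms 1 and 2: it sums discounted powers of the row-normalized transition matrix, so patterns connected by many short walks score high.

```python
import numpy as np

def walk_similarity(adj: np.ndarray, steps: int = 10, alpha: float = 0.8) -> np.ndarray:
    """Similarity of patterns from random-walk reachability on the relation graph.

    S = sum_{t=1..steps} alpha^t P^t, with P the row-normalized adjacency;
    the result is symmetrized so that sim(i, j) == sim(j, i).
    """
    deg = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, deg, out=np.zeros_like(adj, dtype=float), where=deg > 0)
    S = np.zeros_like(P)
    Pt = np.eye(len(adj))
    for t in range(1, steps + 1):
        Pt = Pt @ P              # t-step transition probabilities
        S += alpha ** t * Pt     # discount longer walks
    return (S + S.T) / 2

# Small relation graph: a path 0-1-2 plus an isolated pattern 3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]], dtype=float)
S = walk_similarity(adj)
```

An integrated measure of the kind the abstract proposes would then combine S with a separate attribute-based similarity, e.g. by a weighted sum.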
Wang, Xianbin; Chen, Wei; Wang, Zhihong; Zhang, Xixiang; Yue, Weisheng; Lai, Zhiping
2015-01-01
Embodiments of the present disclosure provide for materials that include a pre-designed patterned, porous membrane (e.g., micro- and/or nano-scale patterned), structures or devices that include a pre-designed patterned, porous membrane, methods of making pre-designed patterned, porous membranes, methods of separation, and the like.
Wang, Xianbin
2015-01-22
Embodiments of the present disclosure provide for materials that include a pre-designed patterned, porous membrane (e.g., micro- and/or nano-scale patterned), structures or devices that include a pre-designed patterned, porous membrane, methods of making pre-designed patterned, porous membranes, methods of separation, and the like.
Energy Technology Data Exchange (ETDEWEB)
Clemens, M.; Weiland, T. [Technische Hochschule Darmstadt (Germany)
1996-12-31
In the field of computational electrodynamics, the discretization of Maxwell's equations using the Finite Integration Theory (FIT) yields very large, sparse, complex symmetric linear systems of equations. For this class of complex non-Hermitian systems a number of conjugate gradient-type algorithms is considered. The complex version of the biconjugate gradient (BiCG) method by Jacobs can be extended to a whole class of methods for complex-symmetric systems, SCBiCG(T, n), which require only one matrix-vector multiplication per iteration step. In this class the well-known conjugate orthogonal conjugate gradient (COCG) method for complex-symmetric systems corresponds to the case n = 0. The case n = 1 yields the BiCGCR method, which corresponds to the conjugate residual algorithm in the real-valued case. These methods, in combination with a minimal residual smoothing process, are applied separately to practical 3D electro-quasistatic and eddy-current problems in electrodynamics. The practical performance of the SCBiCG methods is compared with that of other methods such as QMR and TFQMR.
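The COCG iteration named above (the n = 0 member of the SCBiCG family) is ordinary CG with the Hermitian inner product replaced by the unconjugated bilinear form, which is what exploits complex symmetry with a single matrix-vector product per step. A minimal dense-matrix sketch (a production code would use a sparse operator and preconditioning):

```python
import numpy as np

def cocg(a, b, tol=1e-10, maxit=200):
    """Conjugate orthogonal conjugate gradient for complex *symmetric*
    (A = A^T, non-Hermitian) systems: standard CG with x.T @ y in place
    of the Hermitian inner product x.conj().T @ y."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rho = r @ r                       # unconjugated: sum(r*r), not sum(conj(r)*r)
    for _ in range(maxit):
        ap = a @ p                    # the single mat-vec per iteration
        alpha = rho / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x
```

Note that `r @ r` on complex NumPy arrays does not conjugate, which is exactly the bilinear form COCG needs; the method can break down when that form vanishes, which is one motivation for the broader SCBiCG class.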
A path method for finding energy barriers and minimum energy paths in complex micromagnetic systems
International Nuclear Information System (INIS)
Dittrich, R.; Schrefl, T.; Suess, D.; Scholz, W.; Forster, H.; Fidler, J.
2002-01-01
Minimum energy paths and energy barriers are calculated for complex micromagnetic systems. The method is based on the nudged elastic band method and uses finite-element techniques to represent granular structures. The method was found to be robust and fast both for simple test problems and for large systems such as patterned granular media. It is used to estimate the energy barriers in CoCr-based perpendicular recording media.
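The nudged-elastic-band update rule can be shown on an analytic 2-D energy surface (the paper applies it to finite-element micromagnetic energies; this toy double-well and all parameter values are illustrative only). Interior images feel the perpendicular component of the true force plus a tangential spring force that keeps them evenly spaced:

```python
import numpy as np

def neb(grad, start, end, n_images=9, k=1.0, step=0.01, iters=2000):
    """Minimal nudged-elastic-band sketch.

    grad       : gradient of the energy, callable on a 2-D point
    start, end : the two energy minima (band endpoints, held fixed)
    """
    path = np.linspace(start, end, n_images)        # straight initial band
    for _ in range(iters):
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]          # local tangent estimate
            tau /= np.linalg.norm(tau)
            g = grad(path[i])
            # true force, with the component along the band projected out
            f_perp = -(g - (g @ tau) * tau)
            # spring force acts only along the tangent
            f_spring = k * (((path[i + 1] - path[i])
                             - (path[i] - path[i - 1])) @ tau) * tau
            path[i] += step * (f_perp + f_spring)
    return path
```

The barrier is then estimated as the maximum energy along the converged band (a climbing-image variant would locate the saddle more precisely).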
III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.
Davis-Kean, Pamela E; Jager, Justin
2017-06-01
For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited both in the power to detect differences and in the demographic diversity needed to generalize clearly and broadly. Thus, in this chapter we discuss the value of using existing large-scale data sets to test the complex questions of child development, and how to develop future large-scale data sets that are both representative and able to answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.
Song, Jinling; Qu, Yonghua; Wang, Jindi; Wan, Huawei; Liu, Xiaoqing
2007-06-01
The radiosity method is based on computer simulation of the real 3D structures of vegetation, such as leaves, branches and stems, which are composed of many facets. Using this method we can simulate the canopy reflectance and its bidirectional distribution in the visible and NIR regions. But as vegetation becomes more complex, more facets are needed to compose it, so large amounts of memory and long view-factor computation times are required; these are the bottlenecks of using the radiosity method to calculate the canopy BRF of large-scale vegetation scenes. We derived a new method to solve this problem, the main idea of which is to abstract the crown shapes of vegetation and simplify their structures, thereby reducing the number of facets. The facets are assigned optical properties according to the reflectance, transmission and absorption of the real-structure canopy. On this basis, we can simulate the canopy BRF of mixed scenes with different vegetation species at large scale. In this study, taking broadleaf trees as an example and starting from their structural characteristics, we abstracted their crowns as ellipsoid shells and simulated the visible and NIR canopy BRF of a large-scale scene containing ellipsoids of different crown shapes and heights. From this study we conclude that LAI, LAD, the gap probability, and the sunlit and shaded surfaces are the most important parameters for simulating the simplified vegetation canopy BRF, and that the radiosity method can provide canopy BRF data under a wide range of conditions for our research.
International Nuclear Information System (INIS)
Surdutovich, E.; Yakubovich, A.V.; Solov'yov, A.V.
2010-01-01
We present the latest advances of the multi-scale approach to radiation damage caused by irradiation of a tissue with energetic ions and report calculations of complex DNA damage and of the effects of thermal spikes on biomolecules. The multi-scale approach aims to quantify the most important physical, chemical, and biological phenomena taking place during and following irradiation with ions and to provide better means, of adequate accuracy, for clinically necessary calculations. We suggest a way of quantifying complex clustered damage, one of the most important features of the radiation damage caused by ions. This quantification allows the study of how the clusterization of DNA lesions affects the lethality of damage. We discuss the first results of molecular dynamics simulations of ubiquitin in the environment of thermal spikes, which are predicted to occur in tissue for a short time after an ion's passage in the vicinity of the ion tracks. (authors)
Method of producing carbon coated nano- and micron-scale particles
Perry, W. Lee; Weigle, John C; Phillips, Jonathan
2013-12-17
A method of making carbon-coated nano- or micron-scale particles comprising entraining particles in an aerosol gas, providing a carbon-containing gas, providing a plasma gas, mixing the aerosol gas, the carbon-containing gas, and the plasma gas proximate a torch, bombarding the mixed gases with microwaves, and collecting resulting carbon-coated nano- or micron-scale particles.
J.A. Bikker; O.W. Steenbeek; F. Torracchi
2010-01-01
Administrative costs per participant appear to vary widely across pension funds in different countries. These costs are important because they reduce the rate of return on the investments of pension funds, and consequently raise the cost of retirement security. Using unique data on 90 pension funds over the period 2004-2008, this paper examines the impact of scale, the complexity of pension plans, and service quality on the administrative costs of pension funds, and compares those costs acros...
Thomas C. Brown; George L. Peterson
2009-01-01
The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...
A simple method for determining polymeric IgA-containing immune complexes.
Sancho, J; Egido, J; González, E
1983-06-10
A simplified assay to measure polymeric IgA-immune complexes in biological fluids is described. The assay is based upon the specific binding of a secretory component for polymeric IgA. In the first step, multimeric IgA (monomeric and polymeric) immune complexes are determined by the standard Raji cell assay. Secondly, labeled secretory component added to the assay is bound to polymeric IgA-immune complexes previously fixed to Raji cells, but not to monomeric IgA immune complexes. To avoid false positives due to possible complement-fixing IgM immune complexes, prior IgM immunoadsorption is performed. Using anti-IgM antiserum coupled to CNBr-activated Sepharose 4B this step is not time-consuming. Polymeric IgA has a low affinity constant and binds weakly to Raji cells, as Scatchard analysis of the data shows. Thus, polymeric IgA immune complexes do not bind to Raji cells directly through Fc receptors, but through complement breakdown products, as with IgG-immune complexes. Using this method, we have been successful in detecting specific polymeric-IgA immune complexes in patients with IgA nephropathy (Berger's disease) and alcoholic liver disease, as well as in normal subjects after meals of high protein content. This new, simple, rapid and reproducible assay might help to study the physiopathological role of polymeric IgA immune complexes in humans and animals.
Dual linear structured support vector machine tracking method via scale correlation filter
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker comprised of a DLSSVM model and a scale correlation filter obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
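The scale-estimation component can be illustrated with frequency-domain correlation, the trick that makes correlation-filter trackers fast (the DLSSVM position tracker is omitted here). This is a deliberate simplification: a real scale correlation filter learns a regularized filter from training samples rather than correlating with the raw template, and both function names are our own.

```python
import numpy as np

def correlate_fft(template, patch):
    """Circular cross-correlation via the FFT: O(n log n) instead of
    sliding-window matching, as in correlation-filter trackers."""
    f = np.fft.fft2(patch)
    h = np.conj(np.fft.fft2(template, s=patch.shape))
    return np.real(np.fft.ifft2(f * h))

def best_scale(template, patches_by_scale):
    """Pick the candidate scale whose patch gives the largest
    correlation-response peak against the target template."""
    responses = {s: correlate_fft(template, p).max()
                 for s, p in patches_by_scale.items()}
    return max(responses, key=responses.get)
```

In a tracker, `patches_by_scale` would hold image patches cropped around the estimated position at several scale factors and resized to the template size; the argmax response updates the scale estimate.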
A new high-throughput LC-MS method for the analysis of complex fructan mixtures
DEFF Research Database (Denmark)
Verspreet, Joran; Hansen, Anders Holmgaard; Dornez, Emmie
2014-01-01
In this paper, a new liquid chromatography-mass spectrometry (LC-MS) method for the analysis of complex fructan mixtures is presented. In this method, columns with a trifunctional C18 alkyl stationary phase (T3) were used and their performance compared with that of a porous graphitized carbon (PGC...
Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks
Renkewitz, Frank; Jahn, Georg
2012-01-01
We validate an eye-tracking method applicable for studying memory processes in complex cognitive tasks. The method is tested with a task on probabilistic inferences from memory. It provides valuable data on the time course of processing, thus clarifying previous results on heuristic probabilistic inference. Participants learned cue values of…
Gilstrap, Donald L.
2013-01-01
In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…
Richards, William D., Jr.
Previous methods for determining the communication structure of organizations work well for small or simple organizations, but are either inadequate or unwieldy for use with large complex organizations. An improved method uses a number of different measures and a series of successive approximations to order the communication matrix such that…
On a computational method for modelling complex ecosystems by superposition procedure
International Nuclear Information System (INIS)
He Shanyu.
1986-12-01
In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref
Abdi Tabari, Mahmoud; Ivey, Toni A.
2015-01-01
This paper provides a methodological review of previous research on cognitive task complexity, since the term emerged in 1995, and investigates why much research was more quantitative rather than qualitative. Moreover, it sheds light onto the studies which used the mixed-methods approach and determines which version of the mixed-methods designs…
Low-complexity video encoding method for wireless image transmission in capsule endoscope.
Takizawa, Kenichi; Hamaguchi, Kiyoshi
2010-01-01
This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information available at the transmitter is exploited as side information at the receiver. Complex processes in video encoding, such as motion-vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation for a low-density parity-check (LDPC) coding method in the AWGN channel.
Diagnosing Disaster Resilience of Communities as Multi-scale Complex Socio-ecological Systems
Liu, Wei; Mochizuki, Junko; Keating, Adriana; Mechler, Reinhard; Williges, Keith; Hochrainer, Stefan
2014-05-01
Global environmental change, growing anthropogenic influence, and increasing globalisation of society have made it clear that disaster vulnerability and resilience of communities cannot be understood without knowledge of the broader social-ecological system in which they are embedded. We propose a framework for diagnosing community resilience to disasters, as a form of disturbance to social-ecological systems, with feedbacks from the local to the global scale. Inspired by the iterative multi-scale analysis employed by the Resilience Alliance, the related socio-ecological systems framework of Ostrom, and the sustainable livelihood framework, we developed a multi-tier framework for thinking of communities as multi-scale social-ecological systems and analyzing communities' disaster resilience as well as their general resilience. We highlight the cross-scale influences and feedbacks on communities that exist from lower (e.g., household) to higher (e.g., regional, national) scales. The conceptual framework is then applied to a real-world resilience assessment situation, to illustrate how key components of socio-ecological systems, including natural hazards, the natural and man-made environment, and community capacities can be delineated and analyzed.
Regularization methods for ill-posed problems in multiple Hilbert scales
International Nuclear Information System (INIS)
Mazzieri, Gisela L; Spies, Ruben D
2012-01-01
Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)
Directory of Open Access Journals (Sweden)
Cohen Eyal
2012-10-01
Full Text Available Abstract Background Primary care medical homes may improve health outcomes for children with special healthcare needs (CSHCN), by improving care coordination. However, community-based primary care practices may be challenged to deliver comprehensive care coordination to complex subsets of CSHCN such as children with medical complexity (CMC). Linking a tertiary care center with the community may achieve cost effective and high quality care for CMC. The objective of this study was to evaluate the outcomes of community-based complex care clinics integrated with a tertiary care center. Methods A before- and after-intervention study design with mixed (quantitative/qualitative) methods was utilized. Clinics at two community hospitals distant from tertiary care were staffed by local community pediatricians with the tertiary care center nurse practitioner and linked with primary care providers. Eighty-one children with underlying chronic conditions, fragility, requirement for high intensity care and/or technology assistance, and involvement of multiple providers participated. Main outcome measures included health care utilization and expenditures, parent reports of parent- and child-quality of life [QOL (SF-36®, CPCHILD©, PedsQL™)], and family-centered care (MPOC-20®). Comparisons were made in equal (up to 1 year) pre- and post-periods supplemented by qualitative perspectives of families and pediatricians. Results Total health care system costs decreased from median (IQR) $244 (981) per patient per month (PPPM) pre-enrolment to $131 (355) PPPM post-enrolment (p=.007), driven primarily by fewer inpatient days in the tertiary care center (p=.006). Parents reported decreased out of pocket expenses (p© domains [Health Standardization Section (p=.04); Comfort and Emotions (p=.03)], while total CPCHILD© score decreased between baseline and 1 year (p=.003). Parents and providers reported the ability to receive care close to home as a key benefit. Conclusions Complex
A New Feature Extraction Method Based on EEMD and Multi-Scale Fuzzy Entropy for Motor Bearing
Directory of Open Access Journals (Sweden)
Huimin Zhao
2016-12-01
Full Text Available Feature extraction is one of the most important, pivotal, and difficult problems in mechanical fault diagnosis, directly affecting the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called the EDOMFE method, which integrates ensemble empirical mode decomposition (EEMD), mode selection, and multi-scale fuzzy entropy, is proposed in this paper for accurate fault diagnosis. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs) with different physical significance. Correlation coefficient analysis is used to select the three IMFs that are closest to the original signal. Multi-scale fuzzy entropy, which can effectively distinguish the complexity of different signals, is used to calculate the entropy values of the three selected IMFs and form a complexity-measure feature vector, which is taken as the input of a support vector machine (SVM) model for training and constructing an SVM classifier (EOMSMFD, based on EDOMFE and SVM) for fault pattern recognition. Finally, the effectiveness of the proposed method is validated with real bearing vibration signals of a motor under different loads and fault severities. The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and that the proposed EOMSMFD method can accurately diagnose the fault types and severities for inner-race, outer-race, and rolling-element faults of the motor bearing. The proposed method therefore provides a new fault diagnosis technology for rotating machinery.
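The multi-scale fuzzy entropy stage can be sketched with the standard definitions: exponential membership of Chebyshev distances between baseline-removed template vectors, applied to coarse-grained versions of the signal. The EEMD decomposition and SVM stages are omitted, and all parameter values here are conventional defaults rather than the paper's settings.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=None):
    """Fuzzy entropy of a 1-D signal (standard definition sketch)."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * np.std(x)          # conventional tolerance choice
    def phi(k):
        n = len(x) - k
        # template vectors of length k with their local mean removed
        templ = np.array([x[i:i + k] - x[i:i + k].mean() for i in range(n)])
        # pairwise Chebyshev distances, mapped through a fuzzy membership
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        mu = np.exp(-(d ** 2) / r)
        off = ~np.eye(n, dtype=bool)  # exclude self-matches
        return mu[off].mean()
    return np.log(phi(m) / phi(m + 1))

def multiscale_fuzzy_entropy(x, scales=(1, 2, 3), m=2, r=None):
    """Coarse-grain the signal at each scale, then take fuzzy entropy."""
    x = np.asarray(x, float)
    out = []
    for s in scales:
        cg = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        out.append(fuzzy_entropy(cg, m, r))
    return np.array(out)
```

Regular (e.g. periodic) signals score near zero while irregular ones score high, which is what lets the entropy vector separate the complexity of different IMFs.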
Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder
Directory of Open Access Journals (Sweden)
He Yan
2017-01-01
Full Text Available In engineering design, the basic complex method lacks sufficient global search ability for nonlinear optimization problems, so a version mixed with particle swarm optimization (PSO) is presented in this paper: the optimal particle, evaluated from the fitness function of the particle swarm, displaces a complex vertex so as to realize the optimality principle of the largest distance from the complex centroid. The method is applied to the constrained optimization design of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-sectional area of the box girder is taken as the objective function and its four size parameters as design variables, with girder mechanical performance, manufacturing process, boundary sizes and other requirements as constraint conditions. Then the complex method mixed with PSO is used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and the optimal results achieve the goals of lightweight design and reduced crane manufacturing cost. Practical engineering calculation and comparative analysis with the basic complex method show that the method is reliable, practical and efficient.
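The basic (Box) complex method that the paper hybridizes can be sketched on its own: reflect the worst vertex through the centroid of the remaining vertices, contracting toward the centroid when the reflection does not improve. The PSO coupling is omitted, and the reflection coefficient, point count, and iteration budget below are conventional illustrative choices.

```python
import numpy as np

def complex_method(f, bounds, n_pts=None, alpha=1.3, iters=200, seed=0):
    """Box's basic complex method for bound-constrained minimization
    (sketch of the algorithm the paper extends with PSO)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    n = len(lo)
    n_pts = n_pts or 2 * n                      # conventional complex size
    pts = rng.uniform(lo, hi, size=(n_pts, n))  # random feasible start
    for _ in range(iters):
        vals = np.array([f(p) for p in pts])
        w = np.argmax(vals)                     # worst vertex
        centroid = (pts.sum(axis=0) - pts[w]) / (n_pts - 1)
        # over-reflect the worst vertex through the centroid, stay in bounds
        trial = np.clip(centroid + alpha * (centroid - pts[w]), lo, hi)
        while f(trial) >= vals[w]:
            trial = (trial + centroid) / 2      # contract toward centroid
            if np.allclose(trial, centroid):
                break
        pts[w] = trial
    return pts[np.argmin([f(p) for p in pts])]
```

The paper's hybrid replaces the purely geometric reflection with the PSO-evaluated optimal particle, which is what restores global search ability on multimodal objectives.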
Rapid high temperature field test method for evaluation of geothermal calcite scale inhibitors
Energy Technology Data Exchange (ETDEWEB)
Asperger, R.G.
1982-08-01
A test method is described which allows the rapid field testing of calcite scale inhibitors in high-temperature geothermal brines. Five commercial formulations, chosen on the basis of laboratory screening tests, were tested in brines with low total dissolved solids at ca. 500°F. Four were found to be effective; of these, two were found to be capable of removing recently deposited scale. One chemical was tested in the full-flow brine line for 6 weeks. It was shown to stop a severe surface scaling problem at the well's control valve, thus proving the viability of the rapid test method. (12 refs.)
Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography
DEFF Research Database (Denmark)
Müller, P.; Hiller, Jochen; Dai, Y.
2015-01-01
X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have negative impact on the accuracy...... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...
Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions
International Nuclear Information System (INIS)
Gunnink, R.
1983-06-01
Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples
Determining Complex Structures using Docking Method with Single Particle Scattering Data
Directory of Open Access Journals (Sweden)
Haiguang Liu
2017-04-01
Full Text Available Protein complexes are critical for many molecular functions. Due to the intrinsic flexibility and dynamics of complexes, their structures are more difficult to determine using conventional experimental methods than those of individual subunits. One of the major challenges is the crystallization of protein complexes. Using X-ray free electron lasers (XFELs), it is possible to collect scattering signals from non-crystalline protein complexes, but data interpretation is more difficult because of unknown orientations. Here, we propose a hybrid approach that determines protein complex structures by combining XFEL single-particle scattering data with computational docking methods. Using simulated data, we demonstrate that a small set of single-particle scattering data collected at random orientations can be used to distinguish the native complex structure from decoys generated by docking algorithms. The results also indicate that a small set of single-particle scattering data is superior to a spherically averaged intensity profile in distinguishing complex structures. Given that XFEL experimental data are difficult to acquire and scarce, this hybrid approach should find wide application in data interpretation.
Explaining Student Behavior at Scale : The Influence of Video Complexity on Student Dwelling Time
Sluis, van der F.; Ginn, J.H.; Zee, van der T.; Haywood, J.; Aleven, V.; Kay, J.; Roll, I.
2016-01-01
Understanding why and how students interact with educational videos is essential to further improve the quality of MOOCs. In this paper, we look at the complexity of videos to explain two related aspects of student behavior: the dwelling time (how much time students spend watching a video) and the
Analogize This! The Politics of Scale and the Problem of Substance in Complexity-Based Composition
Roderick, Noah R.
2012-01-01
In light of recent enthusiasm in composition studies (and in the social sciences more broadly) for complexity theory and ecology, this article revisits the debate over how much composition studies can or should align itself with the natural sciences. For many in the discipline, the science debate--which was ignited in the 1970s, both by the…
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
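The "obvious approach" the thesis analyses — solving an equivalent real system for the real and imaginary parts of x — is easy to write down. The sketch below uses a dense direct solve purely for illustration; the thesis's point concerns applying Krylov methods to this real block system, whose spectral properties differ from those of the original complex matrix.

```python
import numpy as np

def solve_via_real_form(a, b):
    """Solve the complex system A x = b through its real 2n x 2n
    equivalent: with A = A_r + i A_i and x = x_r + i x_i,
        [[A_r, -A_i], [A_i, A_r]] [x_r; x_i] = [b_r; b_i]."""
    ar, ai = a.real, a.imag
    big = np.block([[ar, -ai], [ai, ar]])
    rhs = np.concatenate([b.real, b.imag])
    sol = np.linalg.solve(big, rhs)
    n = len(b)
    return sol[:n] + 1j * sol[n:]
```

Doubling the dimension this way discards the complex symmetry that methods like COCG exploit, which is one reason structure-aware complex Krylov methods are usually preferred.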
Modern methods of surveyor observations in opencast mining under complex hydrogeological conditions.
Usoltseva, L. A.; Lushpei, V. P.; Mursin, V. A.
2017-10-01
The article considers how modern surveyor-observation methods can be applied to improve the industrial safety of open-pit mining under complex hydrogeological conditions in the Primorsky Territory, as well as their use in the educational process. Industrial safety in surface mining depends largely on the assessment methods applied and on the stability analysis of pit walls and dump slopes under complex mining and hydrogeological conditions.
Learning Ecosystem Complexity: A Study on Small-Scale Fishers' Ecological Knowledge Generation
Garavito-Bermúdez, Diana
2018-01-01
Small-scale fisheries are learning contexts of importance for generating, transferring and updating ecological knowledge of natural environments through everyday work practices. The rich knowledge fishers have of local ecosystems is the result of the intimate relationship fishing communities have had with their natural environments across…
A method of orbital analysis for large-scale first-principles simulations
International Nuclear Information System (INIS)
Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke
2014-01-01
An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4)
Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.
Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin
2018-03-02
Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the schemes. The effectiveness and practicability of the decision-making method are further verified by an example evaluating the sustainable development ability of an industry chain in a circular economy. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and with the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weight based on grey correlation.
Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods
Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.
2011-01-01
Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
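The water-balance residual approach in (i) amounts to one line once the assumptions are stated; a minimal sketch, with units and function names of our own choosing:

```python
import numpy as np

def basin_et_mm(precip_mm, discharge_mm):
    """Basin-scale ET as the water-balance residual ET = P - Q, valid on
    annual or longer time scales where the net storage change is assumed
    negligible (the assumption stated in the review).  Inputs are
    basin-averaged yearly totals in mm."""
    return np.asarray(precip_mm, float) - np.asarray(discharge_mm, float)

def mean_annual_et_mm(precip_mm, discharge_mm):
    """Average the yearly residuals over a multi-year record."""
    return basin_et_mm(precip_mm, discharge_mm).mean()
```

As the review notes, ET computed this way is typically used as the validation target against which remote-sensing and soil-moisture-balance ET models are judged, since it needs only good precipitation and discharge records.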
A spatial method to calculate small-scale fisheries effort in data poor scenarios.
Johnson, Andrew Frederick; Moreno-Báez, Marcia; Giron-Nava, Alfredo; Corominas, Julia; Erisman, Brad; Ezcurra, Exequiel; Aburto-Oropeza, Octavio
2017-01-01
To gauge the collateral impacts of fishing we must know where fishing boats operate and how much they fish. Although small-scale fisheries land approximately the same amount of fish for human consumption as industrial fleets globally, methods of estimating their fishing effort are comparatively poor. We present an accessible, spatial method of calculating the effort of small-scale fisheries based on two simple measures that are available, or at least easily estimated, in even the most data-poor fisheries: the number of boats and the local coastal human population. We illustrate the method using a small-scale fisheries case study from the Gulf of California, Mexico, and show that our measure of Predicted Fishing Effort (PFE), measured as the number of boats operating in a given area per day adjusted by the number of people in local coastal populations, can accurately predict fisheries landings in the Gulf. Comparing our values of PFE to commercial fishery landings throughout the Gulf also indicates that the current number of small-scale fishing boats in the Gulf is approximately double what is required to land theoretical maximum fish biomass. Our method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This new method provides an important first step towards estimating the fishing effort of small-scale fleets globally.
Development of polygon elements based on the scaled boundary finite element method
International Nuclear Information System (INIS)
Chiong, Irene; Song Chongmin
2010-01-01
We aim to extend the scaled boundary finite element method to construct conforming polygon elements. The development of the polygonal finite element is highly anticipated in computational mechanics, as greater flexibility and accuracy can be achieved using these elements. The scaled boundary polygonal finite element will enable new developments in mesh generation, better accuracy from a higher-order approximation and better transition elements in finite element meshes. Polygon elements with an arbitrary number of edges and of arbitrary order have been developed successfully. The edges of an element are discretised with line elements. The displacement solution of the scaled boundary finite element method is used in the development of shape functions. They are shown to be smooth and continuous within the element, and to satisfy compatibility and completeness requirements. Furthermore, eigenvalue decomposition has been used to depict element modes, and the outcomes indicate the ability of the scaled boundary polygonal element to express rigid body and constant strain modes. Numerical tests are presented; the patch test is passed and constant strain modes are verified. The accuracy and convergence of the method are also presented, and the performance of the scaled boundary polygonal finite element is verified on Cook's swept panel problem. Results show that the scaled boundary polygonal finite element method outperforms a traditional mesh, with accuracy and convergence achieved from fewer nodes. The proposed method is also shown to be truly flexible, applying to arbitrary n-gons formed of irregular and non-convex polygons.
An improved method to characterise the modulation of small-scale turbulence by large-scale structures
Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta
2015-11-01
A key aspect of turbulent boundary layer dynamics is ``modulation,'' which refers to the degree to which the intensity of coherent large-scale structures (LS) amplifies or attenuates the intensity of the small-scale structures (SS) through inter-scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define the latter by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by use of a low-pass filtering step leads to a significant loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterise the modulation of SS turbulence by LS structures.
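The envelope construction discussed above (modulus of the analytic signal of the SS, before any low-pass filtering) can be sketched with an FFT-based Hilbert transform; the amplitude-modulated test signal below is illustrative, not DNS data:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the standard FFT construction of the Hilbert
    transform: zero the negative frequencies, double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# small-scale carrier whose amplitude is modulated by a slow large-scale signal
t = np.linspace(0.0, 1.0, 2048, endpoint=False)
slow = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)   # large-scale modulation
ss = slow * np.sin(2 * np.pi * 200 * t)        # modulated small-scale signal
envelope = np.abs(analytic_signal(ss))         # modulus tracks `slow`
```

For this clean periodic AM signal the modulus recovers the large-scale modulation exactly; the low-pass filtering step whose validity the abstract questions would then be applied to `envelope`.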
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where stress at each material point is calculated with an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation.
Fractional calculus ties the microscopic and macroscopic scales of complex network dynamics
International Nuclear Information System (INIS)
West, B J; Turalska, M; Grigolini, Paolo
2015-01-01
A two-state, master equation-based decision-making model has been shown to generate phase transitions, to be topologically complex, and to manifest temporal complexity through an inverse power-law probability distribution function in the switching times between the two critical states of consensus. These properties are entailed by the fundamental assumption that the network elements in the decision-making model imperfectly imitate one another. The process of subordination establishes that a single network element can be described by a fractional master equation whose analytic solution yields the observed inverse power-law probability distribution obtained by numerical integration of the two-state master equation to a high degree of accuracy. (paper)
Simultaneous analysis of qualitative parameters of solid fuel using complex neutron gamma method
International Nuclear Information System (INIS)
Dombrovskij, V.P.; Ajtsev, N.I.; Ryashchikov, V.I.; Frolov, V.K.
1983-01-01
A study was made of a complex neutron gamma method for the simultaneous analysis of the carbon content, ash content and humidity of solid fuel, based on the gamma radiation from inelastic fast-neutron scattering and radiative capture of thermal neutrons. The metrological characteristics of pulsed and stationary neutron gamma methods for determining qualitative solid fuel parameters were analyzed, taking coke breeze as an example. Optimal energy ranges for gamma radiation detection (2-8 MeV) were determined. The advantages of using a pulsed neutron generator for the complex analysis of qualitative parameters of solid fuel in large masses were shown.
Energy Technology Data Exchange (ETDEWEB)
Asperger, R.G.
1986-09-01
A new test method is described that allows the rapid field testing of calcium carbonate scale inhibitors at 500°F (260°C). The method evolved from use of a full-flow test loop on a well with a mass flow rate of about 1 x 10^6 lbm/hr (126 kg/s). It is a simple, effective way to evaluate the effectiveness of inhibitors under field conditions. Five commercial formulations were chosen for field evaluation on the basis of nonflowing, laboratory screening tests at 500°F (260°C). Four of these formulations from different suppliers controlled calcium carbonate scale deposition as measured by the test method. Two of these could dislodge recently deposited scale that had not age-hardened. Performance-profile diagrams, which were measured for these four effective inhibitors, show the concentration interrelationship between brine calcium and inhibitor concentrations at which the formulations will and will not stop scale formation in the test apparatus. With these diagrams, one formulation was chosen for testing on the full-flow brine line. The composition was tested for 6 weeks and showed a dramatic decrease in the scaling occurring at the flow-control valve. This scaling was about to force a shutdown of a major, long-term flow test being done for reservoir economic evaluations. The inhibitor stopped the scaling, and the test was performed without interruption.
Comparison of complex effluent treatability in different bench scale microbial electrolysis cells
Ullery, Mark L.
2014-10-01
A range of wastewaters and substrates were examined using mini microbial electrolysis cells (mini MECs) to see if they could be used to predict the performance of larger-scale cube MECs. COD removals and coulombic efficiencies corresponded well between the two reactor designs for individual samples, with 66-92% of COD removed for all samples. Current generation was consistent between the reactor types for acetate (AC) and fermentation effluent (FE) samples, but less consistent with industrial (IW) and domestic wastewaters (DW). Hydrogen was recovered from all samples in cube MECs, but gas composition and volume varied significantly between samples. Evidence for direct conversion of substrate to methane was observed with two of the industrial wastewater samples (IW-1 and IW-3). Overall, mini MECs provided organic treatment data that corresponded well with larger scale reactor results, and therefore it was concluded that they can be a useful platform for screening wastewater sources. © 2014 Elsevier Ltd.
Large scale hydrogeological modelling of a low-lying complex coastal aquifer system
DEFF Research Database (Denmark)
Meyer, Rena
2018-01-01
… intrusion. In this thesis a new methodological approach was developed to combine 3D numerical groundwater modelling with a detailed geological description and hydrological, geochemical and geophysical data. It was applied to a regional scale saltwater intrusion in order to analyse and quantify … the groundwater flow dynamics, identify the driving mechanisms that formed the saltwater intrusion to its present extent and to predict its progression in the future. The study area is located in the transboundary region between Southern Denmark and Northern Germany, adjacent to the Wadden Sea. Here, a large-scale … parametrization schemes that accommodate hydrogeological heterogeneities. Subsequently, density-dependent flow and transport modelling of multiple salt sources was successfully applied to simulate the formation of the saltwater intrusion during the last 4200 years, accounting for historic changes in the hydraulic …
Hanson, Curt; Schaefer, Jacob; Burken, John J.; Larson, David; Johnson, Marcus
2014-01-01
Flight research has shown the effectiveness of adaptive flight controls for improving aircraft safety and performance in the presence of uncertainties. The National Aeronautics and Space Administration's (NASA) Integrated Resilient Aircraft Control (IRAC) project designed and conducted a series of flight experiments to study the impact of variations in adaptive controller design complexity on performance and handling qualities. A novel complexity metric was devised to compare the degrees of simplicity achieved in three variations of a model reference adaptive controller (MRAC) for NASA's F-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Full-Scale Advanced Systems Testbed (Gen-2A) aircraft. The complexity measures of these controllers are also compared to that of an earlier MRAC design for NASA's Intelligent Flight Control System (IFCS) project, flown on a highly modified F-15 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois). Pilot comments during the IRAC research flights pointed to the importance of workload on handling qualities ratings for failure and damage scenarios. Modifications to existing pilot aggressiveness and duty cycle metrics are presented and applied to the IRAC controllers. Finally, while adaptive controllers may alleviate the effects of failures or damage on an aircraft's handling qualities, they also have the potential to introduce annoying changes to the flight dynamics or to the operation of aircraft systems. A nuisance rating scale is presented for the categorization of nuisance side-effects of adaptive controllers.
Task-Management Method Using R-Tree Spatial Cloaking for Large-Scale Crowdsourcing
Directory of Open Access Journals (Sweden)
Yan Li
2017-12-01
Full Text Available
With the development of sensor technology and the popularization of the data-driven service paradigm, spatial crowdsourcing systems have become an important way of collecting map-based location data. However, large-scale task management and location privacy are important concerns for participants in spatial crowdsourcing. In this paper, we propose an R-tree spatial cloaking-based task-assignment method for large-scale spatial crowdsourcing. We use an estimated R-tree built from the requested crowdsourcing tasks to reduce the server-side insertion cost and enable scalability. By using Minimum Bounding Rectangle (MBR)-based anonymous spatial data without exact position data, this method preserves the location privacy of participants in a simple way. In our experiments, we show that the proposed method is faster than the existing method and remains efficient as the scale increases.
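The MBR cloaking idea is straightforward to illustrate: a participant reports only the bounding rectangle of a group of locations, never an exact point. A minimal sketch (coordinates hypothetical; the paper's R-tree estimation is not reproduced):

```python
def mbr_cloak(points):
    """Replace a set of exact task locations with their Minimum Bounding
    Rectangle (MBR), so the server never sees individual positions.

    Returns ((min_x, min_y), (max_x, max_y))."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# three hypothetical task locations cloaked into one rectangle
cloaked = mbr_cloak([(3.0, 1.0), (4.5, 2.0), (3.5, 0.5)])
# -> ((3.0, 0.5), (4.5, 2.0))
```

The server then assigns tasks against rectangles; any point inside the MBR is indistinguishable from the participant's true position.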
IMPACT OF MATRIX INVERSION ON THE COMPLEXITY OF THE FINITE ELEMENT METHOD
Directory of Open Access Journals (Sweden)
M. Sybis
2016-04-01
Purpose. The development of a wide construction market and a desire to design innovative architectural building constructions have resulted in the need to create complex numerical models of objects with increasingly high computational complexity. The purpose of this work is to show that choosing a proper method for solving the set of equations can improve the calculation time (reduce the complexity) by several orders of magnitude. Methodology. The article presents an analysis of the impact of the matrix inversion algorithm on the deflection calculation of a beam using the finite element method (FEM). Based on a literature analysis, common methods for solving sets of equations were identified. From these, the Gaussian elimination, LU and Cholesky decomposition methods were implemented to determine the effect of the algorithm used for solving the equation set on the number of computational operations performed. In addition, each of the implemented methods was further optimized, thereby reducing the number of necessary arithmetic operations. Findings. These optimizations exploit certain properties of the matrix, such as symmetry or a significant number of zero elements. The results of the analysis are presented for divisions of the beam into 5, 50, 100 and 200 nodes, for which the deflection was calculated. Originality. The main achievement of this work is showing the impact of the chosen methodology on the complexity of solving the problem (or, equivalently, the time needed to obtain results). Practical value. The difference between the best (least complex) and the worst (most complex) methods spans several orders of magnitude. This result shows that choosing the wrong methodology may significantly increase the time needed to perform the calculation.
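The kind of gain the article quantifies can be illustrated directly: FEM stiffness matrices are symmetric positive-definite, so a Cholesky factorisation (roughly n³/3 flops) does about half the arithmetic of a general LU decomposition (roughly 2n³/3), before any exploitation of sparsity. A sketch assuming NumPy, with a hypothetical tridiagonal beam-like matrix:

```python
import numpy as np

def solve_spd_cholesky(K, f):
    """Solve K u = f for symmetric positive-definite K via Cholesky.
    K = L L^T, then two triangular solves (done here with the general
    solver for brevity)."""
    L = np.linalg.cholesky(K)
    y = np.linalg.solve(L, f)       # forward solve  L y = f
    return np.linalg.solve(L.T, y)  # backward solve L^T u = y

# tridiagonal "beam-like" stiffness matrix: symmetric, mostly zeros
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
u = solve_spd_cholesky(K, f)
```

Exploiting the banded structure on top of symmetry, as the article's optimizations do, reduces the cost further, from O(n³) to O(n) for a tridiagonal system.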
An image overall complexity evaluation method based on LSD line detection
Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo
2017-04-01
In the artificial world, the city's traffic roads and engineered buildings alike contain many linear features. Research on the image complexity of linear information has therefore become an important direction in digital image processing. This paper detects the straight-line information in the image and uses the straight lines as parameter indices to establish a quantitative and accurate mathematical relationship. We use the LSD line detection algorithm, which has a good straight-line detection effect, to detect the lines, and divide the detected lines according to an expert consultation strategy. A neural network is then used for weight training to obtain the weight coefficients of the indices. The image complexity is calculated by the complexity calculation model. Experimental results show that the proposed method is effective. The number of straight lines in the image, their degree of dispersion, uniformity, and so on all affect the complexity of the image.
Computational study of formamide-water complexes using the SAPT and AIM methods
International Nuclear Information System (INIS)
Parreira, Renato L.T.; Valdes, Haydee; Galembeck, Sergio E.
2006-01-01
In this work, the complexes formed between formamide and water were studied by means of the SAPT and AIM methods. Complexation leads to significant alterations in the geometries and electronic structure of formamide. Intermolecular interactions in the complexes are intense, especially in the cases where the solvent interacts with the carbonyl and amide groups simultaneously. In the transition states, the interaction between the water molecule and the lone pair on the amide nitrogen is also important. In all the complexes studied herein, the electrostatic interactions between formamide and water are the main attractive force, and their contribution may be five times as large as the corresponding contribution from dispersion, and twice as large as the contribution from induction. However, an increase in the resonance of planar formamide with the successive addition of water molecules may suggest that the hydrogen bonds taking place between formamide and water have some covalent character
Directory of Open Access Journals (Sweden)
Guimarães Katia S
2006-04-01
Background: Most cellular processes are carried out by multi-protein complexes, groups of proteins that bind together to perform a specific task. Some proteins form stable complexes, while other proteins form transient associations and are part of several complexes at different stages of a cellular process. A better understanding of this higher-order organization of proteins into overlapping complexes is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Results: We propose a new method for identifying and representing overlapping protein complexes (or larger units called functional groups) within a protein interaction network. We develop a graph-theoretical framework that enables automatic construction of such a representation. We illustrate the effectiveness of our method by applying it to the TNFα/NF-κB and pheromone signaling pathways. Conclusion: The proposed representation helps in understanding the transitions between functional groups and allows for tracking a protein's path through a cascade of functional groups. Therefore, depending on the nature of the network, our representation is capable of elucidating temporal relations between functional groups. Our results show that the proposed method opens a new avenue for the analysis of protein interaction networks.
Developing an Assessment Method of Active Aging: University of Jyvaskyla Active Aging Scale.
Rantanen, Taina; Portegijs, Erja; Kokko, Katja; Rantakokko, Merja; Törmäkangas, Timo; Saajanaho, Milla
2018-01-01
To develop an assessment method of active aging for research on older people. A multiphase process that included drafting by an expert panel, a pilot study for item analysis and scale validity, a feedback study with focus groups and questionnaire respondents, and a test-retest study. Altogether 235 people aged 60 to 94 years provided responses and/or feedback. We developed a 17-item University of Jyvaskyla Active Aging Scale with four aspects in each item (goals, ability, opportunity, and activity; range 0-272). The psychometric and item properties are good and the scale assesses a unidimensional latent construct of active aging. Our scale assesses older people's striving for well-being through activities pertaining to their goals, abilities, and opportunities. The University of Jyvaskyla Active Aging Scale provides a quantifiable measure of active aging that may be used in postal questionnaires or interviews in research and practice.
Protein complex detection in PPI networks based on data integration and supervised learning method.
Yu, Feng; Yang, Zhi; Hu, Xiao; Sun, Yuan; Lin, Hong; Wang, Jian
2015-01-01
Revealing protein complexes is important for understanding the principles of cellular organization and function. High-throughput experimental techniques have produced a large amount of protein interactions, which makes it possible to predict protein complexes from protein-protein interaction (PPI) networks. However, the small amount of known physical interactions may limit protein complex detection. Here, new PPI networks are constructed by integrating PPI datasets with the large and readily available PPI data from the biomedical literature, and the less reliable PPIs between two proteins are then filtered out based on the semantic similarity and topological similarity of the two proteins. Finally, supervised learning protein complex detection (SLPC), which can make full use of the information in available known complexes, is applied to detect protein complexes on the new PPI networks. The experimental results of SLPC on two different categories of yeast PPI networks demonstrate the effectiveness of the approach: compared with the original PPI networks, best average improvements of 4.76, 6.81 and 15.75 percentage units in the F-score, accuracy and maximum matching ratio (MMR) are achieved, respectively; compared with the denoised PPI networks, best average improvements of 3.91, 4.61 and 12.10 percentage units in the F-score, accuracy and MMR are achieved, respectively; and compared with ClusterONE, the state-of-the-art complex detection method, on the denoised extended PPI networks, average improvements of 26.02 and 22.40 percentage units in the F-score and MMR are achieved, respectively. The experimental results show that the performance of SLPC improves substantially when new PPI data from the biomedical literature are integrated into the original and denoised PPI networks. In addition, our protein complex detection method achieves better performance than ClusterONE.
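The topological-similarity filtering step can be sketched with a Jaccard index over neighbour sets (endpoints included); the toy network and threshold below are hypothetical, and the semantic-similarity part of the filter is omitted:

```python
def jaccard_topological_similarity(network, a, b):
    """Topological similarity of two proteins: Jaccard index of their
    neighbour sets (including the proteins themselves) in a PPI network
    given as a dict mapping node -> set of neighbours."""
    na, nb = network[a] | {a}, network[b] | {b}
    return len(na & nb) / len(na | nb)

def filter_unreliable_edges(network, threshold=0.2):
    """Keep only interactions whose endpoints share enough neighbours,
    a simple proxy for the reliability filtering described above."""
    return [(a, b) for a in network for b in network[a]
            if a < b and jaccard_topological_similarity(network, a, b) >= threshold]

# toy PPI network: A-B-C-D form a dense cluster, E hangs off D
ppi = {"A": {"B", "C", "D"}, "B": {"A", "C", "D"},
       "C": {"A", "B"}, "D": {"A", "B", "E"}, "E": {"D"}}
reliable = filter_unreliable_edges(ppi, threshold=0.55)
```

Edges inside the dense cluster survive, while the weakly supported D-E edge is discarded.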
Method of producing exfoliated graphite, flexible graphite, and nano-scaled graphene platelets
Zhamu, Aruna; Shi, Jinjun; Guo, Jiusheng; Jang, Bor Z.
2010-11-02
The present invention provides a method of exfoliating a layered material (e.g., graphite and graphite oxide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of graphite, graphite oxide, or a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites. Nano-scaled graphene platelets are much lower-cost alternatives to carbon nano-tubes or carbon nano-fibers.
International Nuclear Information System (INIS)
Vinsova, H.; Koudelkova, M.; Ernestova, M.; Jedinakova-Krizova, V.
2003-01-01
Many holmium and yttrium complex compounds of both organic and inorganic origin have been studied recently from the point of view of their radiopharmaceutical behavior. Complexes with Ho-166 and Y-90 can either be directly used as pharmaceutical preparations or be applied in conjugate form with a selected monoclonal antibody. In the latter case, appropriate bifunctional chelation agents are necessary for the indirect binding of the monoclonal antibody and the selected radionuclide. Our present study has focused on the characterization of the radionuclide (metal)-ligand interaction using various analytical methods. Electromigration methods (capillary electrophoresis, capillary isotachophoresis), potentiometric titration and spectrophotometry have been tested from the point of view of their potential to determine conditional stability constants of holmium and yttrium complexes. The principle of the isotachophoretic determination of stability constants is the linear relation between the logarithm of the stability constant and the reduction of the complex zone. For the calculation of thermodynamic constants using potentiometry, it was necessary first to determine the protonation constants of the acid. These were calculated using the computer program LETAGROP Etitr from data obtained by potentiometric acid-base titration. Consequently, the titration curves of holmium and yttrium with the studied ligands and the protonation constants of the corresponding acid were applied to the calculation of metal-ligand stability constants. Spectrophotometric determination of stability constants of selected systems was based on the titration of holmium and yttrium nitrate solutions by Arsenazo III, followed by the titration of the metal-Arsenazo III complex by the selected ligand. The data obtained have been evaluated using the computation program OPIUM. Results obtained by all analytical methods tested in this study have been compared. It was found that the direct potentiometric titration technique could not be
Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.
2013-05-01
In the context of biofilm growth in porous media, we developed high performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms developing in polymeric extracellular substances, generally located at fluid-solid interfaces such as pore interfaces in a water-saturated porous medium. Several applications of biofilms in porous media are encountered, for instance in bio-remediation methods that allow the dissolution of organic pollutants. Many theoretical studies have been done on the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described by simplified theoretical media (stratified media, cubic networks of spheres, ...). Recent experimental advances have provided tomography images of bio-colonized porous media, which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we solve the velocity field of fluids through pores on complex geometries described with a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. Cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on the fluid transport properties of porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high performance computing on up to 1000 processors. The steady-state Stokes equations are solved using a finite volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling are reached, with results obtained in hours instead of weeks. Acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be
Directory of Open Access Journals (Sweden)
Ni An
2017-04-01
When modeling the soil/atmosphere interaction, it is of paramount importance to determine the net radiation flux. There are two common calculation methods for this purpose: Method 1 relies on the use of air temperature alone, while Method 2 relies on the use of both air and soil temperatures. To date, there has been no consensus on the application of these two methods. In this study, the half-hourly records of solar radiation at an experimental embankment are used to calculate the net radiation and long-wave radiation at different time-scales (half-hourly, hourly, and daily) using the two methods. The results show that, compared with Method 2, which has been widely adopted in agronomical, geotechnical and geo-environmental applications, Method 1 is more feasible owing to its simplicity and its accuracy at shorter time-scales. Moreover, at longer time-scales (daily, for instance), smaller variations of net radiation and long-wave radiation are obtained, suggesting that detailed soil temperature variations cannot be captured. In other words, shorter time-scales are preferred in determining the net radiation flux.
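The contrast between the two methods can be sketched with simple Stefan-Boltzmann forms; the emissivities and the exact parameterisations below are illustrative assumptions, not the formulations used in the study:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def longwave_method1(t_air_k, emissivity_atm=0.85, emissivity_surf=0.95):
    """Net long-wave flux from air temperature alone: the surface is
    assumed to radiate at (near) air temperature (illustrative form)."""
    down = emissivity_atm * SIGMA * t_air_k ** 4
    up = emissivity_surf * SIGMA * t_air_k ** 4
    return down - up

def longwave_method2(t_air_k, t_soil_k, emissivity_atm=0.85, emissivity_surf=0.95):
    """Net long-wave flux using both temperatures: upward emission is
    computed from the measured soil surface temperature."""
    down = emissivity_atm * SIGMA * t_air_k ** 4
    up = emissivity_surf * SIGMA * t_soil_k ** 4
    return down - up

# hypothetical afternoon: soil 8 K warmer than air -> Method 2 gives a larger loss
ln1 = longwave_method1(293.15)
ln2 = longwave_method2(293.15, 301.15)
```

The two estimates diverge exactly when soil and air temperatures differ, which is why the choice of method matters most at short (sub-daily) time-scales.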
Directory of Open Access Journals (Sweden)
Hongfen Gao
2014-01-01
This paper describes the application of the complex variable meshless manifold method (CVMMM) to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS) approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.
Directory of Open Access Journals (Sweden)
Xiang Ding
2014-01-01
Project delivery planning is a key stage used by the project owner (or project investor) for organizing design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model, mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on the PDM. Experimental results show that project owners usually choose the Design-Build method when the project is very complex, within a certain range. Besides, this paper points out that the Design-Build method will be the preferred choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.
Hybrid RANS/LES method for wind flow over complex terrain
DEFF Research Database (Denmark)
Bechmann, Andreas; Sørensen, Niels N.
2010-01-01
for flows at high Reynolds numbers. To reduce the computational cost of traditional LES, a hybrid method is proposed in which the near-wall eddies are modelled in a Reynolds-averaged sense. Close to walls, the flow is treated with the Reynolds-averaged Navier-Stokes (RANS) equations (unsteady RANS...... rough walls. Previous attempts at combining RANS and LES have resulted in unphysical transition regions between the two layers, but the present work improves this region by using a stochastic backscatter model. To demonstrate the ability of the proposed hybrid method, simulations are presented for wind...... the turbulent kinetic energy, whereas the new method captures the high turbulence levels well but underestimates the mean velocity. The presented results are for a relatively mild configuration of complex terrain, but the proposed method can also be used for highly complex terrain where the benefits of the new
Methods for assessment of climate variability and climate changes in different time-space scales
International Nuclear Information System (INIS)
Lobanov, V.; Lobanova, H.
2004-01-01
The main problem of hydrology and design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main aspects of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and an area; and how to use the detected climate change for the computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be a traditional one (annual runoff for assessment of water resources, maxima, minima runoff, etc.) as well as a new one characterizing the intra-annual function or intra-annual runoff distribution. For this aim a linear model has been developed which has two coefficients connected with the amplitude and level (initial conditions) of the seasonal function, and one parameter characterizing the intensity of synoptic and macro-synoptic fluctuations within a year. Effective statistical methods have been developed for the separation of climate variability and climate change and the extraction of homogeneous components of three time scales from observed long-term time series: intra-annual, decadal and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For the assessment of the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step is connected with a generalization of the results of detected climate changes over the area and spatial modeling. For determination of homogeneous regions with the same
Youssef, Noha H.; Couger, M. B.; Elshahed, Mostafa S.
2010-01-01
Background: The adaptation of pyrosequencing technologies for use in culture-independent diversity surveys allowed for deeper sampling of ecosystems of interest. One extremely well suited area of interest for pyrosequencing-based diversity surveys, which has received surprisingly little attention so far, is examining fine-scale (e.g. micrometer to millimeter) beta diversity in complex microbial ecosystems. Methodology/Principal Findings: We examined the patterns of fine-scale beta diversity in four adjacent sediment samples (1 mm apart) from the source of an anaerobic, sulfide- and sulfur-rich spring (Zodletone spring) in southwestern Oklahoma, USA. Using pyrosequencing, a total of 292,130 16S rRNA gene sequences were obtained. The beta diversity patterns within the four datasets were examined using various qualitative and quantitative similarity indices. Low levels of beta diversity (high similarity indices) were observed between the four samples at the phylum level. However, at the putative species (OTU0.03) level, higher levels of beta diversity (lower similarity indices) were observed. Further examination of beta diversity patterns within dominant and rare members of the community indicated that at the putative species level, beta diversity is much higher within rare members of the community. Finally, sub-classification of the rare members of the Zodletone spring community based on patterns of novelty and uniqueness, and further examination of the fine-scale beta diversity of each of these subgroups, indicated that members of the community that are unique but not novel showed the highest beta diversity within these subgroups of the rare biosphere. Conclusions/Significance: The results demonstrate the occurrence of high inter-sample diversity within seemingly identical samples from a complex habitat. We reason that such unexpected diversity should be taken into consideration when exploring gamma diversity of various ecosystems, as well as when planning for sequencing-intensive metagenomic
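The qualitative and quantitative similarity indices mentioned in the abstract can be illustrated with a minimal sketch (the count tables below are invented, not the spring data; Jaccard stands in for a qualitative index, Bray-Curtis similarity for a quantitative one):

```python
def jaccard(a, b):
    """Qualitative (presence/absence) similarity between two samples.

    a, b: dicts mapping taxon -> read count.
    """
    pa = {t for t, n in a.items() if n > 0}
    pb = {t for t, n in b.items() if n > 0}
    return len(pa & pb) / len(pa | pb)

def bray_curtis_similarity(a, b):
    """Quantitative similarity (1 - Bray-Curtis dissimilarity)."""
    taxa = set(a) | set(b)
    shared = sum(min(a.get(t, 0), b.get(t, 0)) for t in taxa)
    total = sum(a.values()) + sum(b.values())
    return 2.0 * shared / total

# Hypothetical counts for two adjacent samples at two taxonomic resolutions:
phylum_1 = {"Proteobacteria": 60, "Chloroflexi": 40}
phylum_2 = {"Proteobacteria": 55, "Chloroflexi": 45}
otu_1 = {"OTU_a": 60, "OTU_b": 40}
otu_2 = {"OTU_c": 55, "OTU_b": 45}

# Phylum-level similarity is high (low beta diversity) ...
print(jaccard(phylum_1, phylum_2))   # 1.0
# ... while OTU (putative species) level similarity is lower (higher beta diversity).
print(jaccard(otu_1, otu_2))         # ~0.33
```

This mirrors the abstract's finding: the same pair of samples can look nearly identical at the phylum level yet strongly divergent at the OTU level.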
Genome-scale transcriptional activation by an engineered CRISPR-Cas9 complex.
Konermann, Silvana; Brigham, Mark D; Trevino, Alexandro E; Joung, Julia; Abudayyeh, Omar O; Barcena, Clea; Hsu, Patrick D; Habib, Naomi; Gootenberg, Jonathan S; Nishimasu, Hiroshi; Nureki, Osamu; Zhang, Feng
2015-01-29
Systematic interrogation of gene function requires the ability to perturb gene expression in a robust and generalizable manner. Here we describe structure-guided engineering of a CRISPR-Cas9 complex to mediate efficient transcriptional activation at endogenous genomic loci. We used these engineered Cas9 activation complexes to investigate single-guide RNA (sgRNA) targeting rules for effective transcriptional activation, to demonstrate multiplexed activation of ten genes simultaneously, and to upregulate long intergenic non-coding RNA (lincRNA) transcripts. We also synthesized a library consisting of 70,290 guides targeting all human RefSeq coding isoforms to screen for genes that, upon activation, confer resistance to a BRAF inhibitor. The top hits included genes previously shown to be able to confer resistance, and novel candidates were validated using individual sgRNA and complementary DNA overexpression. A gene expression signature based on the top screening hits correlated with markers of BRAF inhibitor resistance in cell lines and patient-derived samples. These results collectively demonstrate the potential of Cas9-based activators as a powerful genetic perturbation technology.
Fractional Complex Transform and exp-Function Methods for Fractional Differential Equations
Directory of Open Access Journals (Sweden)
Ahmet Bekir
2013-01-01
The exp-function method is presented for finding exact solutions of nonlinear fractional equations. The fractional complex transform is used to convert fractional differential equations into ordinary differential equations, with the fractional derivatives described in Jumarie's modified Riemann-Liouville sense. We apply the exp-function method to both time- and space-fractional nonlinear differential equations. As a result, some new exact solutions are successfully established.
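The transform step can be sketched as follows (the specific equations solved in the paper are not reproduced here; the form of the transform and ansatz follows the standard fractional complex transform and exp-function literature):

```latex
% With derivatives in Jumarie's modified Riemann-Liouville sense, the wave variable
\[
\xi = kx + \frac{p\,t^{\alpha}}{\Gamma(1+\alpha)}, \qquad 0 < \alpha \le 1,
\]
% reduces a time-fractional PDE to an ODE in $\xi$, since under Jumarie's chain rule
\[
\frac{\partial^{\alpha} u}{\partial t^{\alpha}} = p\,\frac{\mathrm{d}u}{\mathrm{d}\xi}.
\]
% The exp-function ansatz is then sought in the form
\[
u(\xi) = \frac{\sum_{n=-c}^{d} a_{n}\, e^{n\xi}}{\sum_{m=-f}^{g} b_{m}\, e^{m\xi}},
\]
% and balancing the highest- and lowest-order exponential terms fixes $c, d, f, g$.
```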
NetMHCcons: a consensus method for the major histocompatibility complex class I predictions
DEFF Research Database (Denmark)
Karosiene, Edita; Lundegaard, Claus; Lund, Ole
2012-01-01
A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depe...... at www.cbs.dtu.dk/services/NetMHCcons, and allows the user in an automatic manner to obtain the most accurate predictions for any given MHC molecule....
A family of conjugate gradient methods for large-scale nonlinear equations
Directory of Open Access Journals (Sweden)
Dexiang Feng
2017-09-01
In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
A family of conjugate gradient methods for large-scale nonlinear equations.
Feng, Dexiang; Sun, Min; Wang, Xueyong
2017-01-01
In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
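The abstract does not give the family's parameters, but the general shape of such derivative-free conjugate gradient projection methods for monotone equations (a PRP-type direction with a restart safeguard, a backtracking line search, and a hyperplane projection step) can be sketched as follows; all step constants here are illustrative assumptions, not the paper's:

```python
import math

def solve_monotone(F, x0, tol=1e-8, rho=0.5, sigma=1e-4, tau=1e-3, max_iter=1000):
    """Derivative-free CG projection sketch for monotone equations F(x) = 0."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    axpy = lambda a, u, v: [a * ui + vi for ui, vi in zip(u, v)]
    x, Fx, d, F_prev = list(x0), F(x0), None, None
    for _ in range(max_iter):
        nF2 = dot(Fx, Fx)
        if math.sqrt(nF2) < tol:
            break
        if d is None:
            d = [-f for f in Fx]
        else:
            y = [a - b for a, b in zip(Fx, F_prev)]
            beta = dot(Fx, y) / dot(F_prev, F_prev)   # PRP-type CG parameter
            d = axpy(beta, d, [-f for f in Fx])
            if dot(Fx, d) > -tau * nF2:               # safeguard: restart direction
                d = [-f for f in Fx]
        t = 1.0                                       # backtracking line search
        while True:
            z = axpy(t, d, x)
            Fz = F(z)
            if math.sqrt(dot(Fz, Fz)) < tol:
                return z
            if -dot(Fz, d) >= sigma * t * dot(d, d):
                break
            t *= rho
        # hyperplane projection step (low storage: only vectors are kept)
        lam = dot(Fz, [a - b for a, b in zip(x, z)]) / dot(Fz, Fz)
        x = axpy(-lam, Fz, x)
        F_prev, Fx = Fx, F(x)
    return x

# Monotone test system: F(x)_i = exp(x_i) - 1, with root at the origin.
F = lambda x: [math.exp(v) - 1.0 for v in x]
root = solve_monotone(F, [1.0, -0.5, 2.0])
print(max(abs(v) for v in root))
```

Note that no Jacobian and no Lipschitz constant appear anywhere, which is the point of the convergence claim in the abstract.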
A general method for computing the total solar radiation force on complex spacecraft structures
Chan, F. K.
1981-01-01
The method circumvents many of the difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures to compute the total force arising from specular or diffuse reflection, or even from non-Lambertian reflection and re-radiation.
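Chan's own formulation is not reproduced in this record; the sketch below evaluates the surface integral the naive way, as a sum over flat facets using the common flat-plate optical model (absorptance plus specular and diffuse reflectances summing to one), which is the kind of facet-wise computation such a method organizes or replaces. Self-shadowing is ignored, and the constants are illustrative:

```python
import math

P0 = 4.56e-6  # solar radiation pressure at 1 AU, N/m^2 (illustrative constant)

def srp_force(facets, sun_dir, rho_s, rho_d):
    """Total solar radiation force on a set of flat facets.

    facets: list of (area, outward unit normal) tuples.
    sun_dir: unit vector from the surface toward the Sun.
    rho_s, rho_d: specular and diffuse reflectances (absorptance = 1 - rho_s - rho_d).
    """
    alpha = 1.0 - rho_s - rho_d
    F = [0.0, 0.0, 0.0]
    for area, n in facets:
        c = sum(ni * si for ni, si in zip(n, sun_dir))  # cos(incidence angle)
        if c <= 0.0:
            continue                                    # facet faces away from the Sun
        coef_s = P0 * area * c * (alpha + rho_d)                        # along -sun_dir
        coef_n = P0 * area * c * (2.0 * rho_s * c + 2.0 * rho_d / 3.0)  # along -normal
        for i in range(3):
            F[i] -= coef_s * sun_dir[i] + coef_n * n[i]
    return F

# Perfectly absorbing 1 m^2 plate facing the Sun: |F| should equal P0.
F = srp_force([(1.0, (0.0, 0.0, 1.0))], (0.0, 0.0, 1.0), rho_s=0.0, rho_d=0.0)
print(math.sqrt(sum(f * f for f in F)))  # ~4.56e-06
```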
A method for evaluating the problem complex of choosing the ventilation system for a new building
DEFF Research Database (Denmark)
Hviid, Christian Anker; Svendsen, Svend
2007-01-01
The application of a ventilation system in a new building is a multidimensional complex problem that involves quantifiable and non-quantifiable data such as energy consumption, indoor environment, building integration and architectural expression. This paper presents a structured method for evaluat...
Simulation As a Method To Support Complex Organizational Transformations in Healthcare
Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos
2010-01-01
In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that
Functional analytic methods in complex analysis and applications to partial differential equations
International Nuclear Information System (INIS)
Mshimba, A.S.A.; Tutschke, W.
1990-01-01
The volume contains 24 lectures given at the Workshop on Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations held in Trieste, Italy, between 8-19 February 1988, at the ICTP. A separate abstract was prepared for each of these lectures. Refs and figs
Structure of the automated educational-methodical complex for technical disciplines
Directory of Open Access Journals (Sweden)
Вячеслав Михайлович Дмитриев
2010-12-01
The article states and solves the problem of automating and informatizing the process of training students on the basis of the introduced system-organizational forms, which have collectively received the name of educational-methodical complexes for a discipline.
Global Learning in a Geography Course Using the Mystery Method as an Approach to Complex Issues
Applis, Stefan
2014-01-01
In the study on which this essay is founded, the question examined is whether the complexity of global issues can be addressed at the level of teaching methodology. In this context, the first qualitative and constructive study was carried out researching the Mystery Method within the Thinking-Through-Geography approach (David Leat,…
Directory of Open Access Journals (Sweden)
T. Friedrich
2010-08-01
The mechanism triggering centennial-to-millennial-scale variability of the Atlantic Meridional Overturning Circulation (AMOC) in the Earth system model of intermediate complexity LOVECLIM is investigated. It is found that for several climate boundary conditions, such as low obliquity values (~22.1°) or LGM albedo, internally generated centennial-to-millennial-scale variability occurs in the North Atlantic region. Stochastic excitations of the density-driven overturning circulation in the Nordic Seas can create regional sea-ice anomalies and a subsequent reorganization of the atmospheric circulation. The resulting remote atmospheric anomalies over the Hudson Bay can release freshwater pulses into the Labrador Sea and significantly increase snowfall in this region, leading to a subsequent reduction of convective activity. The millennial-scale AMOC oscillations disappear if LGM bathymetry (with closed Hudson Bay) is prescribed or if the freshwater pulses are suppressed artificially. Furthermore, our study documents the process of AMOC recovery as well as the global marine and terrestrial carbon cycle response to centennial-to-millennial-scale AMOC variability.
Pain point system scale (PPSS): a method for postoperative pain estimation in retrospective studies
Directory of Open Access Journals (Sweden)
Gkotsi A
2012-11-01
Anastasia Gkotsi,1 Dimosthenis Petsas,2 Vasilios Sakalis,3 Asterios Fotas,3 Argyrios Triantafyllidis,3 Ioannis Vouros,3 Evangelos Saridakis,2 Georgios Salpiggidis,3 Athanasios Papathanasiou3. 1Department of Experimental Physiology, Aristotle University of Thessaloniki, Thessaloniki, Greece; 2Department of Anesthesiology, 3Department of Urology, Hippokration General Hospital, Thessaloniki, Greece. Purpose: Pain rating scales are widely used for pain assessment. Nevertheless, a new tool is required for pain assessment in retrospective studies. Methods: The postoperative pain episodes of three patient groups during the first postoperative day were analyzed. Each pain episode was assessed by a visual analog scale (VAS), a numerical rating scale, a verbal rating scale, and a new tool, the pain point system scale (PPSS), based on the analgesics administered. The type of analgesic was defined by an artificial neural network system based on the authors' clinic protocol, patient comorbidities, pain assessment tool scores, and preadministered medications. At each pain episode, each patient was asked to fill in the three pain scales. Bartlett's test and the Kaiser-Meyer-Olkin criterion were used to evaluate sample sufficiency. The proper scoring system was defined by varimax rotation. Spearman's and Pearson's coefficients assessed the correlation of the PPSS with the known pain scales. Results: A total of 262 pain episodes were evaluated in 124 patients. The PPSS scored one point for each dose of paracetamol, three points for each dose of a nonsteroidal antiinflammatory drug or codeine, and seven points for each dose of opioids. The correlation between the VAS and the PPSS was found to be strong and linear (rho: 0.715; P < 0.001 and Pearson: 0.631; P < 0.001). Conclusion: The PPSS correlated well with the known pain scales and could be used safely in the evaluation of postoperative pain in retrospective studies. Keywords: pain scale, retrospective studies, pain point system
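The scoring rule reported in the abstract (one point per paracetamol dose, three per NSAID or codeine dose, seven per opioid dose) and the rank correlation against the VAS can be sketched as follows; the episode data are invented for illustration:

```python
POINTS = {"paracetamol": 1, "nsaid": 3, "codeine": 3, "opioid": 7}

def ppss(doses):
    """Pain point system scale: sum of points over administered analgesic doses."""
    return sum(POINTS[d] for d in doses)

def spearman(xs, ys):
    """Spearman's rho via Pearson correlation of (tie-averaged) ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                       # group tied values
            avg = (i + j) / 2.0 + 1.0        # average rank for the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Four hypothetical pain episodes: administered doses and the VAS score reported.
episodes = [["paracetamol"], ["nsaid"], ["nsaid", "paracetamol"], ["opioid"]]
vas = [2, 4, 5, 8]
scores = [ppss(e) for e in episodes]   # [1, 3, 4, 7]
print(spearman(scores, vas))           # 1.0 (perfectly concordant ranks)
```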
International Nuclear Information System (INIS)
Park, Jin Beak
1995-02-01
Low-level radioactive waste management requires knowledge of the natures and quantities of radionuclides in the immobilized or packaged waste. U.S. NRC rules require programs that measure the concentrations of all relevant nuclides either directly, or indirectly by relating difficult-to-measure radionuclides to other easy-to-measure radionuclides through the application of scaling factors. Scaling factors previously developed through statistical approaches are only generic and raise many difficult problems concerning sampling procedures; generic scaling factors cannot take plant operation history into account. In this study, a method to predict plant-specific, operational-history-dependent scaling factors is developed. A realistic and detailed approach is taken to find the scaling factors in the reactor coolant. This approach begins with fission product release mechanisms and the fundamental release properties of fuel-source nuclides such as fission products and transuranic nuclides. Scaling factors for the various waste streams are derived from the predicted reactor coolant scaling factors with the aid of a radionuclide retention and buildup model. This model makes use of the radioactive material balance within the radioactive waste processing systems. Scaling factors for the reactor coolant and waste streams which can include the effects of plant operation history have thus been developed according to input parameters of the plant operation history.
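The basic scaling factor idea the study builds on can be illustrated with generic numbers (the geometric mean is a common choice for log-normally distributed activity ratios; the study's mechanistic coolant model is not reproduced here, and the nuclides and values below are placeholders):

```python
import math

def scaling_factor(dtm_activities, key_activities):
    """Geometric-mean scaling factor relating a difficult-to-measure (DTM)
    nuclide to an easy-to-measure key nuclide (e.g. Co-60)."""
    ratios = [d / k for d, k in zip(dtm_activities, key_activities)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical paired measurements from a few waste-stream samples (Bq/g):
ni63 = [0.5, 1.6]     # difficult to measure (pure beta emitter)
co60 = [50.0, 40.0]   # easy to measure by gamma spectrometry

sf = scaling_factor(ni63, co60)   # ratios 0.01 and 0.04 -> geometric mean 0.02

# Indirect estimate for a new package where only Co-60 was measured:
estimated_ni63 = sf * 50.0        # -> 1.0 Bq/g
```

A plant-specific method such as the one in the study would replace the fixed measured ratios with ratios predicted from release mechanisms and operation history.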
Energy Technology Data Exchange (ETDEWEB)
Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies the computer implementation of reduced-scaling electronic structure methods. The key concept is the sparse representation of tensors using chains of sparse maps between two index sets. The sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from the spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that depends on only a minimal number of cutoff parameters, which can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
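The elementary sparse-map operations named in the abstract (chaining, inversion, intersection) can be sketched as plain dictionaries of index sets; this is an illustration of the concept only, not the paper's C++ library:

```python
def chain(m1, m2):
    """Compose maps: i -> union of m2[j] over j in m1[i]."""
    return {i: frozenset(k for j in js for k in m2.get(j, ())) for i, js in m1.items()}

def invert(m):
    """Invert a sparse map: j -> {i : j in m[i]}."""
    out = {}
    for i, js in m.items():
        for j in js:
            out.setdefault(j, set()).add(i)
    return {j: frozenset(s) for j, s in out.items()}

def intersect(m1, m2):
    """Element-wise intersection of two maps over their common domain."""
    return {i: m1[i] & m2[i] for i in m1.keys() & m2.keys()}

# Toy example in the spirit of the paper: atoms -> AO shells, shells -> basis functions.
atom_to_shell = {0: frozenset({0, 1}), 1: frozenset({2})}
shell_to_fn = {0: frozenset({0}), 1: frozenset({1, 2}), 2: frozenset({3})}

atom_to_fn = chain(atom_to_shell, shell_to_fn)   # chained map: atoms -> basis functions
print(atom_to_fn[0])                             # frozenset({0, 1, 2})
print(invert(atom_to_shell)[2])                  # frozenset({1})
```

Because only the nonzero index blocks are ever stored or iterated over, algorithms built from these primitives inherit linear scaling when the underlying maps are sparse.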
Development of large scale industrial complex and its pollution. Case study of Kashima area
Energy Technology Data Exchange (ETDEWEB)
Nagai, S
1975-01-01
The development of Kashima industrial complex which embraces three townships started in 1960 to promote both agricultural and industrial developments using the most advanced techniques available for environmental pollution control. The chronological development progress is described with reference to the capital investment, gross product, employment and labor supply, population, status of the use of agricultural land, annual revenue and expenditure of three townships, and township tax. The environmental pollution control policies and measures taken since 1964 are reviewed. The emphasis was placed on preliminary investigations by various means and emission standards were applied. However, many incidences of pollution damage occurred due to operational errors and accidental causes. The emission quantity of sulfur dioxide is to be reduced from 8212 N cu m/h in 1973 to 4625 N cu m/h in 1976.
van der Hilst, R. D.; de Hoop, M. V.; Shim, S. H.; Shang, X.; Wang, P.; Cao, Q.
2012-04-01
Over the past three decades, tremendous progress has been made with the mapping of mantle heterogeneity and with the understanding of these structures in terms of, for instance, the evolution of Earth's crust, continental lithosphere, and thermo-chemical mantle convection. Converted-wave imaging (e.g., receiver functions) and reflection seismology (e.g., SS stacks) have helped constrain interfaces in the crust and mantle; surface wave dispersion (from earthquake or ambient noise signals) characterizes wavespeed variations in continental and oceanic lithosphere, and body wave and multi-mode surface wave data have been used to map trajectories of mantle convection and delineate mantle regions of anomalous elastic properties. Collectively, these studies have revealed substantial ocean-continent differences and suggest that convective flow is strongly influenced by, but permitted to cross, the upper mantle transition zone. Many questions have remained unanswered, however, and further advances in understanding require more accurate depictions of Earth's heterogeneity at a wider range of length scales. To meet this challenge we need new observations—more, better, and different types of data—and methods that help us extract and interpret more information from the rapidly growing volumes of broadband data. The huge data volumes and the desire to extract more signal from them mean that we have to go beyond 'business as usual' (that is, simplified theory, manual inspection of seismograms, …). Indeed, this inspires the development of automated full-wave methods, both for tomographic delineation of smooth wavespeed variations and for the imaging (for instance through inverse scattering) of medium contrasts. Adjoint tomography and reverse time migration, which are closely related wave equation methods, have begun to revolutionize seismic inversion of global and regional waveform data. In this presentation we will illustrate this development - and its promise - drawing from our work
Modelling H5N1 in Bangladesh across spatial scales: Model complexity and zoonotic transmission risk
Directory of Open Access Journals (Sweden)
Edward M. Hill
2017-09-01
Highly pathogenic avian influenza H5N1 remains a persistent public health threat, capable of causing infection in humans with a high mortality rate while simultaneously negatively impacting the livestock industry. A central question is to determine the regions that are likely sources of newly emerging influenza strains with pandemic-causing potential. A suitable candidate is Bangladesh, one of the most densely populated countries in the world, with an intensifying farming system. It is therefore vital to establish the key factors, specific to Bangladesh, that enable both continued transmission within poultry and spillover across the human-animal interface. We apply a modelling framework to H5N1 epidemics in the Dhaka region of Bangladesh, occurring from 2007 onwards, that resulted in large outbreaks in the poultry sector and a limited number of confirmed human cases. This model consisted of separate poultry transmission and zoonotic transmission components. Utilising poultry farm spatial and population information, a set of competing nested models of varying complexity was fitted to the observed case data, with parameter inference carried out using Bayesian methodology and goodness of fit verified by stochastic simulations. For the poultry transmission component, successfully identifying a model of minimal complexity that enabled accurate prediction of the size and spatial distribution of cases in H5N1 outbreaks was found to depend on the administration level being analysed. Non-optimal reporting of infected premises was a consistent outcome in each poultry epidemic of interest, though across the outbreaks analysed there were substantial differences in the estimated transmission parameters. For the zoonotic transmission component, the main contributor to spillover transmission of H5N1 in Bangladesh was found to differ from one poultry epidemic to another. We conclude by discussing possible explanations for
Anderson, Brian P.; Greathouse, James S.; Powell, Jessica M.; Ross, James C.; Schairer, Edward T.; Kushner, Laura; Porter, Barry J.; Goulding, Patrick W., II; Zwicker, Matthew L.; Mollmann, Catherine
2017-01-01
A two-week test campaign was conducted in the National Full-Scale Aerodynamics Complex 80 x 120-ft Wind Tunnel in support of Orion parachute pendulum mitigation activities. The test gathered static aerodynamic data using an instrumented, 3-tether system attached to the parachute vent in combination with an instrumented parachute riser. Dynamic data was also gathered by releasing the tether system and measuring canopy performance using photogrammetry. Several canopy configurations were tested and compared against the current Orion parachute design to understand changes in drag performance and aerodynamic stability. These configurations included canopies with varying levels and locations of geometric porosity as well as sails with increased levels of fullness. In total, 37 runs were completed for a total of 392 data points. Immediately after the end of the testing campaign a down-select decision was made based on preliminary data to support follow-on sub-scale air drop testing. A summary of a more rigorous analysis of the test data is also presented.
Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng
2015-01-01
Switching between different alternative polyadenylation (APA) sites plays an important role in the fine-tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
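The distinction drawn above can be made concrete with a toy read-count table: a complex switch (reads moving from the proximal and distal sites to the middle site) leaves the average 3'-UTR length unchanged, so a trend-style statistic sees nothing while the independence test does. The chi-square computation below is the standard Pearson statistic; the mean-site-index difference is a simplified stand-in for the linear trend test, and the counts are invented:

```python
def chi2_independence(table):
    """Pearson chi-square statistic for an R x C contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    return sum((table[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
               for i in range(len(row)) for j in range(len(col)))

def mean_site_index(counts):
    """Average APA-site index weighted by reads (proxy for mean 3'-UTR length)."""
    return sum(i * n for i, n in enumerate(counts)) / sum(counts)

# Reads per APA site (proximal, middle, distal) in two conditions:
normal = [100, 0, 100]
tumour = [0, 200, 0]

# A trend-style summary sees no change in average 3'-UTR length ...
print(mean_site_index(normal), mean_site_index(tumour))  # 1.0 1.0
# ... but the independence test detects the (complex) switching event.
print(chi2_independence([normal, tumour]))               # 400.0
```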
Directory of Open Access Journals (Sweden)
Y. Zhao
2017-06-01
Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine the relational data among the forming shape, the processing path, and the process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on obtaining the deformation by applying the strain, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, with a substantial reduction in calculation time. The application of the simplified deformation simulation method was then further explored in the case of multiple rolling loading paths, and it was also utilized to calculate the local line rolling forming of a typical complex curvature plate of ships. The research findings indicate that the simplified deformation simulation method is an effective tool for rapidly obtaining the relationships between the forming shape, the processing path, and the process parameters.
Ravichandran, R; Rajendran, M; Devapiriam, D
2014-03-01
Quercetin has been found to chelate cadmium ions and to scavenge free radicals produced by cadmium. Hence a new complex of quercetin with cadmium was synthesised, and its structure was determined by UV-vis spectrophotometry, infrared spectroscopy, thermogravimetry and differential thermal analysis (UV-vis, IR, TGA and DTA). The equilibrium stability constants of the quercetin-cadmium complex were determined by Job's method. The determined stability constant of the quercetin-cadmium complex is 2.27×10⁶ at pH 4.4 and 7.80×10⁶ at pH 7.4. It was found that quercetin and the cadmium ion form a 1:1 complex at both pH 4.4 and pH 7.4. The structures of the compounds were elucidated on the basis of the obtained results. Furthermore, the antioxidant activities of free quercetin and the quercetin-cadmium complex were determined by DPPH and ABTS assays.
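Job's method of continuous variation, used above to obtain the stability constants, can be sketched numerically: holding the total concentration fixed while varying the mole fraction, the 1:1 complex concentration (and hence the measured absorbance) peaks at a mole fraction of 0.5. The total concentration below is an illustrative assumption; the K value is the pH 4.4 result quoted in the abstract:

```python
import math

def complex_conc(M0, L0, K):
    """[ML] for M + L <=> ML with formation constant K.

    Solves K = c / ((M0 - c)(L0 - c)); the smaller quadratic root is physical.
    """
    s = M0 + L0 + 1.0 / K
    return (s - math.sqrt(s * s - 4.0 * M0 * L0)) / 2.0

C = 1e-4      # total concentration, mol/L (illustrative)
K = 2.27e6    # stability constant at pH 4.4, from the study

xs = [i / 20 for i in range(1, 20)]               # mole fraction of cadmium
job = [complex_conc(x * C, (1 - x) * C, K) for x in xs]

# The Job plot maximum falls at x = 0.5, indicating a 1:1 complex.
print(xs[job.index(max(job))])   # 0.5
```

Because the sum M0 + L0 is fixed, [ML] grows with the product M0·L0 = x(1-x)C², which is maximised at x = 0.5; an n:1 complex would shift the peak accordingly.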
Directory of Open Access Journals (Sweden)
Олег Богданович ЗАЧКО
2016-03-01
Methods and models of safety-oriented project management for the development of complex systems are proposed, resulting from the convergence of existing approaches in project management, in contrast to the mechanism of value-oriented management. A cognitive model of safety-oriented project management of the development of complex systems is developed, which provides a synergistic effect: moving the system from the original (pre-project) condition to the state that is optimal from the viewpoint of life safety, the post-project state. An approach to assessing project complexity is proposed, which consists in taking into account the seasonal component of the time characteristic of the life cycles of complex organizational and technical systems with occupancy. This makes it possible to account for the seasonal component in simulation models of the life cycle of product operation in a complex organizational and technical system, and to model the critical points of operation of systems with occupancy, which forms a new methodology for safety-oriented management of projects, programs and portfolios of projects with formalization of the elements of complexity.
International Nuclear Information System (INIS)
Nielsen, Joseph; Tokuhiro, Akira; Khatry, Jivan; Hiromoto, Robert
2014-01-01
Traditional probabilistic risk assessment (PRA) methods have been developed to evaluate the risk associated with complex systems; however, PRA methods lack the capability to evaluate complex dynamic systems. In these systems, the time and energy scales associated with transient events may vary as a function of the transition times and energies required to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately, DPRA methods introduce issues associated with the combinatorial explosion of states. In order to address this combinatorial complexity, a branch-and-bound optimization technique is applied to the DPRA formalism to control the combinatorial state explosion. In addition, a new characteristic scaling metric (LENDIT – length, energy, number, distribution, information and time) is proposed to supply linear constraints that guide the branch-and-bound algorithm and limit the number of possible states to be analyzed. The LENDIT characterization is divided into four groups or sets – 'state, system, resource and response' (S2R2) – describing reactor operations (normal and off-normal). In this paper we introduce the branch-and-bound DPRA approach and the application of LENDIT scales and S2R2 sets to a station blackout (SBO) transient. (author)
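The pruning idea can be sketched generically: a dynamic event tree is expanded branch by branch, and any branch whose accumulated "cost" violates a linear bound (standing in here for the LENDIT-style constraints; the numbers are illustrative, not reactor data) is cut off before its subtree is generated:

```python
# Sketch of branch-and-bound pruning of a dynamic event tree.
# Each branch point offers outcomes with an associated cost; a linear
# constraint (a stand-in for a LENDIT-style bound) prunes infeasible branches.
# Costs and the limit are illustrative.

def count_feasible(depth, cost, limit, branch_costs):
    """Count leaf states that survive; branches violating the bound are pruned."""
    if depth == 0:
        return 1
    total = 0
    for c in branch_costs:
        if cost + c <= limit:          # linear constraint: prune otherwise
            total += count_feasible(depth - 1, cost + c, limit, branch_costs)
    return total

branch_costs = [1.0, 2.0, 5.0]
pruned = count_feasible(3, 0.0, limit=4.0, branch_costs=branch_costs)
unpruned = len(branch_costs) ** 3
print(pruned, unpruned)  # → 4 27: the bound removes most of the state tree
```

The same structure applies when several constraints (one per LENDIT scale) must all hold: the feasibility test becomes a conjunction of linear inequalities.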
Directory of Open Access Journals (Sweden)
S. Ebrahimnejad
2012-04-01
Full Text Available The complexity of large-scale projects has led to numerous risks in their life cycle. This paper presents a new risk evaluation approach for ranking the high risks in large-scale projects and improving the performance of these projects. It is based on fuzzy set theory, an effective tool for handling uncertainty, and on an extended VIKOR method, one of the well-known multiple-criteria decision-making (MCDM) methods. The proposed decision-making approach integrates the knowledge and experience acquired from professional experts, since they perform the risk identification as well as the subjective judgments of the performance ratings for high risks in terms of conflicting criteria, including probability, impact, quickness of reaction toward risk, event measure quantity and event capability. The most notable difference between the proposed VIKOR method and its traditional version is the use of fuzzy decision-matrix data to calculate the ranking index without the need to consult the experts again. Finally, the proposed approach is illustrated with a real case study of an Iranian power plant project, and the results are compared with those of two well-known decision-making methods under a fuzzy environment.
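The crisp core of VIKOR (the paper's fuzzy extension and expert-derived weights are beyond this sketch) ranks alternatives by a compromise index Q built from the weighted group utility S and the individual regret R. The risk scores and weights below are invented for illustration:

```python
# Sketch of crisp VIKOR ranking. Higher score = more severe on each criterion;
# the top-ranked risk is the one closest to the worst-case "ideal". Scores,
# weights and v are illustrative, not the paper's fuzzy data.

def vikor(F, w, v=0.5):
    """F[i][j]: rating of alternative i on criterion j; returns Q indices."""
    m, n = len(F), len(F[0])
    f_star = [max(F[i][j] for i in range(m)) for j in range(n)]
    f_minus = [min(F[i][j] for i in range(m)) for j in range(n)]
    S, R = [], []
    for i in range(m):
        terms = [w[j] * (f_star[j] - F[i][j]) / (f_star[j] - f_minus[j])
                 for j in range(n)]
        S.append(sum(terms))
        R.append(max(terms))
    s_lo, s_hi, r_lo, r_hi = min(S), max(S), min(R), max(R)
    return [v * (S[i] - s_lo) / (s_hi - s_lo)
            + (1 - v) * (R[i] - r_lo) / (r_hi - r_lo) for i in range(m)]

# three risks rated on (probability, impact, reaction quickness)
F = [[0.9, 0.7, 0.8],
     [0.4, 0.9, 0.3],
     [0.2, 0.3, 0.5]]
w = [0.4, 0.4, 0.2]
Q = vikor(F, w)
ranking = sorted(range(len(Q)), key=lambda i: Q[i])  # lowest Q = top risk
print(ranking)  # → [0, 1, 2]
```

The fuzzy variant replaces the crisp ratings with fuzzy numbers and defuzzifies before (or while) computing S, R and Q.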
Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method
International Nuclear Information System (INIS)
Cadinu, F.; Kozlowski, T.; Dinh, T.N.
2007-01-01
Over the past decades, analyses of transient processes and accidents in nuclear power plants have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as a simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (e.g. in the downcomer and lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to perform analyses of multi-dimensional flow, predominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework has led to the development of the heterogeneous multi-scale method (HMM)
Igras, Susan; Sinai, Irit; Mukabatsinda, Marie; Ngabo, Fidele; Jennings, Victoria; Lundgren, Rebecka
2014-01-01
There is no guarantee that a successful pilot program introducing a reproductive health innovation can also be expanded successfully to the national or regional level, because the scaling-up process is complex and multilayered. This article describes how a successful pilot program to integrate the Standard Days Method (SDM) of family planning into existing Ministry of Health services was scaled up nationally in Rwanda. Much of the success of the scale-up effort was due to systematic use of monitoring and evaluation (M&E) data from several sources to make midcourse corrections. Four lessons learned illustrate this crucially important approach. First, ongoing M&E data showed that provider training protocols and client materials that worked in the pilot phase did not work at scale; therefore, we simplified these materials to support integration into the national program. Second, triangulation of ongoing monitoring data with national health facility and population-based surveys revealed serious problems in supply chain mechanisms that affected SDM (and the accompanying CycleBeads client tool) availability and use; new procedures for ordering supplies and monitoring stockouts were instituted at the facility level. Third, supervision reports and special studies revealed that providers were imposing unnecessary medical barriers to SDM use; refresher training and revised supervision protocols improved provider practices. Finally, informal environmental scans, stakeholder interviews, and key events timelines identified shifting political and health policy environments that influenced scale-up outcomes; ongoing advocacy efforts are addressing these issues. The SDM scale-up experience in Rwanda confirms the importance of monitoring and evaluating programmatic efforts continuously, using a variety of data sources, to improve program outcomes. PMID:25276581
Dondelinger, Robert M
2004-01-01
This complex method of equipment replacement planning is a methodology; it is a means to an end, a process that focuses on the equipment most in need of replacement rather than an end in itself. It uses data available from the maintenance management database and attempts to quantify the subjective items important in making equipment replacement decisions. Like the simple method of the last issue, it is a starting point--albeit an advanced starting point--which the user can modify to fit their particular organization, but the complex method leaves room for expansion. It is based on sound logic and documented facts, is fully defensible during the decision-making process, and will serve your organization well and provide a structure for your equipment replacement planning decisions.
International Nuclear Information System (INIS)
Ramakrishna Reddy, S.; Srinivasan, R.; Mallika, C.; Kamachi Mudali, U.; Natarajan, R.
2012-01-01
Spectrophotometric methods employing numerous chromogenic reagents such as thiourea, 1,10-phenanthroline, thiocyanate and tropolone are reported in the literature for the estimation of very low concentrations of Ru. In the present work, a sensitive spectrophotometric method has been developed for the determination of ruthenium in the concentration range 1.5 to 6.5 ppm. The method is based on the reaction of ruthenium with barbituric acid to produce the ruthenium(II) tris-violurate complex, (Ru(H₂Va)₃)⁻¹, which gives a stable deep-red coloured solution. The maximum absorption of the complex is at 491 nm, due to the inverted t₂g → Π(L-L ligand) electron-transfer transition. The molar absorptivity of the coloured species is 9851 dm³ mol⁻¹ cm⁻¹
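The quantitation behind such an assay is the Beer-Lambert law, c = A/(εl). A minimal sketch using the reported molar absorptivity at 491 nm, with an assumed 1 cm path length and an illustrative absorbance reading (not measured data):

```python
# Beer-Lambert sketch for the Ru-violurate assay. Molar absorptivity is from
# the abstract; path length, absorbance and the use of Ru's molar mass for the
# ppm conversion are assumptions for illustration.

EPS = 9851.0    # dm^3 mol^-1 cm^-1, at 491 nm
PATH = 1.0      # cm (assumed cuvette)

def concentration_ppm(absorbance, molar_mass=101.07):
    """Ru concentration in ppm (mg/dm^3); Ru molar mass ~101.07 g/mol."""
    c_molar = absorbance / (EPS * PATH)     # mol/dm^3
    return c_molar * molar_mass * 1000.0    # mg/dm^3 = ppm

print(round(concentration_ppm(0.30), 2))  # → 3.08, inside the 1.5-6.5 ppm range
```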
Complex Hand Dexterity: A Review of Biomechanical Methods for Measuring Musical Performance
Directory of Open Access Journals (Sweden)
Cheryl Diane Metcalf
2014-05-01
Full Text Available Complex hand dexterity is fundamental to our interactions with the physical, social and cultural environment. Dexterity can be an expression of creativity and precision in a range of activities, including musical performance. Little is understood about complex hand dexterity or how virtuoso expertise is acquired, due to the versatility of movement combinations available to complete any given task. This has historically limited progress in the field because of difficulties in measuring movements of the hand. Recent developments in methods of motion capture and analysis mean it is now possible to explore the intricate movements of the hand and fingers. These methods give us insights into the neurophysiological mechanisms underpinning complex hand dexterity and motor learning. They also allow investigation into the key factors that contribute to injury, recovery and functional compensation. The application of such analytical techniques within musical performance provides a multidisciplinary framework for purposeful investigation into the process of learning and skill acquisition in instrumental performance. These highly skilled manual and cognitive tasks represent the ultimate achievement in complex hand dexterity. This paper will review methods of assessing instrumental performance in music, focusing specifically on biomechanical measurement and the associated technical challenges faced when measuring highly dexterous activities.
Method for data compression by associating complex numbers with files of data values
Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur
1998-02-10
A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
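The decompression direction of this scheme can be sketched with Newton's method as the iterative root finder (an assumed choice; the patent does not fix a particular iteration): each entry's point in the complex plane is driven to one of the polynomial's roots, and the value attached to that root becomes the entry's data value:

```python
# Sketch of the root-generated data file (RGDF) decoding step. Newton's
# method is an assumed choice of iterative root finder; roots, value map and
# the sample point are illustrative.

def newton_root(z, roots, iters=60):
    """Drive z to a root of p(z) = prod(z - r) by Newton iteration."""
    for _ in range(iters):
        p, dp = 1.0 + 0.0j, 0.0 + 0.0j
        for r in roots:
            dp = dp * (z - r) + p   # product-rule accumulation of p'(z)
            p *= (z - r)
        if abs(dp) < 1e-30:
            break
        z -= p / dp
    return min(roots, key=lambda r: abs(z - r))   # snap to nearest root

roots = [1 + 0j, -0.5 + 0.866j, -0.5 - 0.866j]    # ~cube roots of unity
value_map = {roots[0]: 7, roots[1]: 42, roots[2]: 99}
print(value_map[newton_root(0.9 + 0.1j, roots)])  # → 7 (basin of the root 1)
```

Compression (the EDC direction) is the hard inverse search: finding roots and a value map whose basins reproduce a given file, within the stated ~10% error before the correction file is applied.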
Workshop on Recent Trends in Complex Methods for Partial Differential Equations
Celebi, A; Tutschke, Wolfgang
1999-01-01
This volume is a collection of manuscripts mainly originating from talks and lectures given at the Workshop on Recent Trends in Complex Methods for Partial Differential Equations held from July 6 to 10, 1998 at the Middle East Technical University in Ankara, Turkey, sponsored by The Scientific and Technical Research Council of Turkey and the Middle East Technical University. This workshop is a continuation of two workshops from 1988 and 1993 at the International Centre for Theoretical Physics in Trieste, Italy, entitled Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations. Since classical complex analysis of one and several variables has a long tradition it is of high level. But most of its basic problems are solved nowadays so that within the last few decades it has lost more and more attention. The area of complex and functional analytic methods in partial differential equations, however, is still a growing and flourishing field, in particular as these ...
Cork-resin ablative insulation for complex surfaces and method for applying the same
Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)
1980-01-01
A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.
Wang, Yu; Chou, Chia-Chun
2018-05-01
The coupled complex quantum Hamilton-Jacobi equations for electronic nonadiabatic transitions are approximately solved by propagating individual quantum trajectories in real space. Equations of motion are derived through use of the derivative propagation method for the complex actions and their spatial derivatives for wave packets moving on each of the coupled electronic potential surfaces. These equations for two surfaces are converted into the moving frame with the same grid point velocities. Excellent wave functions can be obtained by making use of the superposition principle even when nodes develop in wave packet scattering.
Complexity on dwarf galaxy scales: A bimodal distribution function in Sculptor
Breddels, Maarten A.; Helmi, Amina
2014-01-01
In our previous work, we presented Schwarzschild models of the Sculptor dwarf spheroidal galaxy demonstrating that this system could be embedded in dark matter halos that are either cusped or cored. Here, we show that the non-parametric distribution function recovered through Schwarzschild's method
International Nuclear Information System (INIS)
Woo, M.K.; Cunningham, J.R.
1990-01-01
In the convolution/superposition method of photon beam dose calculations, inhomogeneities are usually handled by using some form of scaling involving the relative electron densities of the inhomogeneities. In this paper the accuracy of density scaling as applied to primary electrons generated in photon interactions is examined. Monte Carlo calculations are compared with density scaling calculations for air and cork slab inhomogeneities. For individual primary photon kernels as well as for photon interactions restricted to a thin layer, the results can differ significantly, by up to 50%, between the two calculations. However, for realistic photon beams where interactions occur throughout the whole irradiated volume, the discrepancies are much less severe. The discrepancies for the kernel calculation are attributed to the scattering characteristics of the electrons and the consequent oversimplified modeling used in the density scaling method. A technique called the kernel integration technique is developed to analyze the general effects of air and cork inhomogeneities. It is shown that the discrepancies become significant only under rather extreme conditions, such as immediately beyond the surface after a large air gap. In electron beams all the primary electrons originate from the surface of the phantom and the errors caused by simple density scaling can be much more significant. Various aspects relating to the accuracy of density scaling for air and cork slab inhomogeneities are discussed
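The density-scaling idea under examination reduces, in its simplest form, to rescaling a geometric path segment by segment by relative electron density, so that an air gap contributes almost nothing to the radiological depth. A toy geometry (segment lengths and densities are illustrative):

```python
# Sketch of radiological-depth style density scaling: each segment of the ray
# is weighted by its relative electron density. Geometry is illustrative; the
# paper's point is that this simple scaling breaks down for individual
# electron kernels near interfaces.

def radiological_depth(segments):
    """segments: list of (geometric_length_cm, relative_electron_density)."""
    return sum(length * rho for length, rho in segments)

# 5 cm water, 3 cm air gap (rho ~ 0.0012), then 4 cm water:
depth = radiological_depth([(5.0, 1.0), (3.0, 0.0012), (4.0, 1.0)])
print(round(depth, 4))  # → 9.0036: the 3 cm air gap nearly vanishes
```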
A novel method for preparation of HAMLET-like protein complexes.
Permyakov, Sergei E; Knyazeva, Ekaterina L; Leonteva, Marina V; Fadeev, Roman S; Chekanov, Aleksei V; Zhadan, Andrei P; Håkansson, Anders P; Akatov, Vladimir S; Permyakov, Eugene A
2011-09-01
Some natural proteins induce tumor-selective apoptosis. α-Lactalbumin (α-LA), a milk calcium-binding protein, is converted into an antitumor form, called HAMLET/BAMLET, via partial unfolding and association with oleic acid (OA). Besides triggering multiple cell death mechanisms in tumor cells, HAMLET exhibits bactericidal activity against Streptococcus pneumoniae. The existing methods for preparation of active complexes of α-LA with OA employ neutral pH solutions, which greatly limit water solubility of OA. Therefore these methods suffer from low scalability and/or heterogeneity of the resulting α-LA–OA samples. In this study we present a novel method for preparation of α-LA–OA complexes using alkaline conditions that favor aqueous solubility of OA. The unbound OA is removed by precipitation under acidic conditions. The resulting sample, bLA-OA-45, bears 11 OA molecules and exhibits physico-chemical properties similar to those of BAMLET. Cytotoxic activities of bLA-OA-45 against human epidermoid larynx carcinoma and S. pneumoniae D39 cells are close to those of HAMLET. Treatment of S. pneumoniae with bLA-OA-45 or HAMLET induces depolarization and rupture of the membrane. The cells are markedly rescued from death upon pretreatment with an inhibitor of Ca²⁺ transport. Hence, the activation mechanisms of S. pneumoniae death are analogous for these two complexes. The developed express method for preparation of the active α-LA–OA complex is high-throughput and suited for development of other protein complexes with low-molecular-weight amphiphilic substances possessing valuable cytotoxic properties. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Models, methods and software tools for building complex adaptive traffic systems
International Nuclear Information System (INIS)
Alyushin, S.A.
2011-01-01
The paper studies modern methods and tools for simulating the behavior of complex adaptive systems (CAS), together with existing traffic-modeling systems in simulators and their characteristics, and proposes requirements for assessing the suitability of a system to simulate CAS behavior in simulators. The author has developed a model of adaptive-agent representation and its functioning environment to meet the requirements set above, and presents methods of agent interaction and of conflict resolution in simulated traffic situations. A simulation system realizing computer modeling of CAS behavior in traffic situations has been created
Directory of Open Access Journals (Sweden)
Jelena Vukomanovic
2014-04-01
Full Text Available Values associated with scenic beauty are common “pull factors” for amenity migrants; however, the specific landscape features that attract amenity migration are poorly understood. In this study we focused on three visual quality metrics in the intermountain West (USA), with the objective of exploring the relationship between the location of exurban homes and aesthetic landscape preference, as exemplified through greenness, viewshed size, and terrain ruggedness. Using viewshed analysis, we compared the viewsheds of actual exurban houses to the viewsheds of randomly distributed simulated (validation) houses. We found that the actual exurban households can see significantly more vegetation and a more rugged (complex) terrain than the simulated houses. Actual exurban homes see a more rugged terrain but do not necessarily see the highest peaks, suggesting that visual complexity throughout the viewshed may be more important. The viewsheds visible from the actual exurban houses were significantly larger than those visible from the simulated houses, indicating that visual scale is important to the general aesthetic experiences of exurbanites. The differences in visual quality metric values between actual exurban and simulated viewsheds call into question the use of county-level scales of analysis for the study of landscape preferences, which may miss key landscape aesthetic drivers of preference.
A low complexity method for the optimization of network path length in spatially embedded networks
International Nuclear Information System (INIS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li; Ming, Yong; Chen, Sheng-Yong; Wang, Wan-Liang
2014-01-01
The average path length of a network is an important index reflecting the network's transmission efficiency. In this paper, we propose a new method of decreasing the average path length by adding edges. A new indicator is presented, incorporating traffic flow demand, to assess the decrease in the average path length when a new edge is added during the optimization process. With the help of the indicator, edges are selected and added into the network one by one. The new method has a relatively small computational time complexity in comparison with some traditional methods. In numerical simulations, the new method is applied to some synthetic spatially embedded networks. The results show that the method performs competitively in decreasing the average path length. Then, as an example of an application of this new method, it is applied to the road network of Hangzhou, China. (paper)
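The edge-addition strategy can be illustrated with a plain greedy pass (the paper's indicator additionally weights by traffic flow demand, which this sketch omits): on a small ring network, pick the candidate edge whose addition most reduces the average shortest-path length:

```python
# Greedy edge addition to shrink average path length (APL), via BFS all-pairs
# shortest paths. The ring topology and the unweighted indicator are
# illustrative simplifications of the paper's traffic-aware indicator.

from collections import deque
from itertools import combinations

def apl(n, edges):
    """Average shortest-path length over ordered node pairs (unweighted)."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total = 0
    for s in range(n):
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

n = 8
edges = {(i, (i + 1) % n) for i in range(n)}          # ring of 8 nodes
candidates = [e for e in combinations(range(n), 2)
              if e not in edges and (e[1], e[0]) not in edges]
best = min(candidates, key=lambda e: apl(n, edges | {e}))
print(best)  # → (0, 4): a diameter chord shortens the most paths
```

Repeating the argmin-and-add step edge by edge reproduces the one-by-one construction described in the abstract.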
International Nuclear Information System (INIS)
Zolin, V.F.; Koreneva, L.G.; Serbinova, T.A.; Tsaryuk, V.I.
1975-01-01
The structure of pyridoxalidene amino acid complexes was studied by circular dichroism, magnetic circular dichroism and luminescence spectroscopy. It was shown that these are two-ligand complexes, whereby in the case of those based on valine, leucine and isoleucine the chromophores are almost perpendicular to one another. In the case of complexes based on glycine and alanine the co-ordination sphere is strongly deformed. (author)
APINetworks Java. A Java approach to the efficient treatment of large-scale complex networks
Muñoz-Caro, Camelia; Niño, Alfonso; Reyes, Sebastián; Castillo, Miriam
2016-10-01
We present a new version of the core structural package of our Application Programming Interface, APINetworks, for the treatment of complex networks in arbitrary computational environments. The new version is written in Java and presents several advantages over the previous C++ version: the portability of Java code, the ease of implementing object-oriented designs, and the simplicity of memory management. In addition, new data structures are introduced for storing the sets of nodes and edges. Also, by resorting to the different garbage collectors currently available in the JVM, the Java version is much more efficient than the C++ one with respect to memory management. In particular, the G1 collector is the most efficient one because of the parallel execution of G1 and the Java application. Using G1, APINetworks Java outperforms the C++ version and the well-known NetworkX and JGraphT packages in the building and BFS traversal of linear and complete networks. The better memory management of the present version allows for the modeling of much larger networks.
Directory of Open Access Journals (Sweden)
Ángel Vázquez Alonso
2005-05-01
Full Text Available The scarce attention to assessment and evaluation in science education research has been especially harmful for Science-Technology-Society (STS) education, due to the dialectic, tentative, value-laden, and controversial nature of most STS topics. To overcome the methodological pitfalls of the STS assessment instruments used in the past, an empirically developed instrument (VOSTS, Views on Science-Technology-Society) has been suggested. Some methodological proposals, namely multiple response models and the computation of a global attitudinal index, were suggested to improve item implementation. The final step of these methodological proposals requires the categorization of STS statements. This paper describes the process of categorization through a scaling procedure conducted by a panel of experts, acting as judges, according to the body of knowledge from the history, epistemology, and sociology of science. The statement categorization allows for the sound foundation of STS items, which is useful in educational assessment and science education research, and may also increase teachers’ self-confidence in the development of the STS curriculum for science classrooms.
The complexity of millennial-scale variability in southwestern Europe during MIS 11
Oliveira, Dulce; Desprat, Stéphanie; Rodrigues, Teresa; Naughton, Filipa; Hodell, David; Trigo, Ricardo; Rufino, Marta; Lopes, Cristina; Abrantes, Fátima; Sánchez Goñi, Maria Fernanda
2016-11-01
Climatic variability of Marine Isotope Stage (MIS) 11 is examined using a new high-resolution direct land-sea comparison from the SW Iberian margin Site U1385. This study, based on pollen and biomarker analyses, documents regional vegetation, terrestrial climate and sea surface temperature (SST) variability. Suborbital climate variability is revealed by a series of forest decline events suggesting repeated cooling and drying episodes in SW Iberia throughout MIS 11. Only the most severe events on land are coeval with SST decreases, under larger ice volume conditions. Our study shows that the diverse expression (magnitude, character and duration) of the millennial-scale cooling events in SW Europe relies on atmospheric and oceanic processes whose predominant role likely depends on baseline climate states. Repeated atmospheric shifts recalling the positive North Atlantic Oscillation mode, inducing dryness in SW Iberia without systematical SST changes, would prevail during low ice volume conditions. In contrast, disruption of the Atlantic meridional overturning circulation (AMOC), related to iceberg discharges, colder SST and increased hydrological regime, would be responsible for the coldest and driest episodes of prolonged duration in SW Europe.
Energy Technology Data Exchange (ETDEWEB)
Fresco, G F [Genoa Univ. (Italy). Dept. of Internal Medicine
1978-06-01
A new RIA method for the detection of circulating immune complexes and antibodies arising in the course of viral hepatitis is described. It involves the use of ¹²⁵I-labeled antibodies and foresees the possibility of employing immune complex-coated polypropylene tubes. This simple and sensitive procedure takes into account the possibility that the immune complexes may be adsorbed by the surface of the polypropylene tubes during the period in which the serum remains there.
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei
2017-02-01
Studying small-scale geologic discontinuities, such as faults, cavities and fractures, plays a vital role in analyzing the inner conditions of reservoirs, as these geologic structures and elements can provide storage spaces and migration pathways for petroleum. However, these geologic discontinuities have weak energy and are easily contaminated with noise, and therefore effectively extracting them from seismic data becomes a challenging problem. In this paper, a method for detecting small-scale discontinuities using dictionary learning and sparse representation is proposed that can dig up high-resolution information by sparse coding. A K-SVD (K-means clustering via Singular Value Decomposition) sparse representation model containing a two-stage iteration procedure, sparse coding and dictionary updating, is suggested for mathematically expressing these seismic small-scale discontinuities. Generally, the orthogonal matching pursuit (OMP) algorithm is employed for sparse coding. However, that method can only update one dictionary atom at a time. In order to improve calculation efficiency, a regularized version of the OMP algorithm is presented for simultaneously updating several atoms at a time. Two numerical experiments demonstrate the validity of the developed method for clarifying and enhancing small-scale discontinuities. The field example of carbonate reservoirs further demonstrates its effectiveness in revealing masked tiny faults and small-scale cavities.
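The sparse-coding stage mentioned above can be sketched with a minimal orthogonal matching pursuit over a toy dictionary (not seismic data): greedily pick the atom most correlated with the residual, then re-fit the coefficients by least squares; the regularized variant in the paper extends this to several atoms per iteration:

```python
# Minimal OMP sketch, the sparse-coding step of K-SVD. The dictionary and
# the 2-sparse test signal are illustrative constructions, not seismic data.

import numpy as np

def omp(D, y, k):
    """Sparse-code y over unit-norm dictionary columns D, selecting k atoms."""
    residual, support, coef = y.astype(float), [], None
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        if support:
            corr[support] = 0.0            # do not re-select chosen atoms
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

r2 = 1 / np.sqrt(2)
D = np.array([[1, 0, 0, 0, r2, 0],
              [0, 1, 0, 0, r2, 0],
              [0, 0, 1, 0, 0, r2],
              [0, 0, 0, 1, 0, r2]])        # six unit-norm atoms in R^4
y = 2.0 * D[:, 0] - 1.5 * D[:, 2]           # truly 2-sparse signal
support, coef = omp(D, y, k=2)
print(sorted(support))  # → [0, 2]: the generating atoms are recovered
```

In K-SVD proper, this coding step alternates with an SVD-based update of each dictionary atom.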
A Method of Vector Map Multi-scale Representation Considering User Interest on Subdivision Gird
Directory of Open Access Journals (Sweden)
YU Tong
2016-12-01
Full Text Available Compared with traditional spatial data models and methods, global subdivision grids show great advantages in the organization and expression of massive spatial data. In view of this, a method of vector-map multi-scale representation considering user interest on a subdivision grid is proposed. First, a spatial interest field is built using a large number of POI data points to describe the spatial distribution of user interest in geographic information. Second, spatial factors are classified and graded, and their representation scale ranges are determined. Finally, different levels of subdivision surfaces are divided based on GeoSOT subdivision theory, and the correspondence between subdivision level and scale is established. According to the user interest of the subdivision surfaces, spatial features can be expressed at different degrees of detail, realizing multi-scale representation of spatial data based on user interest. The experimental results show that this method not only satisfies users' general-to-detail and important-to-secondary spatial cognitive demands, but also achieves a better multi-scale representation effect.
Microreactor and method for preparing a radiolabeled complex or a biomolecule conjugate
Energy Technology Data Exchange (ETDEWEB)
Reichert, David E; Kenis, Paul J. A.; Wheeler, Tobias D; Desai, Amit V; Zeng, Dexing; Onal, Birce C
2015-03-17
A microreactor for preparing a radiolabeled complex or a biomolecule conjugate comprises a microchannel for fluid flow, where the microchannel comprises a mixing portion comprising one or more passive mixing elements, and a reservoir for incubating a mixed fluid. The reservoir is in fluid communication with the microchannel and is disposed downstream of the mixing portion. A method of preparing a radiolabeled complex includes flowing a radiometal solution comprising a metallic radionuclide through a downstream mixing portion of a microchannel, where the downstream mixing portion includes one or more passive mixing elements, and flowing a ligand solution comprising a bifunctional chelator through the downstream mixing portion. The ligand solution and the radiometal solution are passively mixed while in the downstream mixing portion to initiate a chelation reaction between the metallic radionuclide and the bifunctional chelator. The chelation reaction is completed to form a radiolabeled complex.
Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)
2001-01-01
The rapid increase in digital data volumes from new and existing sensors necessitates efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates the three fractal dimension measurement methods implemented in ICAMS: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measures Moran's I and Geary's C. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces of higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
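Moran's I, one of the two autocorrelation measures evaluated, can be computed directly on a small raster with rook (4-neighbour) binary weights; clustered patterns yield positive values and a checkerboard yields negative ones. The grids below are invented for illustration:

```python
# Moran's I on a tiny grid with rook contiguity: I = (n/W) * sum_ij(z_i z_j) /
# sum_i(z_i^2) over neighbouring pairs, z being mean-deviations. Grids are
# illustrative, not ICAMS data.

def morans_i(grid):
    n_rows, n_cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    mean = sum(vals) / n
    dev = [[v - mean for v in row] for row in grid]
    num = w_sum = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n_rows and 0 <= nj < n_cols:
                    num += dev[i][j] * dev[ni][nj]
                    w_sum += 1.0
    den = sum((v - mean) ** 2 for v in vals)
    return (n / w_sum) * (num / den)

clustered = [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]]
checker = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
print(morans_i(clustered) > 0 > morans_i(checker))  # → True
```

A random grid would land near I ≈ -1/(n-1), the null expectation, between these two extremes.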
Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers
Sifaou, Houssem
2016-12-28
This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS) endowed with M antennas communicates with K single-antenna user equipments (UEs). Particularly, we aim at reducing the complexity of the linear precoder and receiver that maximize the minimum signal-to-interference-plus-noise ratio subject to a given power constraint. To this end, we consider the asymptotic regime in which M and K grow large with a given ratio. Tools from random matrix theory (RMT) are then used to compute, in closed form, accurate approximations for the parameters of the optimal precoder and receiver, when imperfect channel state information (modeled by the generic Gauss-Markov formulation) is available at the BS. The asymptotic analysis allows us to derive the asymptotically optimal linear precoder and receiver that are characterized by a lower complexity (due to the dependence on the large-scale components of the channel) and, possibly, by a better resilience to imperfect channel state information. However, the implementation of both is still challenging as it requires fast inversions of large matrices in every coherence period. To overcome this issue, we apply the truncated polynomial expansion (TPE) technique to the precoding and receiving vector of each UE and make use of RMT to determine the optimal weighting coefficients on a per-UE basis that asymptotically solve the max-min SINR problem. Numerical results are used to validate the asymptotic analysis in the finite system regime and to show that the proposed TPE transceivers efficiently mimic the optimal ones, while requiring much lower computational complexity.
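The core computational trick behind TPE precoding, replacing a large matrix inverse with a low-order matrix polynomial, can be illustrated with a small regularized zero-forcing example. This is a generic numpy sketch under our own assumptions (a plain Neumann-series truncation with a convergence-safe regularizer), not the paper's RMT-optimized per-UE coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 32, 8                      # BS antennas, single-antenna UEs
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

A = H.conj().T @ H                # M x M Gram matrix of the channel
rho = 2.0 * np.linalg.norm(A, 2)  # regularizer chosen larger than ||A|| so the series converges

# Exact regularized zero-forcing precoder: (A + rho*I)^(-1) H^H
W_exact = np.linalg.solve(A + rho * np.eye(M), H.conj().T)

# Truncated polynomial (Neumann-series) approximation:
# (A + rho*I)^(-1) = (1/rho) * sum_l (-A/rho)^l, truncated at J terms,
# applied directly to H^H so only repeated matrix products are needed.
J = 30
W_tpe = np.zeros_like(W_exact)
term = H.conj().T / rho
for _ in range(J):
    W_tpe += term
    term = -(A @ term) / rho

err = np.linalg.norm(W_tpe - W_exact) / np.linalg.norm(W_exact)
print(err)
```

With rho above the spectral norm of A, thirty terms already reproduce the exact precoder to high relative accuracy; the actual TPE transceivers in the paper instead optimize the polynomial coefficients per UE via RMT, which is what removes the per-coherence-period inversion cost.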
Kim, S. O.; Shim, K. M.; Shin, Y. S.; Yun, J. I.
2015-12-01
Adequate downscaling of synoptic forecasts is a prerequisite for improved agrometeorological service to rural areas in South Korea where complex terrain and small farms are common. Geospatial schemes based on topoclimatology were used to scale down the Korea Meteorological Administration (KMA) temperature forecasts to the local scale (~30 m) across a rural catchment. Local temperatures were estimated at 14 validation sites at 0600 and 1500 LST in 2013/2014 using these schemes and were compared with observations. A substantial reduction in the estimation error was found for both 0600 and 1500 temperatures compared with uncorrected KMA products. Improvement was most remarkable at low-lying locations for the 0600 temperature and at locations on west- and south-facing slopes for the 1500 temperature. Using the downscaled real-time temperature data, a pilot service has started to provide field-specific weather information tailored to meet the requirements of small-scale farms. For example, the service system produces a daily outlook on the phenology of crop species grown in a given field using the field-specific temperature data. When the temperature forecast is given for the next morning, a frost risk index is calculated according to a known phenology-frost injury relationship. If the calculated index is higher than a pre-defined threshold, a warning is issued and delivered to the grower's cellular phone with relevant countermeasures to help protect crops against frost damage. The system was implemented for a topographically complex catchment of 350 km² with diverse agricultural activities, and more than 400 volunteer farmers are participating in this pilot service to access user-specific weather information.
A multiple-scale power series method for solving nonlinear ordinary differential equations
Directory of Open Access Journals (Sweden)
Chein-Shan Liu
2016-02-01
The power series solution is a cheap and effective method for solving nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by introducing multiple scales $R_k$ in the power terms $(t/R_k)^k$, which are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. The method avoids multiplying a huge value by a tiny value, thereby decreasing the numerical instability that is the main cause of failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.
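The conditioning benefit of introducing a scale into the power basis can be checked directly by comparing condition numbers of collocation (Vandermonde-type) matrices built from $t^k$ and from $(t/R)^k$. This is a minimal sketch with a single scale R, whereas the paper derives a distinct scale $R_k$ per power; the interval and degree are our own choices.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 21)      # collocation points on [0, 10]
k = np.arange(10)                    # polynomial orders 0..9

V_raw = t[:, None] ** k              # conventional monomial basis t^k
R = t.max()                          # one global scale; the paper uses order-dependent R_k
V_scaled = (t[:, None] / R) ** k     # scaled basis (t/R)^k keeps entries in [0, 1]

# The scaled basis avoids the huge-times-tiny products (t^9 reaches 1e9 here)
# that make the raw least-squares system ill-conditioned.
print(np.linalg.cond(V_raw), np.linalg.cond(V_scaled))
```

The raw matrix mixes columns spanning nine orders of magnitude, while the scaled matrix keeps every entry in [0, 1], which is exactly the "huge value times tiny value" pathology the abstract describes.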
EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST
Directory of Open Access Journals (Sweden)
C. H. Tu
2012-07-01
The prediction of species distribution has become a focus in ecology. To predict distributions more effectively and accurately, novel methods have been proposed recently, such as the support vector machine (SVM) and maximum entropy (MAXENT). However, high complexity in the forest, as in Taiwan, makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in a complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), which grows widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLCs. We evaluated these models with two sets of independent samples from different sites and examined the effect of forest complexity by changing the background sample size (BSZ). In the less complex setting (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT maintained a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limited the predicted habitat to areas close to the samples. Therefore, the MAXENT model is more applicable for predicting a species' potential habitat in a complex forest, whereas the SVM and DT models tend to underestimate the potential habitat of LLCs.
Ishida, Akihiko; Yamada, Yasuko; Kamidate, Tamio
2008-11-01
In hygiene management, there has recently been a significant need for screening methods for microbial contamination by visual observation or with commonly used colorimetric apparatus. The amount of adenosine triphosphate (ATP) can serve as an index of microbial contamination. This paper describes the development of a colorimetric method for the assay of ATP, using enzymatic cycling and Fe(III)-xylenol orange (XO) complex formation. The color characteristics of the Fe(III)-XO complexes, which show a distinct color change from yellow to purple, assist the visual observation in screening work. In this method, a trace amount of ATP was converted to pyruvate, which was further amplified exponentially with coupled enzymatic reactions. Eventually, pyruvate was converted to the Fe(III)-XO complexes through the pyruvate oxidase reaction and Fe(II) oxidation. The assay result is read as a yellow or purple color: yellow indicates that the ATP concentration is lower than the test criterion, and purple indicates that it is higher. The method was applied to the assay of ATP extracted from Escherichia coli cells added to cow milk.
Network reliability analysis of complex systems using a non-simulation-based method
International Nuclear Information System (INIS)
Kim, Youngsuk; Kang, Won-Hee
2013-01-01
Civil infrastructures such as transportation, water supply, sewers, telecommunications, and electrical and gas networks often establish highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to support proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
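The RDA decomposes system events into disjoint link-set and cut-set events; its simplest relative, the classic edge-factoring recursion for two-terminal reliability, conveys the flavor of such exact, non-simulation methods. The sketch below is our own illustration of that simpler recursion, not the authors' multi-node-pair algorithm, and is only practical for small networks.

```python
def two_terminal_reliability(edges, s, t):
    """Exact s-t connectivity reliability by conditioning on one edge at a time.

    edges: list of (u, v, p) tuples, where p is the probability the edge works.
    """
    if s == t:
        return 1.0            # terminals already merged: connected for sure
    if not edges:
        return 0.0            # no edges left: s and t cannot be connected
    (u, v, p), rest = edges[0], edges[1:]
    # Branch 1: the edge fails, so delete it.
    r_del = two_terminal_reliability(rest, s, t)
    # Branch 2: the edge works, so contract it (merge node v into node u).
    merged = [(u if a == v else a, u if b == v else b, q) for a, b, q in rest]
    r_con = two_terminal_reliability(merged, u if s == v else s, u if t == v else t)
    return p * r_con + (1.0 - p) * r_del

# Two edges in series: 0.9 * 0.9 = 0.81
print(two_terminal_reliability([(0, 1, 0.9), (1, 2, 0.9)], 0, 2))
# Two edges in parallel: 1 - 0.1 * 0.1 = 0.99
print(two_terminal_reliability([(0, 1, 0.9), (0, 1, 0.9)], 0, 1))
```

Because the two branches are disjoint events, the recursion yields the exact reliability rather than a sampled estimate, which is the same property that makes RDA-style decomposition attractive for seismic network assessment.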
Viegas, Carla; Sabino, Raquel; Botelho, Daniel; dos Santos, Mateus; Gomes, Anita Quintal
2015-09-01
Cork oak is the second most dominant forest species in Portugal and makes this country the world leader in cork export. Occupational exposure to Chrysonilia sitophila and the Penicillium glabrum complex in the cork industry is common, and the latter fungus is associated with suberosis. However, as conventional methods seem to underestimate its presence in occupational environments, the aim of our study was to see whether information obtained by polymerase chain reaction (PCR), a molecular-based method, can complement conventional findings and give better insight into the occupational exposure of cork industry workers. We assessed fungal contamination with the P. glabrum complex in three cork manufacturing plants on the outskirts of Lisbon using both conventional and molecular methods. Conventional culturing failed to detect the fungus at six sampling sites at which PCR did detect it. This confirms our assumption that the use of complementary methods can provide information for a more accurate assessment of occupational exposure to the P. glabrum complex in the cork industry.
Directory of Open Access Journals (Sweden)
I. L. Dyachok
2016-08-01
Aim. To develop a sensitive, economical and rapid method for the quantitative determination of organic acids, expressed as isovaleric acid, in a complex poly-herbal extract with the use of digital technologies. Materials and methods. A model complex poly-herbal extract of sedative action was chosen as the research object. The extract is composed of these medicinal plants: Valeriana officinalis L., Crataegus, Melissa officinalis L., Hypericum, Mentha piperita L., Humulus lupulus, Viburnum. Based on the chemical composition of the plant components, we consider the main pharmacologically active compounds of the extract to be: polyphenolic substances (flavonoids), contained in Crataegus, Viburnum, Hypericum, Mentha piperita L. and Humulus lupulus; organic acids, including isovaleric acid, contained in Valeriana officinalis L., Mentha piperita L., Melissa officinalis L. and Viburnum; and amino acids, contained in Valeriana officinalis L. For the determination of organic acid content at low concentrations we applied an instrumental method of analysis, namely conductometric titration, based on the dependence of the conductivity of an aqueous solution of the complex poly-herbal extract on its organic acid content. Results. The obtained analytical relationships, which describe the tangent lines to the conductometric curve before and after the equivalence point, allow the volume of titrant expended to be determined and the quantitative determination of organic acids to be carried out in digital mode. Conclusion. The proposed method makes it possible to locate the equivalence point and quantitatively determine organic acids, expressed as isovaleric acid, with the use of digital technologies, which allows the method as a whole to be computerized.
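The equivalence-point construction described above, fitting straight lines to the conductivity readings on either side of the break and intersecting them, can be sketched numerically. The synthetic titration curve, slopes and branch split below are our own assumptions for illustration, not the paper's data.

```python
import numpy as np

# Synthetic conductometric titration curve: conductivity falls before the
# equivalence point (placed at V = 5.0 mL here) and rises after it.
v = np.linspace(0.0, 10.0, 41)
kappa = np.where(v < 5.0, 10.0 - 0.8 * v, 6.0 + 0.6 * (v - 5.0))

# Fit tangent lines to the two branches, using points well away from the break.
m1, b1 = np.polyfit(v[v < 4.0], kappa[v < 4.0], 1)
m2, b2 = np.polyfit(v[v > 6.0], kappa[v > 6.0], 1)

# The intersection of the two fitted lines gives the titrant volume at equivalence.
v_eq = (b2 - b1) / (m1 - m2)
print(v_eq)  # -> 5.0 (the equivalence point built into the synthetic data)
```

Working from fitted lines rather than a single reading is what makes the digital procedure robust to noise near the break, where individual conductivity points are least informative.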
International Nuclear Information System (INIS)
Samin; Kris-Tri-Basuki; Farida-Ernawati
1996-01-01
The influence of atomic number on complex formation constants, and its application in a visible spectrophotometric method, has been studied. Complex compounds of Y, Nd, Sm and Gd with alizarin red sulfonic acid were prepared in the mole fraction range 0.20-0.53 and pH range 3.5-5. The optimum conditions for complex formation were found to be a mole fraction range of 0.30-0.53, pH 3.75-5, and a total concentration of 0.00030 M. The formation constants (β) of the alizarin red S complexes, determined by the continuous variation and matrix disintegration techniques, were β = (7.00 ± 0.64)×10⁹ for ₃₉Y, (4.09 ± 0.34)×10⁸ for ₆₀Nd, (7.26 ± 0.42)×10⁸ for ₆₂Sm and (8.38 ± 0.70)×10⁸ for ₆₄Gd. Although Y has the smallest atomic number (39), its complex formation constant is the largest; among the lanthanides the formation constant increases from Nd to Sm to Gd. The complex compounds can be used for sample analysis, with detection limits of Y: 2.2×10⁻⁵ M, Nd: 2.9×10⁻⁵ M, Sm: 2.6×10⁻⁵ M and Gd: 2.4×10⁻⁵ M. The sensitivity of analysis is Y > Gd > Sm > Nd. The Y₂O₃ product obtained from xenotime sand contains 98.96 ± 1.40 % Y₂O₃, and the filtrate (product of monazite sand) contains Nd: 0.27 ± 0.002 M
Directory of Open Access Journals (Sweden)
Hassan Badreddine
2017-01-01
The current work focuses on the development and application of a new finite volume immersed boundary method (IBM) to simulate three-dimensional fluid flows and heat transfer around complex geometries. First, the discretization of the governing equations based on the second-order finite volume method on a Cartesian, structured, staggered grid is outlined, followed by a description of the modifications which have to be applied to the discretized system once a body is immersed into the grid. To validate the new approach, the heat conduction equation with a source term is solved inside a cavity with an immersed body. The approach is then tested for a natural convection flow in a square cavity with and without a circular cylinder for different Rayleigh numbers. The results computed with the present approach compare very well with the benchmark solutions. As a next step in the validation procedure, the method is tested for Direct Numerical Simulation (DNS) of a turbulent flow around a surface-mounted matrix of cubes. The results computed with the present method compare very well with Laser Doppler Anemometry (LDA) measurements of the same case, showing that the method can be used for scale-resolving simulations of turbulence as well.
Estimating the complexity of 3D structural models using machine learning methods
Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques
2016-04-01
Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, the structural complexity index is estimated using machine learning methods on raw geo-data. More precisely, the metric for measuring complexity is approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to reproduce the actual model at a given precision, without error, using machine learning algorithms.
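The idea of complexity as prediction difficulty can be sketched with a toy experiment: train a simple learner on a subset of sampled points from a synthetic "geological" model and take its holdout error as the complexity index. The 1-nearest-neighbour learner and the two synthetic models below are our own drastic simplifications of the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def complexity_index(labels_of):
    """Holdout 1-NN error when predicting a model's unit labels from 200 samples."""
    pts = rng.uniform(0.0, 1.0, size=(400, 2))       # sampled (x, y) locations
    labels = labels_of(pts)
    train, test = pts[:200], pts[200:]
    y_train, y_test = labels[:200], labels[200:]
    # 1-nearest-neighbour prediction of the unit label at each holdout location
    d = ((test[:, None, :] - train[None, :, :]) ** 2).sum(axis=2)
    pred = y_train[d.argmin(axis=1)]
    return (pred != y_test).mean()                    # error rate = complexity proxy

# A flat-layered model is easy to predict from partial data; a finely folded one is not.
layered = lambda p: (p[:, 1] > 0.5).astype(int)
folded = lambda p: ((np.sin(40.0 * p[:, 0]) + 2.0 * p[:, 1]) > 1.0).astype(int)

c_easy = complexity_index(layered)
c_hard = complexity_index(folded)
print(c_easy, c_hard)
```

The folded model scores a much higher error, mirroring the paper's claim that the amount of partial data needed to reproduce a model tracks its structural complexity.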
Methods for large-scale international studies on ICT in education
Pelgrum, W.J.; Plomp, T.; Voogt, Joke; Knezek, G.A.
2008-01-01
International comparative assessment is a research method applied for describing and analyzing educational processes and outcomes. Such assessments are used to 'describe the status quo' in educational systems from an international comparative perspective. This chapter reviews different large-scale international studies.
A fast method for large-scale isolation of phages from hospital ...
African Journals Online (AJOL)
This plaque-forming method could be adopted to isolate E. coli phage easily, rapidly and in large quantities. Among the 18 isolated E. coli phages, 10 of them had a broad host range in E. coli and warrant further study. Key words: Escherichia coli phages, large-scale isolation, drug resistance, biological properties.
Non-Abelian Kubo formula and the multiple time-scale method
International Nuclear Information System (INIS)
Zhang, X.; Li, J.
1996-01-01
The non-Abelian Kubo formula is derived from kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern–Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed.
Detection of circulating immune complexes in breast cancer and melanoma by three different methods
Energy Technology Data Exchange (ETDEWEB)
Krapf, F; Renger, D; Fricke, M; Kemper, A; Schedel, I; Deicher, H
1982-08-01
By the simultaneous application of three methods, the C1q-binding test (C1q-BA), a two-antibody conglutinin-binding ELISA, and polyethylene-glycol 6000 precipitation with subsequent quantitative determination of immunoglobulins and complement factors in the redissolved precipitates (PPLaNT), circulating immune complexes could be demonstrated in the sera of 94% of patients with malignant melanoma and 75% of breast cancer patients. The specific detection rates of the individual methods varied between 23% (C1q-BA) and 46% (PPLaNT), presumably due to the presence of qualitatively different immune complexes in the investigated sera. Accordingly, the simultaneous use of the aforementioned assays resulted in increased diagnostic sensitivity and a doubling of the predictive value. Nevertheless, because of the relatively low incidence of malignant diseases in the total population, and because circulating immune complexes occur with considerable frequency in other, non-malignant diseases, tests for circulating immune complexes must be regarded as less useful parameters in the early diagnosis of cancer.
A ghost-cell immersed boundary method for flow in complex geometry
International Nuclear Information System (INIS)
Tseng, Y.-H.; Ferziger, Joel H.
2003-01-01
An efficient ghost-cell immersed boundary method (GCIBM) for simulating turbulent flows in complex geometries is presented. A boundary condition is enforced through a ghost-cell method. The reconstruction procedure allows systematic development of numerical schemes for treating the immersed boundary while preserving the overall second-order accuracy of the base solver. Both Dirichlet and Neumann boundary conditions can be treated. The current ghost-cell treatment is suitable for both staggered and non-staggered Cartesian grids. The accuracy of the current method is validated using flow past a circular cylinder and large eddy simulation of turbulent flow over a wavy surface. Numerical results are compared with experimental data and boundary-fitted grid results. The method is further extended to an existing ocean model (MITGCM) to simulate geophysical flow over a three-dimensional bump. The method is easily implemented, as evidenced by our use of several existing codes.
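The ghost-cell idea (assigning the cell just inside the body a value extrapolated so that the field interpolates to the prescribed boundary value exactly at the immersed interface) is easiest to see in 1D steady diffusion. The following is a deliberately minimal sketch with a Dirichlet wall and linear extrapolation; the paper's method is multidimensional with second-order reconstruction, and all names here are our own.

```python
import numpy as np

N = 41
x = np.linspace(0.0, 1.0, N)
x_w, phi_w = 0.304, 1.0             # immersed wall location (off-grid) and its Dirichlet value
phi_r = 0.0                         # regular boundary condition at x = 1

fluid = x > x_w                     # cells in the fluid region
g = np.where(fluid)[0][0] - 1       # ghost cell: last cell inside the body
f = g + 1                           # first fluid cell

phi = np.zeros(N)
phi[-1] = phi_r
for _ in range(20000):              # Jacobi iteration of d2(phi)/dx2 = 0
    # Ghost value by linear extrapolation so that phi = phi_w exactly at x = x_w:
    phi[g] = phi[f] + (phi_w - phi[f]) * (x[f] - x[g]) / (x[f] - x_w)
    new = phi.copy()
    new[f:-1] = 0.5 * (phi[f - 1:-2] + phi[f + 1:])
    phi = new

# The steady solution in the fluid is linear between (x_w, phi_w) and (1, phi_r).
exact = (1.0 - x) / (1.0 - x_w)
err = np.abs(phi[fluid] - exact[fluid]).max()
print(err)
```

Because the ghost value is constructed from the interface location rather than the nearest grid line, the boundary is honored at x_w even though x_w falls between grid points, which is the essential property that lets Cartesian solvers handle immersed geometry.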
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations.
Method for Hot Real-Time Analysis of Pyrolysis Vapors at Pilot Scale
Energy Technology Data Exchange (ETDEWEB)
Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-09-29
Pyrolysis oils contain more than 400 compounds, up to 60% of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated as additional condensation reactions occur during quenching and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors at the pilot scale, many challenges must be overcome.
International Nuclear Information System (INIS)
Puncher, M.; Birchall, A.; Bull, R. K.
2012-01-01
Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented via the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and the Q0.025 and Q0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 minutes on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)
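The weighted-likelihood sampling strategy behind WeLMoS can be sketched in miniature with a conjugate toy problem whose posterior is known in closed form. Everything below (the model, the observed value, the sample size) is our own illustration of prior-sampling-plus-likelihood-weighting, not the internal-dosimetry model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: parameter theta has a N(0, 1) prior; one measurement y = 2.0 is
# observed with unit Gaussian noise. Conjugacy gives the posterior N(1.0, 0.5).
y_obs, sigma = 2.0, 1.0

# Weighted-likelihood Monte Carlo: draw from the prior, weight each draw by its
# likelihood, then form weighted posterior summaries.
theta = rng.standard_normal(200_000)
w = np.exp(-0.5 * ((y_obs - theta) / sigma) ** 2)
post_mean = np.average(theta, weights=w)
post_var = np.average((theta - post_mean) ** 2, weights=w)

print(post_mean, post_var)  # close to the exact posterior mean 1.0 and variance 0.5
```

The appeal of the scheme, as in WeLMoS, is that the prior samples can be drawn once and reweighted cheaply for different likelihoods, avoiding the long per-case chains that make full MCMC take hours.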
CSIR Research Space (South Africa)
Wilke, DN
2012-07-01
Problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
F. Grigoli; Simone Cesca; Torsten Dahm; L. Krieger
2012-01-01
Determining the relative orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation for several acquisition setups, including linear arrays of geophones deployed in borehole installations or ocean bottom seismometers deployed at the seafloor. To solve this problem we propose a new inversion method based on a complex linear algebra approach. Relative orientation angles are retrieved by minimizing, in a least-squares sense, the l...
Adiabatic passage for a lossy two-level quantum system by a complex time method
International Nuclear Information System (INIS)
Dridi, G; Guérin, S
2012-01-01
Using a complex time method with the formalism of Stokes lines, we establish a generalization of the Davis–Dykhne–Pechukas formula which gives in the adiabatic limit the transition probability of a lossy two-state system driven by an external frequency-chirped pulse-shaped field. The conditions that allow this generalization are derived. We illustrate the result with the dissipative Allen–Eberly and Rosen–Zener models. (paper)