WorldWideScience

Sample records for complex scaling method

  1. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, the remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented.
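
    As a concrete illustration of the technique this record surveys, the sketch below applies uniform complex scaling to a one-dimensional model Hamiltonian on the half-line. The potential V(r) = 7.5 r² e^(−r) and all numerical parameters are assumptions chosen for illustration (this potential is a much-used test case in the complex-scaling literature); in the spirit of the localization idea, the resonance is identified as the eigenvalue least sensitive to the scaling angle θ.

```python
# A minimal sketch of uniform complex scaling, r -> r*exp(i*theta), for a model
# half-line Hamiltonian H(theta) = -exp(-2i*theta)/2 d^2/dr^2 + V(r*exp(i*theta))
# with the assumed test potential V(r) = 7.5 r^2 exp(-r)   (hbar = m = 1).
# Resonances appear as complex eigenvalues that are nearly stationary in theta.
import numpy as np

def H_theta(theta, R=20.0, N=600):
    r = np.linspace(R / N, R, N)                  # radial grid, Dirichlet ends
    dr = r[1] - r[0]
    V = 7.5 * (r * np.exp(1j * theta))**2 * np.exp(-r * np.exp(1j * theta))
    lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
           + np.diag(np.ones(N - 1), 1)) / dr**2
    return -0.5 * np.exp(-2j * theta) * lap + np.diag(V)

e1 = np.linalg.eigvals(H_theta(0.25))
e2 = np.linalg.eigvals(H_theta(0.35))
# The rotated continuum moves with theta; the resonance (almost) does not.
res = min(e1, key=lambda z: np.min(np.abs(e2 - z)))
print("resonance E - i*Gamma/2 ~", res)           # literature value: ~3.426 - 0.013i
```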

  2. Continuum Level Density in Complex Scaling Method

    International Nuclear Information System (INIS)

    Suzuki, R.; Myo, T.; Kato, K.

    2005-01-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique

  3. Level density in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki

    2005-01-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
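
    The CLD construction described here can be mimicked in a few lines. With a discretized complex-scaled spectrum {E_i = ε_i − iγ_i}, the smoothed level density is a sum of Lorentzians, ρ(E) = (1/π) Σ_i γ_i / ((E − ε_i)² + γ_i²); the CLD is Δ(E) = ρ(E) − ρ₀(E) with ρ₀ from the asymptotic Hamiltonian, and the phase shift follows from Δ(E) = (1/π) dδ/dE. The sketch below reuses an assumed model potential and finite-difference discretization; it illustrates the formula, not the authors' code.

```python
import numpy as np

def csm_eigs(V, theta=0.3, R=20.0, N=600):
    """Eigenvalues of a complex-scaled half-line Hamiltonian (finite differences)."""
    r = np.linspace(R / N, R, N)
    dr = r[1] - r[0]
    lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
           + np.diag(np.ones(N - 1), 1)) / dr**2
    return np.linalg.eigvals(-0.5 * np.exp(-2j * theta) * lap
                             + np.diag(V(r * np.exp(1j * theta))))

full = csm_eigs(lambda x: 7.5 * x**2 * np.exp(-x))   # assumed model potential
free = csm_eigs(lambda x: 0.0 * x)                   # asymptotic Hamiltonian

E = np.linspace(0.05, 8.0, 400)

def rho(eigs):
    """Smoothed level density from discretized complex eigenvalues (Lorentzians)."""
    eps, gam = eigs.real, -eigs.imag
    return (gam / np.pi / ((E[:, None] - eps)**2 + gam**2)).sum(axis=1)

cld = rho(full) - rho(free)                          # continuum level density
delta = np.pi * np.cumsum(cld) * (E[1] - E[0])       # phase shift, up to a constant
print("CLD peaks near the resonance energy:", E[np.argmax(cld)])
```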

  4. Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions

    International Nuclear Information System (INIS)

    Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor

    2004-01-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)

  5. Symmetrized complex amplitudes for He double photoionization from the time-dependent close coupling and exterior complex scaling methods

    International Nuclear Information System (INIS)

    Horner, D.A.; Colgan, J.; Martin, F.; McCurdy, C.W.; Pindzola, M.S.; Rescigno, T.N.

    2004-01-01

    Symmetrized complex amplitudes for the double photoionization of helium are computed by the time-dependent close-coupling and exterior complex scaling methods, and it is demonstrated that both methods are capable of the direct calculation of these amplitudes. The results are found to be in excellent agreement with each other and in very good agreement with results of other ab initio methods and experiment

  6. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  7. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  8. Methods Dealing with Complexity in Selecting Joint Venture Contractors for Large-Scale Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-01-01

    The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to "go it alone." As a consequence, joint venture contractors with diverse strengths and weaknesses cooperatively bid for projects. Understanding project complexity and deciding on the optimal joint venture contractor are challenging. This paper studies how to select joint venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one based on ideal points and the other on balanced ideal advantages. Both methods consider individual differences in expert judgment and contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China demonstrates how to apply the two methods and shows their advantages.
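
    The record does not spell out the two methods, but the flavor of an ideal-point ranking can be shown with a generic TOPSIS-style calculation. Everything below (the decision matrix, the weights, and the all-benefit attribute types) is a made-up illustration, not data from the HZMB case study.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate JV contractors,
# columns = attributes (all treated as benefit-type for simplicity).
X = np.array([[0.7, 0.6, 0.8],
              [0.9, 0.5, 0.6],
              [0.6, 0.8, 0.7]])
w = np.array([0.5, 0.3, 0.2])            # assumed attribute weights

Z = X / np.linalg.norm(X, axis=0)        # vector-normalize each attribute
Zw = Z * w
ideal, anti = Zw.max(axis=0), Zw.min(axis=0)
d_pos = np.linalg.norm(Zw - ideal, axis=1)   # distance to the ideal point
d_neg = np.linalg.norm(Zw - anti, axis=1)    # distance to the anti-ideal point
closeness = d_neg / (d_pos + d_neg)          # higher = closer to the ideal
print("ranking (best first):", np.argsort(-closeness))
```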

  9. Scale Development and Initial Tests of the Multidimensional Complex Adaptive Leadership Scale for School Principals: An Exploratory Mixed Method Study

    Science.gov (United States)

    Özen, Hamit; Turan, Selahattin

    2017-01-01

    This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…

  10. Recent developments in complex scaling

    International Nuclear Information System (INIS)

    Rescigno, T.N.

    1980-01-01

    Some recent developments in the use of complex basis function techniques to study resonance as well as certain types of non-resonant, scattering phenomena are discussed. Complex scaling techniques and other closely related methods have continued to attract the attention of computational physicists and chemists and have now reached a point of development where meaningful calculations on many-electron atoms and molecules are beginning to appear feasible

  11. A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things

    OpenAIRE

    Wang, Yongheng; Cao, Kening

    2014-01-01

    The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is design...

  12. Scale-dependent intrinsic entropies of complex time series.

    Science.gov (United States)

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
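
    For readers unfamiliar with the baseline that this paper builds on, a simplified coarse-grained multiscale entropy can be sketched as follows. The tolerance convention (r recomputed per scale), the template counting, and all parameters are assumptions; production MSE implementations differ in such details.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.15):
    """Simplified SampEn(m, r): -log of the chance that sequences matching
    for m points (Chebyshev distance <= r) also match for m + 1 points."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def pairs(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return ((d <= r).sum() - len(emb)) / 2      # unordered, no self-matches
    return -np.log(pairs(m + 1) / pairs(m))

def mse(x, scales=range(1, 11)):
    """Classic coarse-grained MSE: non-overlapping block averages per scale."""
    out = []
    for s in scales:
        n = len(x) // s
        cg = np.asarray(x[:n * s], float).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(cg))
    return out

rng = np.random.default_rng(0)
print(mse(rng.standard_normal(1000)))   # white noise: entropy falls with scale
```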

  13. Modified projective synchronization with complex scaling factors of uncertain real chaos and complex chaos

    International Nuclear Information System (INIS)

    Zhang Fang-Fang; Liu Shu-Tang; Yu Wei-Yong

    2013-01-01

    To increase the variety and security of communication, we present the definitions of modified projective synchronization with complex scaling factors (CMPS) of real chaotic systems and complex chaotic systems, where complex scaling factors establish a link between real chaos and complex chaos. Considering all situations of unknown parameters and pseudo-gradient condition, we design adaptive CMPS schemes based on the speed-gradient method for the real drive chaotic system and complex response chaotic system and for the complex drive chaotic system and the real response chaotic system, respectively. The convergence factors and dynamical control strength are added to regulate the convergence speed and increase robustness. Numerical simulations verify the feasibility and effectiveness of the presented schemes. (general)

  14. Continuum level density of a coupled-channel system in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Kruppa, Andras; Giraud, Bertrand G.

    2008-01-01

    We study the continuum level density (CLD) in the formalism of the complex scaling method (CSM) for coupled-channel systems. We apply the formalism to the ⁴He = [³H+p] + [³He+n] coupled-channel cluster model where there are resonances at low energy. Numerical calculations of the CLD in the CSM with a finite number of L² basis functions are consistent with the exact result calculated from the S-matrix by solving coupled-channel equations. We also study channel densities. In this framework, the extended completeness relation (ECR) plays an important role. (author)

  15. Three-body Coulomb breakup of 11Li in the complex scaling method

    International Nuclear Information System (INIS)

    Myo, Takayuki; Aoyama, Shigeyoshi; Kato, Kiyoshi; Ikeda, Kiyomi

    2003-01-01

    Coulomb breakup strengths of ¹¹Li into a three-body ⁹Li+n+n system are studied in the complex scaling method. We decompose the transition strengths into the contributions from three-body resonances, two-body "¹⁰Li+n" and three-body "⁹Li+n+n" continuum states. In the calculated results, we cannot find dipole resonances with a sharp decay width in ¹¹Li. There is a low-energy enhancement in the breakup strength, which is produced by both the two- and three-body continuum states. The enhancement given by the three-body continuum states is found to have a strong connection to the halo structure of ¹¹Li. The calculated breakup strength distribution is compared with the experimental data from MSU, RIKEN and GSI

  16. A study of complex scaling transformation using the Wigner representation of wavefunctions.

    Science.gov (United States)

    Kaprálová-Ždánská, Petra Ruth

    2011-05-28

    The complex scaling operator exp(−θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO²⁺ vibronic resonances. © 2011 American Institute of Physics

  17. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antenna arrays are employed in the system. This is particularly useful in antenna selection for large-scale MIMO communication systems.
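
    The paper's interactive multiple-parameter method is not reproduced in this record, but the point of avoiding exhaustive search can be illustrated with a standard greedy capacity-based selection, which evaluates on the order of kN candidate sets instead of C(N, k). The channel model and parameters below are assumptions.

```python
import numpy as np

def greedy_antenna_selection(H, k, snr=1.0):
    """Greedily pick k transmit antennas (columns of H) to maximize
    log det(I + (snr/k) * H_S H_S^H) -- a common capacity surrogate,
    not the paper's interactive multiple-parameter method."""
    n_rx, n_tx = H.shape
    selected = []
    for _ in range(k):
        best, best_cap = None, -np.inf
        for j in range(n_tx):
            if j in selected:
                continue
            Hs = H[:, selected + [j]]
            cap = np.linalg.slogdet(np.eye(n_rx)
                                    + (snr / k) * Hs @ Hs.conj().T)[1]
            if cap > best_cap:
                best, best_cap = j, cap
        selected.append(best)
    return selected, best_cap

rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))) / np.sqrt(2)
sel, cap = greedy_antenna_selection(H, k=8)
print(sel, cap / np.log(2), "bit/s/Hz")
```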

  18. Relativistic extension of the complex scaled Green's function method for resonances in deformed nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Min [Anhui University, School of Physics and Materials Science, Hefei (China); RIKEN Nishina Center, Wako (Japan); Shi, Xin-Xing; Guo, Jian-You [Anhui University, School of Physics and Materials Science, Hefei (China); Niu, Zhong-Ming [Anhui University, School of Physics and Materials Science, Hefei (China); Interdisciplinary Theoretical Science Research Group, RIKEN, Wako (Japan); Sun, Ting-Ting [Zhengzhou University, School of Physics and Engineering, Zhengzhou (China)

    2017-03-15

    We have extended the complex scaled Green's function method to the relativistic framework describing deformed nuclei, with the theoretical formalism presented in detail. We have checked the applicability and validity of the present formalism for the exploration of resonances in deformed nuclei. Furthermore, we have studied the dependence of resonances on nuclear deformation and the shape of the potential, which is helpful for recognizing the evolution of resonant levels from stable nuclei to exotic nuclei with axial quadrupole deformation. (orig.)

  19. Time-dependent approach to collisional ionization using exterior complex scaling

    International Nuclear Information System (INIS)

    McCurdy, C. William; Horner, Daniel A.; Rescigno, Thomas N.

    2002-01-01

    We present a time-dependent formulation of the exterior complex scaling method that has previously been used to treat electron-impact ionization of the hydrogen atom accurately at low energies. The time-dependent approach solves a driven Schroedinger equation, and scales more favorably with the number of electrons than the original formulation. The method is demonstrated in calculations for breakup processes in two dimensions (2D) and three dimensions for systems involving short-range potentials and in 2D for electron-impact ionization in the Temkin-Poet model for electron-hydrogen atom collisions

  20. Scaling up complex interventions: insights from a realist synthesis.

    Science.gov (United States)

    Willis, Cameron D; Riley, Barbara L; Stockton, Lisa; Abramowicz, Aneta; Zummach, Dana; Wong, Geoff; Robinson, Kerry L; Best, Allan

    2016-12-19

    … legislation, or agreements with new funding partners. This synthesis applies and advances theory, realist methods and the practice of scaling up complex interventions. Practitioners may benefit from a number of coordinated efforts, including conducting or commissioning evaluations at strategic moments, mobilising local and political support through relevant partnerships, and promoting ongoing knowledge exchange in peer learning networks. Action research studies guided by these findings, and studies on knowledge translation for realist syntheses, are promising future directions.

  1. Implementation of exterior complex scaling in B-splines to solve atomic and molecular collision problems

    International Nuclear Information System (INIS)

    McCurdy, C William; Martín, Fernando

    2004-01-01

    B-spline methods are now well established as widely applicable tools for the evaluation of atomic and molecular continuum states. The mathematical technique of exterior complex scaling has been shown, in a variety of other implementations, to be a powerful method with which to solve atomic and molecular scattering problems, because it allows the correct imposition of continuum boundary conditions without their explicit analytic application. In this paper, an implementation of exterior complex scaling in B-splines is described that can bring the well-developed technology of B-splines to bear on new problems, including multiple ionization and breakup problems, in a straightforward way. The approach is demonstrated for examples involving the continuum motion of nuclei in diatomic molecules as well as electronic continua. For problems involving electrons, a method based on Poisson's equation is presented for computing two-electron integrals over B-splines under exterior complex scaling

  2. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  3. Complex scaling in the cluster model

    International Nuclear Information System (INIS)

    Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

    1987-01-01

    To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation the complex scaling requires minor changes in the formulae and code. The finding of the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in ⁸Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs.

  4. Solving the three-body Coulomb breakup problem using exterior complex scaling

    Energy Technology Data Exchange (ETDEWEB)

    McCurdy, C.W.; Baertschy, M.; Rescigno, T.N.

    2004-05-17

    Electron-impact ionization of the hydrogen atom is the prototypical three-body Coulomb breakup problem in quantum mechanics. The combination of subtle correlation effects and the difficult boundary conditions required to describe two electrons in the continuum have made this one of the outstanding challenges of atomic physics. A complete solution of this problem in the form of a "reduction to computation" of all aspects of the physics is given by the application of exterior complex scaling, a modern variant of the mathematical tool of analytic continuation of the electronic coordinates into the complex plane that was used historically to establish the formal analytic properties of the scattering matrix. This review first discusses the essential difficulties of the three-body Coulomb breakup problem in quantum mechanics. It then describes the formal basis of exterior complex scaling of electronic coordinates as well as the details of its numerical implementation using a variety of methods including finite difference, finite elements, discrete variable representations, and B-splines. Given these numerical implementations of exterior complex scaling, the scattering wave function can be generated with arbitrary accuracy on any finite volume in the space of electronic coordinates, but there remains the fundamental problem of extracting the breakup amplitudes from it. Methods are described for evaluating these amplitudes. The question of the volume-dependent overall phase that appears in the formal theory of ionization is resolved. A summary is presented of accurate results that have been obtained for the case of electron-impact ionization of hydrogen as well as a discussion of applications to the double photoionization of helium.

  5. Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration

    KAUST Repository

    Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.

    2009-01-01

    The vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid …

  6. Comparison of MODIS and SWAT evapotranspiration over a complex terrain at different spatial scales

    Science.gov (United States)

    Abiodun, Olanrewaju O.; Guan, Huade; Post, Vincent E. A.; Batelaan, Okke

    2018-05-01

    In most hydrological systems, evapotranspiration (ET) and precipitation are the largest components of the water balance, which are difficult to estimate, particularly over complex terrain. In recent decades, the advent of remotely sensed data-based ET algorithms and distributed hydrological models has provided improved spatially upscaled ET estimates. However, information on the performance of these methods at various spatial scales is limited. This study compares the ET from the MODIS remotely sensed ET dataset (MOD16) with the ET estimates from a SWAT hydrological model on graduated spatial scales for the complex terrain of the Sixth Creek Catchment of the Western Mount Lofty Ranges, South Australia. ET from both models was further compared with the coarser-resolution AWRA-L model at catchment scale. The SWAT model analyses are performed on daily timescales with a 6-year calibration period (2000-2005) and 7-year validation period (2007-2013). Differences in ET estimation between the SWAT and MOD16 methods of up to 31, 19, 15, 11 and 9 % were observed at respectively 1, 4, 9, 16 and 25 km² spatial resolutions. Based on the results of the study, a spatial scale of confidence of 4 km² for catchment-scale evapotranspiration is suggested in complex terrain. Land cover differences, HRU parameterisation in AWRA-L and catchment-scale averaging of input climate data in the SWAT semi-distributed model were identified as the principal sources of weaker correlations at higher spatial resolution.
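
    The scale-dependence analysis reads, in miniature, like the sketch below: aggregate two gridded ET fields to successively coarser pixels and recompute their agreement. The synthetic fields are stand-ins for MOD16 and SWAT output; the correlation rising with aggregation scale mirrors the behavior reported above.

```python
import numpy as np

def block_mean(a, s):
    """Aggregate a 2-D field into s-by-s blocks (shape assumed divisible by s)."""
    h, w = a.shape
    return a.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

# Synthetic stand-ins for two 1 km resolution ET grids (shared signal + noise).
rng = np.random.default_rng(2)
base = rng.standard_normal((40, 40))
et_a = base + 0.5 * rng.standard_normal((40, 40))
et_b = base + 0.5 * rng.standard_normal((40, 40))

for s in (1, 2, 4, 5, 8):                    # 1, 4, 16, 25, 64 km^2 pixels
    a, b = block_mean(et_a, s), block_mean(et_b, s)
    r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
    print(f"{s * s:3d} km^2 pixels: r = {r:.2f}")   # agreement improves with scale
```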

  7. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis, (2) Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) Noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  8. Evaluation model of project complexity for large-scale construction projects in Iran - A Fuzzy ANP approach

    Directory of Open Access Journals (Sweden)

    Aliyeh Kazemi

    2016-09-01

    Construction projects have always been complex. As this complexity grows, the implementation of large-scale construction becomes harder. Hence, evaluating and understanding these complexities is critical. Correct evaluation of a project's complexity can provide executives and managers with a sound basis for decisions. Fuzzy analytic network process (ANP) is a logical and systematic approach toward defining, evaluating, and grading; it allows for analyzing complex systems and determining their complexity. In this study, by taking advantage of fuzzy ANP, effective indexes of complexity in large-scale construction projects in Iran have been determined and prioritized. The results show that the socio-political, project system interdependency, and technological complexity indexes rank as the top three. Furthermore, in a comparison of three major project types (commercial-administrative, hospital, and skyscraper), the hospital project was evaluated as the most complicated. This model is beneficial for professionals managing large-scale projects.

  9. Stationarity of resonant pole trajectories in complex scaling

    International Nuclear Information System (INIS)

    Canuto, S.; Goscinski, O.

    1978-01-01

    A reciprocity theorem relating the real parameters η and α that define the complex scaling transformation r → ηr e^{iα} in the theory of complex scaling for resonant states is demonstrated. The virial theorem is used in connection with the stationarity of the pole trajectory. The Stark broadening in the hydrogen atom using a basis set generated by Rayleigh-Schroedinger perturbation theory is treated as an example. 18 references

  10. Comparison of MODIS and SWAT evapotranspiration over a complex terrain at different spatial scales

    Directory of Open Access Journals (Sweden)

    O. O. Abiodun

    2018-05-01

    In most hydrological systems, evapotranspiration (ET) and precipitation are the largest components of the water balance, which are difficult to estimate, particularly over complex terrain. In recent decades, the advent of remotely sensed data-based ET algorithms and distributed hydrological models has provided improved spatially upscaled ET estimates. However, information on the performance of these methods at various spatial scales is limited. This study compares the ET from the MODIS remotely sensed ET dataset (MOD16) with the ET estimates from a SWAT hydrological model on graduated spatial scales for the complex terrain of the Sixth Creek Catchment of the Western Mount Lofty Ranges, South Australia. ET from both models was further compared with the coarser-resolution AWRA-L model at catchment scale. The SWAT model analyses are performed on daily timescales with a 6-year calibration period (2000–2005) and 7-year validation period (2007–2013). Differences in ET estimation between the SWAT and MOD16 methods of up to 31, 19, 15, 11 and 9 % were observed at respectively 1, 4, 9, 16 and 25 km² spatial resolutions. Based on the results of the study, a spatial scale of confidence of 4 km² for catchment-scale evapotranspiration is suggested in complex terrain. Land cover differences, HRU parameterisation in AWRA-L and catchment-scale averaging of input climate data in the SWAT semi-distributed model were identified as the principal sources of weaker correlations at higher spatial resolution.

  11. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating a network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data, that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
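
    Of the generation approaches named in this record, Kronecker product modeling is the easiest to sketch: a small initiator probability matrix is raised to a Kronecker power and edges are sampled independently. The initiator values below are assumptions chosen for illustration.

```python
import numpy as np

def kronecker_graph(P, k, rng):
    """Sample an adjacency matrix from the k-fold Kronecker power of the
    initiator probability matrix P (stochastic Kronecker graph)."""
    prob = P.copy()
    for _ in range(k - 1):
        prob = np.kron(prob, P)           # entrywise edge probabilities
    return (rng.random(prob.shape) < prob).astype(int)

P = np.array([[0.9, 0.5],
              [0.5, 0.2]])                # assumed 2x2 initiator matrix
A = kronecker_graph(P, k=8, rng=np.random.default_rng(3))
deg = A.sum(axis=1)
print(A.shape, "mean degree:", deg.mean())   # degrees come out heavy-tailed
```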

  12. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [The University of Texas at Austin

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
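
    A toy version of the "reduce then sample" idea: replace an (assumed) numerically low-rank linear forward map by its truncated SVD, then run a random-walk Metropolis sampler against the cheap surrogate. All dimensions, noise levels, and priors below are invented for illustration and bear no relation to the project's subsurface testbed.

```python
import numpy as np

rng = np.random.default_rng(8)
n_obs, n_par, r = 200, 50, 5
G = (rng.standard_normal((n_obs, r)) * [5, 4, 3, 2, 1]) @ rng.standard_normal((r, n_par))
G += 1e-3 * rng.standard_normal((n_obs, n_par))      # nearly rank-r forward map
x_true = rng.standard_normal(n_par)
y = G @ x_true + 0.1 * rng.standard_normal(n_obs)

U, s, Vt = np.linalg.svd(G)
Gr = (U[:, :r] * s[:r]) @ Vt[:r]                     # reduced-order surrogate

def log_post(x, fwd):                                # Gaussian likelihood + prior
    return -0.5 * np.sum((y - fwd @ x)**2) / 0.1**2 - 0.5 * np.sum(x**2)

x, lp, kept = np.zeros(n_par), log_post(np.zeros(n_par), Gr), []
for _ in range(5000):
    xp = x + 0.02 * rng.standard_normal(n_par)       # random-walk proposal
    lpp = log_post(xp, Gr)                           # surrogate, not full model
    if np.log(rng.random()) < lpp - lp:
        x, lp = xp, lpp
    kept.append(x)
post_mean = np.mean(kept[2500:], axis=0)
print("surrogate-posterior mean error:", np.linalg.norm(post_mean - x_true))
```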

  13. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
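
    Among the families surveyed, the singular value decomposition-based approach is simple enough to sketch end to end: collect trajectory snapshots of a system (here an assumed random stable linear one), build a POD basis from the dominant left singular vectors, and Galerkin-project the dynamics onto it. Nonlinear biochemical networks need more machinery (lumping, timescale separation), so treat this strictly as the linear cartoon of the idea.

```python
import numpy as np
from scipy.linalg import expm

n, r = 200, 10
rng = np.random.default_rng(4)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)   # stable system
x0 = rng.standard_normal(n)

# Snapshots of dx/dt = A x, then a POD basis from their SVD.
ts = np.linspace(0.0, 5.0, 51)
snaps = np.column_stack([expm(A * t) @ x0 for t in ts])
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
Ur = U[:, :r]                                # dominant r modes

Ar = Ur.T @ A @ Ur                           # Galerkin-projected operator
x_full = expm(A * ts[-1]) @ x0
x_red = Ur @ (expm(Ar * ts[-1]) @ (Ur.T @ x0))
print("relative error:", np.linalg.norm(x_full - x_red) / np.linalg.norm(x_full))
```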

  14. Renormalization Scale-Fixing for Complex Scattering Amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, Stanley J.; /SLAC; Llanes-Estrada, Felipe J.; /Madrid U.

    2005-12-21

    We show how to fix the renormalization scale for hard-scattering exclusive processes such as deeply virtual meson electroproduction by applying the BLM prescription to the imaginary part of the scattering amplitude and employing a fixed-t dispersion relation to obtain the scale-fixed real part. In this way we resolve the ambiguity in BLM renormalization scale-setting for complex scattering amplitudes. We illustrate this by computing the H generalized parton distribution at leading twist in an analytic quark-diquark model for the parton-proton scattering amplitude which can incorporate Regge exchange contributions characteristic of the deep inelastic structure functions.

  15. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity using the algebraic expression of the generator's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2ⁿ). On the other hand, the Berlekamp-Massey algorithm needs O(N²), where N (≈ 2ⁿ) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; the linear complexity is therefore generally given as an estimate. Because the linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of the linear complexity.
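
    For comparison with the linearization method discussed above, the O(N²) baseline is easy to state concretely: the Berlekamp-Massey algorithm computes the linear complexity of an observed binary sequence. A minimal GF(2) version:

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s (list of 0/1) via
    Berlekamp-Massey; cost O(N^2) in the sequence length N."""
    n = len(s)
    c = [1] + [0] * n            # current connection polynomial
    b = [1] + [0] * n            # previous connection polynomial
    L, m = 0, -1
    for i in range(n):
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]                 # discrepancy
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]             # c <- c + x^shift * b
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# A maximal-length LFSR stream should have linear complexity = register size.
print(berlekamp_massey([0, 0, 1, 0, 1, 1, 1] * 4))   # period-7 m-sequence -> 3
```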

  16. Scale-free crystallization of two-dimensional complex plasmas: Domain analysis using Minkowski tensors

    Science.gov (United States)

    Böbel, A.; Knapek, C. A.; Räth, C.

    2018-05-01

    Experiments of the recrystallization processes in two-dimensional complex plasmas are analyzed to rigorously test a recently developed scale-free phase transition theory. The "fractal-domain-structure" (FDS) theory is based on the kinetic theory of Frenkel. It assumes the formation of homogeneous domains, separated by defect lines, during crystallization and a fractal relationship between domain area and boundary length. For the defect number fraction and system energy a scale-free power-law relation is predicted. The long-range scaling behavior of the bond-order correlation function shows clearly that the complex plasma phase transitions are not of the Kosterlitz, Thouless, Halperin, Nelson, and Young type. Previous preliminary results obtained by counting the number of dislocations and applying a bond-order metric for structural analysis are reproduced. These findings are supplemented by extending the use of the bond-order metric to measure the defect number fraction and furthermore applying state-of-the-art analysis methods, allowing a systematic testing of the FDS theory with unprecedented scrutiny: A morphological analysis of lattice structure is performed via Minkowski tensor methods. Minkowski tensors form a complete family of additive, motion covariant and continuous morphological measures that are sensitive to nonlinear properties. The FDS theory is rigorously confirmed and predictions of the theory are reproduced extremely well. The predicted scale-free power-law relation between defect fraction number and system energy is verified for one more order of magnitude at high energies compared to the inherently discontinuous bond-order metric. It is found that the fractal relation between crystalline domain area and circumference is independent of the experiment, the particular Minkowski tensor method, and the particular choice of parameters. Thus, the fractal relationship seems to be inherent to two-dimensional phase transitions in complex plasmas. Minkowski …
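
    The defect-counting bookkeeping that underlies these measurements can be sketched with standard tools: in a 2-D particle layer, particles whose Voronoi cell has other than six neighbors count as defects. The jittered hexagonal lattice below is a stand-in for experimental particle positions; the Minkowski tensor analysis itself is not reproduced here.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(7)
nx = 30
pts = np.array([(i + 0.5 * (j % 2), j * np.sqrt(3) / 2)
                for i in range(nx) for j in range(nx)], float)
pts += 0.12 * rng.standard_normal(pts.shape)          # thermal-like jitter

vor = Voronoi(pts)
coord = np.zeros(len(pts), int)
for p, q in vor.ridge_points:                         # each ridge = neighbor pair
    coord[p] += 1
    coord[q] += 1

interior = np.all((pts > 2) & (pts < nx - 3), axis=1)  # crude boundary cut
defect_fraction = np.mean(coord[interior] != 6)
# Near zero for a cold crystal; grows with disorder (i.e., with system energy).
print("defect number fraction:", defect_fraction)
```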

  17. Complexity Analysis of Carbon Market Using the Modified Multi-Scale Entropy

    Directory of Open Access Journals (Sweden)

    Jiuli Yin

    2018-06-01

    Carbon markets provide a market-based way to reduce climate pollution. Subject to general market regulations, the major existing emission trading markets present complex characteristics. This paper analyzes the complexity of carbon markets by using multi-scale entropy, taking the pilot carbon markets in China as the example. A moving average is adopted to extract the scales due to the short length of the data set. Results show a low level of complexity, suggesting that China's pilot carbon markets are quite immature and lack market efficiency. However, the complexity varies over different time scales: China's carbon markets (except for the Chongqing pilot) are more complex in the short period than in the long term. Furthermore, the complexity level in most pilot markets increases as the markets develop, showing an improvement in market efficiency. All these results demonstrate that an effective carbon market is required for the full functioning of emission trading.

  18. The method of measurement and synchronization control for large-scale complex loading system

    International Nuclear Information System (INIS)

    Liao Min; Li Pengyuan; Hou Binglin; Chi Chengfang; Zhang Bo

    2012-01-01

    With the development of modern industrial technology, measurement and control systems have been widely used in high-precision, complex industrial control equipment and large-tonnage loading devices. A measurement and control system is often used to analyze the distribution of stress and displacement under complex bearing loads or in the complex mechanical structure itself. In the ITER GS mock-up with 5 flexible plates, for each load combination it is necessary to detect and measure potential slippage between the central flexible plate and the neighboring spacers, as well as between each pre-stressing bar and its neighboring plate. The measurement and control system consists of seven sets of EDC controllers and boards, a computer system, a 16-channel quasi-dynamic strain gauge, 25 sets of displacement sensors, and 7 sets of load and displacement sensors in the cylinders. This paper demonstrates the principles and methods by which the EDC220 digital controller achieves synchronization control, and the R and D process of the multi-channel loading control software and measurement software. (authors)

  19. Synchronization in node of complex networks consist of complex chaotic system

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qiang, E-mail: qiangweibeihua@163.com [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024 (China); Xie, Cheng-jun [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Liu, Hong-jun [School of Information Engineering, Weifang Vocational College, Weifang, 261041 (China); Li, Yan-hui [The Library, Weifang Vocational College, Weifang, 261041 (China)

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When complex networks realize synchronization, different components of the complex state variable synchronize up to different complex scaling functions through a designed complex feedback controller. This paper changes the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.
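
    The essential mechanism (synchronizing a response state to a drive state up to a complex scaling factor) can be seen in a two-node toy model. With drive ż = iωz, response ẇ = iωw + u, and feedback u = −k(w − αz), the error e = w − αz obeys ė = (iω − k)e and decays for k > 0. The sketch below simply integrates this; the networked, time-delayed version in the paper is far more general, and all parameters here are assumptions.

```python
import numpy as np

# Drive node: dz/dt = i*omega*z.  Response node: dw/dt = i*omega*w + u,
# with u = -k*(w - alpha*z) enforcing synchronization up to the complex factor alpha.
omega, k = 2.0, 5.0
alpha = 0.8 * np.exp(1j * np.pi / 4)      # prescribed complex scaling factor
dt, steps = 1e-3, 20000
z, w = 1.0 + 0.0j, -0.3 + 0.7j            # arbitrary initial states

for _ in range(steps):                    # explicit Euler integration
    u = -k * (w - alpha * z)
    z += dt * (1j * omega * z)
    w += dt * (1j * omega * w + u)

print("final |w - alpha*z| =", abs(w - alpha * z))   # ~0: synchronized
```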

  20. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  1. Tailoring Enterprise Systems Engineering Policy for Project Scale and Complexity

    Science.gov (United States)

    Cox, Renee I.; Thomas, L. Dale

    2014-01-01

    Space systems are characterized by varying degrees of scale and complexity. Accordingly, cost-effective implementation of systems engineering also varies depending on scale and complexity. Recognizing that systems engineering and integration happen everywhere and at all levels of a given system, and that the life cycle is an integrated process necessary to mature a design, the National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center (MSFC) has developed a suite of customized implementation approaches based on project scale and complexity. While it may be argued that a top-level systems engineering process is common to, and indeed desirable across, an enterprise for all space systems, implementation of that top-level process and the associated products developed as a result differ from system to system. The implementation approaches used for developing a scientific instrument necessarily differ from those used for a space station.

  2. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Biros, George [Univ. of Texas, Austin, TX (United States)

    2018-01-12

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a

  3. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  4. Linear-scaling quantum mechanical methods for excited states.

    Science.gov (United States)

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

    The poor scaling of many existing quantum mechanical methods with respect to the system size hinders their applications to large systems. In this tutorial review, we focus on the latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states are comprised of two categories, the time-domain and frequency-domain methods. The former solves the dynamics of the electronic systems in real time while the latter involves direct evaluation of electronic response in the frequency-domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in time- and frequency-domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using the non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of the convergence problem. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used, however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and

  5. Large-scale structure of the Taurus molecular complex. II. Analysis of velocity fluctuations and turbulence. III. Methods for turbulence

    International Nuclear Information System (INIS)

    Kleiner, S.C.; Dickman, R.L.

    1985-01-01

    The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large-scale velocity structure of the Taurus dark cloud complex traced by the present ¹³CO J = 1-0 observations of this region. The results obtained are discussed in the context of recent suggestions that widely observed correlations between molecular cloud line widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, involving the mapping of a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map. 61 references
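
    The core operation described here, taking the spatial autocorrelation of a velocity-centroid map, is a few lines with FFTs. The synthetic smoothed-noise map below stands in for an observed centroid map; its ACF decays over roughly the smoothing length, which is the kind of signature used to characterize the turbulent velocity field.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def acf2d(field):
    """Normalized 2-D autocorrelation of a map via the Wiener-Khinchin relation."""
    f = field - field.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(f))**2).real
    return np.fft.fftshift(acf / acf.flat[0])        # ACF(0, 0) = 1 at the center

# Synthetic stand-in for a velocity-centroid map with ~5-pixel structure.
rng = np.random.default_rng(9)
vmap = gaussian_filter(rng.standard_normal((128, 128)), sigma=5)
acf = acf2d(vmap)
print(np.round(acf[64, 64:70], 2))                   # decays over ~ the smoothing scale
```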

  6. Complex Formation Control of Large-Scale Intelligent Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Lei

    2012-01-01

    A new formation framework for large-scale intelligent autonomous vehicles is developed, which can realize complex formations while reducing data exchange. Using the proposed hierarchical formation method and the automatic dividing algorithm, vehicles are automatically divided into leaders and followers by exchanging information via a wireless network at the initial time. Leaders then form the formation's geometric shape using global formation information, and followers track their own virtual leaders to form line formations using local information. The formation control laws of leaders and followers are designed based on consensus algorithms. Moreover, collision-avoidance problems are considered and solved using artificial potential functions. Finally, a simulation example consisting of 25 vehicles shows the effectiveness of the theory.
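
    A minimal consensus-law sketch in the spirit of this framework: followers integrate a Laplacian feedback on their offset-corrected positions while a static leader pins the formation. The topology, gains, and target shape below are assumptions, and the artificial-potential collision avoidance is omitted.

```python
import numpy as np

# Positions are complex numbers (x + iy). Follower i applies the consensus law
#   dx_i/dt = -k * sum_j a_ij * ((x_i - d_i) - (x_j - d_j)),
# i.e. dx/dt = -k * L @ (x - d), with node 0 a static leader at the origin.
k, dt, steps = 1.0, 0.01, 3000
d = np.array([0, 1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])   # desired offsets (a cross)
A = np.ones((5, 5)) - np.eye(5)                       # assumed all-to-all topology
L = np.diag(A.sum(axis=1)) - A                        # graph Laplacian

rng = np.random.default_rng(5)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
x[0] = 0.0                                            # leader holds the origin
for _ in range(steps):
    err = x - d
    x[1:] -= dt * k * (L @ err)[1:]                   # followers update; leader fixed

print(np.round(x - d, 3))                             # -> all zeros: formation achieved
```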

  7. Complex transformation method and resonances in one-body quantum systems

    International Nuclear Information System (INIS)

    Sigal, I.M.

    1984-01-01

    We develop a new spectral deformation method in order to treat the resonance problem in one-body systems. Our result on the meromorphic continuation of matrix elements of the resolvent across the continuous spectrum overlaps considerably with an earlier result of E. Balslev [B] but our method is much simpler and more convenient, we believe, in applications. It is inspired by the local distortion technique of Nuttall-Thomas-Babbitt-Balslev, further developed in [B] but patterned on the complex scaling method of Combes and Balslev. The method is applicable to the multicenter problems in which each potential can be represented, roughly speaking, as a sum of exponentially decaying and dilation-analytic, spherically symmetric parts

  8. Evaluating the response of complex systems to environmental threats: the Σ II method

    International Nuclear Information System (INIS)

    Corynen, G.C.

    1983-05-01

    The Σ II method was developed to model and compute the probabilistic performance of systems that operate in a threatening environment. Although we emphasize the vulnerability of complex systems to earthquakes and to electromagnetic threats such as EMP (electromagnetic pulse), the method applies in general to most large-scale systems or networks that are embedded in a potentially harmful environment. Other methods exist for obtaining system vulnerability, but their complexity increases exponentially with system size. The complexity of the Σ II method is polynomial, and accurate solutions are now possible for problems for which current methods require the use of rough statistical bounds, confidence statements, and other approximations. For super-large problems, where the cost of precise answers may be prohibitive, a desired accuracy can be specified, and the Σ II algorithms will halt when that accuracy has been reached. We summarize the results of a theoretical complexity analysis - which is reported elsewhere - and validate the theory with computer experiments conducted both on worst-case academic problems and on more reasonable problems occurring in practice. Finally, we compare our method with the exact methods of Abraham and Nakazawa, and with current bounding methods, and we demonstrate the computational efficiency and accuracy of Σ II

  9. Gamma Ray Tomographic Scan Method for Large Scale Industrial Plants

    International Nuclear Information System (INIS)

    Moon, Jin Ho; Jung, Sung Hee; Kim, Jong Bum; Park, Jang Geun

    2011-01-01

    Gamma-ray tomography systems have been used to investigate chemical processes for the last decade. There have been many cases of gamma-ray tomography at laboratory scale, but not many at industrial scale. Non-tomographic equipment with gamma-ray sources is often used in process diagnosis; gamma radiography, gamma column scanning and the radioisotope tracer technique are examples of gamma-ray applications in industry. Despite the abundance of outdoor non-tomographic gamma-ray equipment, most gamma-ray tomographic systems have remained indoor instruments. However, as gamma tomography has developed, the demand for gamma tomography of real-scale plants has also increased. To develop an industrial-scale system, we introduced a gamma-ray tomographic system with fixed detectors and a rotating source. The general system configuration is similar to fourth-generation geometry, but the main effort has been made to enable instant installation of the system at a real-scale industrial plant. This work is a first attempt to apply fourth-generation industrial gamma tomographic scanning experimentally. Individual 0.5-inch NaI detectors were used for gamma-ray detection, arranged in a circle around the industrial plant. This tomographic scan method reduces mechanical complexity and requires much less space than a conventional CT; those properties make it easy to obtain measurement data for a real-scale plant

  10. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    Science.gov (United States)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior, including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
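
    For readers unfamiliar with the iterative scheme, the sketch below shows an implicit Newmark step with a fixed number of Newton-type iterations for an SDOF system (a generic textbook formulation, not the authors' hybrid-simulation code; the restoring-force model and the parameter values are illustrative):

        import numpy as np

        def newmark_fixed_iter(m, c, restoring, dt, n_steps, p_ext,
                               beta=0.25, gamma=0.5, n_iter=3):
            # integrate m*a + c*v + f(u) = p(t); restoring(u) returns (f, tangent k)
            u = v = 0.0
            f, _ = restoring(u)
            a = (p_ext[0] - c * v - f) / m
            hist = [u]
            for i in range(1, n_steps):
                # Newmark predictors (corresponding to a_new = 0)
                u_new = u + dt * v + dt**2 * (0.5 - beta) * a
                v_new = v + dt * (1.0 - gamma) * a
                a_new = 0.0
                for _ in range(n_iter):                    # fixed iteration count
                    f, k_t = restoring(u_new)
                    r = p_ext[i] - m * a_new - c * v_new - f      # residual force
                    k_eff = k_t + gamma / (beta * dt) * c + m / (beta * dt**2)
                    du = r / k_eff
                    u_new += du
                    v_new += gamma / (beta * dt) * du
                    a_new += du / (beta * dt**2)
                u, v, a = u_new, v_new, a_new
                hist.append(u)
            return np.array(hist)

        # usage with a nonlinear-elastic stand-in for a degrading specimen
        def softening_spring(u, k=100.0, fy=5.0):
            f = np.clip(k * u, -fy, fy)
            return f, (k if abs(k * u) < fy else 1e-6)

        t = np.linspace(0.0, 5.0, 1001)
        u_hist = newmark_fixed_iter(1.0, 0.2, softening_spring, t[1], len(t),
                                    10.0 * np.sin(2.0 * np.pi * t))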

  11. Multi-scale seismic tomography of the Merapi-Merbabu volcanic complex, Indonesia

    Science.gov (United States)

    Mujid Abdullah, Nur; Valette, Bernard; Potin, Bertrand; Ramdhan, Mohamad

    2017-04-01

    The Merapi-Merbabu volcanic complex includes Merapi, the most active volcano on Java Island, Indonesia, where the Indian plate subducts beneath the Eurasian plate. We present a preliminary multi-scale seismic tomography study of the substructure of the volcanic complex. The main objective of our study is to image the feeding paths of the volcanic complex at an intermediate scale, using the data from the dense network (about 5 km spacing) constituted by 53 stations of the French-Indonesian DOMERAPI experiment, complemented by data from the German-Indonesian MERAMEX project (134 stations) and from the Indonesia Tsunami Early Warning System (InaTEWS) stations located in the vicinity of the complex. The inversion was performed using the INSIGHT algorithm, which follows a non-linear least squares approach based on a stochastic description of data and model. In total, 1883 events and 41846 phases (26647 P and 15199 S) were processed, and a two-scale approach was adopted. The model obtained at regional scale is consistent with previous studies. We selected the most reliable regional model as the prior model for the local tomography, performed with a variant of the INSIGHT code. The algorithm of this code is based on the fact that inverting differences of data, when the errors are transported in probability, is equivalent to inverting the initial data while introducing specific correlation terms in the data covariance matrix. The local tomography provides images of the substructure of the volcanic complex with sufficiently good resolution to allow identification of a probable magma chamber at a depth of about 20 km.

  12. Multi-scale modeling with cellular automata: The complex automata approach

    NARCIS (Netherlands)

    Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.

    2008-01-01

    Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to

  13. Approaching complexity by stochastic methods: From biological systems to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)

    2011-09-15

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
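
    The reconstruction step (i) can be sketched very compactly: the drift and diffusion coefficients of a Langevin model are bin-conditioned moments of the increments. The snippet below is a generic illustration, not the authors' software; the Ornstein-Uhlenbeck test signal and the binning parameters are assumptions.

        import numpy as np

        def kramers_moyal(x, dt, bins=50, min_count=10):
            # estimate D1(x) and D2(x) of dx = D1 dt + sqrt(2 D2) dW from the
            # first two conditional moments of the increments in each bin
            dx = np.diff(x)
            edges = np.histogram_bin_edges(x[:-1], bins=bins)
            idx = np.digitize(x[:-1], edges[1:-1])
            d1 = np.full(bins, np.nan)
            d2 = np.full(bins, np.nan)
            for b in range(bins):
                sel = idx == b
                if sel.sum() >= min_count:
                    d1[b] = dx[sel].mean() / dt
                    d2[b] = (dx[sel] ** 2).mean() / (2.0 * dt)
            centers = 0.5 * (edges[:-1] + edges[1:])
            return centers, d1, d2

        # synthetic Ornstein-Uhlenbeck check: expect D1(x) = -x and D2(x) = 0.5
        rng = np.random.default_rng(0)
        dt, n = 1e-3, 200_000
        x = np.empty(n)
        x[0] = 0.0
        noise = rng.standard_normal(n - 1) * np.sqrt(dt)
        for i in range(n - 1):
            x[i + 1] = x[i] - x[i] * dt + noise[i]
        centers, d1, d2 = kramers_moyal(x, dt)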

  15. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  16. Electron-helium scattering in the S-wave model using exterior complex scaling

    International Nuclear Information System (INIS)

    Horner, Daniel A.; McCurdy, C. William; Rescigno, Thomas N.

    2004-01-01

    Electron-impact excitation and ionization of helium is studied in the S-wave model. The problem is treated in full dimensionality using a time-dependent formulation of the exterior complex scaling method that does not involve the solution of large linear systems of equations. We discuss the steps that must be taken to compute stable ionization amplitudes. We present total excitation, total ionization and single differential cross sections from the ground and n=2 excited states and compare our results with those obtained by others using a frozen-core model

  17. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

    Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time scale calculus builds a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and the nabla derivative; the delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the problem of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scale calculus and obtained the coefficients of the model. Here, two sets of coefficients arise for the same model, originating from the forward and backward jump operators, and they differ from each other; the difference corresponds to the total of the vertical deviations between the regression equations and the observed values under the forward and backward jump operators, divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We think that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.

  18. Multiple time scale methods in tokamak magnetohydrodynamics

    International Nuclear Information System (INIS)

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed

  19. Complex dewetting scenarios of ultrathin silicon films for large-scale nanoarchitectures.

    Science.gov (United States)

    Naffouti, Meher; Backofen, Rainer; Salvalaglio, Marco; Bottein, Thomas; Lodari, Mario; Voigt, Axel; David, Thomas; Benkouider, Abdelmalek; Fraj, Ibtissem; Favre, Luc; Ronda, Antoine; Berbezier, Isabelle; Grosso, David; Abbarchi, Marco; Bollani, Monica

    2017-11-01

    Dewetting is a ubiquitous phenomenon in nature; many different thin films of organic and inorganic substances (such as liquids, polymers, metals, and semiconductors) share this shape instability driven by surface tension and mass transport. Via templated solid-state dewetting, we frame complex nanoarchitectures of monocrystalline silicon on insulator with unprecedented precision and reproducibility over large scales. Phase-field simulations reveal the dominant role of surface diffusion as a driving force for dewetting and provide a predictive tool to further engineer this hybrid top-down/bottom-up self-assembly method. Our results demonstrate that patches of thin monocrystalline films of metals and semiconductors share the same dewetting dynamics. We also prove the potential of our method by nanotransfer molding of metal oxide xerogels on silicon and glass substrates. This method opens up the possibility of transferring these Si-based patterns onto different materials that do not usually undergo dewetting, offering great potential for microfluidic and sensing applications.

  20. Understanding the Complexity of Temperature Dynamics in Xinjiang, China, from Multitemporal Scale and Spatial Perspectives

    Directory of Open Access Journals (Sweden)

    Jianhua Xu

    2013-01-01

    Full Text Available Based on data observed at 51 meteorological stations in Xinjiang, China, during the period from 1958 to 2012, we investigated the complexity of temperature dynamics from temporal and spatial perspectives by using a comprehensive approach including the correlation dimension (CD), classical statistics, and geostatistics. The main conclusions are as follows. (1) The non-integer CD values indicate that the temperature dynamics constitute a complex and chaotic system that is sensitive to initial conditions. (2) The complexity of temperature dynamics decreases with increasing temporal scale. To describe the temperature dynamics, at least 3 independent variables are needed at the daily scale, whereas at least 2 independent variables are needed at the monthly, seasonal, and annual scales. (3) The spatial patterns of CD values at different temporal scales indicate that the complex temperature dynamics derive from the complex landform.
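
    The correlation dimension used above is commonly estimated with the Grassberger-Procaccia algorithm; a compact sketch is given below (illustrative, not the authors' code; the embedding dimension, delay and scaling range are assumptions, and the series should be kept to a few thousand points to bound the pairwise-distance cost):

        import numpy as np
        from scipy.spatial.distance import pdist

        def correlation_dimension(series, m=3, tau=1):
            # delay-embed the series in m dimensions, then fit the slope of
            # log C(r) vs log r, where C(r) is the fraction of pairs closer than r
            n = len(series) - (m - 1) * tau
            emb = np.column_stack([series[i * tau:i * tau + n] for i in range(m)])
            d = pdist(emb)                              # all pairwise distances
            r_scales = np.geomspace(np.percentile(d, 1), np.percentile(d, 50), 12)
            c = np.array([(d < r).mean() for r in r_scales])
            slope, _ = np.polyfit(np.log(r_scales), np.log(c), 1)
            return slope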

  1. Information geometric methods for complexity

    Science.gov (United States)

    Felice, Domenico; Cafaro, Carlo; Mancini, Stefano

    2018-03-01

    Research on the use of information geometry (IG) in modern physics has witnessed significant advances recently. In this review article, we report on the utilization of IG methods to define measures of complexity in both classical and, whenever available, quantum physical settings. A paradigmatic example of a dramatic change in complexity is given by phase transitions (PTs). Hence, we review both global and local aspects of PTs described in terms of the scalar curvature of the parameter manifold and the components of the metric tensor, respectively. We also report on the behavior of geodesic paths on the parameter manifold used to gain insight into the dynamics of PTs. Going further, we survey measures of complexity arising in the geometric framework. In particular, we quantify complexity of networks in terms of the Riemannian volume of the parameter space of a statistical manifold associated with a given network. We are also concerned with complexity measures that account for the interactions of a given number of parts of a system that cannot be described in terms of a smaller number of parts of the system. Finally, we investigate complexity measures of entropic motion on curved statistical manifolds that arise from a probabilistic description of physical systems in the presence of limited information. The Kullback-Leibler divergence, the distance to an exponential family and volumes of curved parameter manifolds, are examples of essential IG notions exploited in our discussion of complexity. We conclude by discussing strengths, limits, and possible future applications of IG methods to the physics of complexity.

  2. Cope's Rule and the Universal Scaling Law of Ornament Complexity.

    Science.gov (United States)

    Raia, Pasquale; Passaro, Federico; Carotenuto, Francesco; Maiorino, Leonardo; Piras, Paolo; Teresi, Luciano; Meiri, Shai; Itescu, Yuval; Novosolov, Maria; Baiano, Mattia Antonio; Martínez, Ricard; Fortelius, Mikael

    2015-08-01

    Luxuriant, bushy antlers, bizarre crests, and huge, twisting horns and tusks are conventionally understood as products of sexual selection. This view stems from both direct observation and from the empirical finding that the size of these structures grows faster than body size (i.e., ornament size shows positive allometry). We contend that the familiar evolutionary increase in the complexity of ornaments over time in many animal clades is decoupled from ornament size evolution. Increased body size comes with extended growth. Since growth scales to the quarter power of body size, we predicted that ornament complexity should scale according to the quarter power law as well, irrespective of the role of sexual selection in the evolution and function of the ornament. To test this hypothesis, we selected three clades (ammonites, deer, and ceratopsian dinosaurs) whose species bore ornaments that differ in terms of the importance of sexual selection to their evolution. We found that the exponent of the regression of ornament complexity to body size is the same for the three groups and is statistically indistinguishable from 0.25. We suggest that the evolution of ornament complexity is a by-product of Cope's rule. We argue that although sexual selection may control size in most ornaments, it does not influence their shape.

  3. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

    Highlights: • Two dimensionless uniformity factors are presented for heat exchanger networks. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by the Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors are deduced that describe heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping. Additionally, a novel algorithm that combines deterministic and stochastic optimization to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, named Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.
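
    The flavour of the hybrid deterministic/stochastic optimizer can be conveyed with a generic sketch (a stand-in for PPSO, not the paper's algorithm: the swarm parameters are arbitrary, and the coupling to stream grouping and the HEN objective is omitted). A particle swarm explores globally and Powell's derivative-free method polishes the best point found:

        import numpy as np
        from scipy.optimize import minimize

        def pso_with_powell_polish(objective, bounds, n_particles=30,
                                   n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            for _ in range(n_iter):
                g = pbest[pbest_f.argmin()]              # global best attractor
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
            # deterministic refinement: Powell's method needs no gradients
            # (the polish is unconstrained here; clip the result if bounds matter)
            res = minimize(objective, pbest[pbest_f.argmin()], method="Powell")
            return res.x, res.fun

        # usage on a toy objective in 10 dimensions
        x_opt, f_opt = pso_with_powell_polish(lambda p: ((p - 1.0) ** 2).sum(),
                                              [(-5.0, 5.0)] * 10)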

  4. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

    Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune algorithm complex modeling approach is proposed, and simulation studies on the calibration of multiple-output transducers are carried out using the developed complex modeling. The simulated and experimental results show that the immune algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.

  5. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    Full Text Available The concept of “large scale” obviously depends on the phenomenon we are interested in. For example, in the field of the foundations of thermodynamics from microscopic dynamics, the large spatial and time scales are of the order of fractions of a millimetre and of microseconds, respectively, or less, and are defined relative to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems, the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all cases a Zwanzig projection approach is, at least in principle, an effective tool for obtaining a class of universal smooth “large scale” dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degree-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundations-of-thermodynamics problems. However, in geophysical fluid dynamics, biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).

  6. Nonlinear dynamics of the complex multi-scale network

    Science.gov (United States)

    Makarov, Vladimir V.; Kirsanov, Daniil; Goremyko, Mikhail; Andreev, Andrey; Hramov, Alexander E.

    2018-04-01

    In this paper, we study the complex multi-scale network of nonlocally coupled oscillators for the appearance of chimera states. Chimera is a special state in which, in addition to the asynchronous cluster, there are also completely synchronous parts in the system. We show that the increase of nodes in subgroups leads to the destruction of the synchronous interaction within the common ring and to the narrowing of the chimera region.

  7. Parameter and State Estimation of Large-Scale Complex Systems Using Python Tools

    Directory of Open Access Journals (Sweden)

    M. Anushka S. Perera

    2015-07-01

    Full Text Available This paper discusses topics related to automating the parameter, disturbance and state estimation analysis of large-scale complex nonlinear dynamic systems using free programming tools. For large-scale complex systems, before implementing any state estimator, the system should be analyzed for structural observability, and the structural observability analysis can be automated using Modelica and Python. As a result of the structural observability analysis, the system may be decomposed into subsystems, some of which may be observable --- with respect to parameters, disturbances, and states --- while others may not. The state estimation process is carried out for the observable subsystems, and the optimal number of additional measurements is prescribed for the unobservable subsystems to make them observable. In this paper, an industrial case study is considered: the copper production process at Glencore Nikkelverk, Kristiansand, Norway. The copper production process is a large-scale complex system. It is shown how to implement various state estimators, in Python, to estimate parameters and disturbances, in addition to states, based on the available measurements.
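
    As a minimal example of the kind of estimator discussed (a generic linear Kalman filter, not the paper's Modelica/Python toolchain; all matrices are user-supplied model data), one predict/update cycle looks as follows:

        import numpy as np

        def kalman_step(x, P, u, y, A, B, C, Q, R):
            # prediction with the process model x' = A x + B u + w,  w ~ N(0, Q)
            x = A @ x + B @ u
            P = A @ P @ A.T + Q
            # correction with the measurement y = C x + v,  v ~ N(0, R)
            S = C @ P @ C.T + R                      # innovation covariance
            K = P @ C.T @ np.linalg.inv(S)           # Kalman gain
            x = x + K @ (y - C @ x)
            P = (np.eye(x.size) - K @ C) @ P
            return x, P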

  8. Complex dynamics of our economic life on different scales: insights from search engine query data.

    Science.gov (United States)

    Preis, Tobias; Reith, Daniel; Stanley, H Eugene

    2010-12-28

    Search engine query data deliver insight into the behaviour of individuals, who constitute the smallest scale of our economic life. Individuals submit several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010, offered by the search engine Google for scientific use, which provide information about our economic life at an aggregated collective level. We ask whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both the collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as complex systems of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with the weekly search volumes of the corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series, with which we find a clear tendency for search volume time series and transaction volume time series to show recurring patterns.

  9. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

    Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as to validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular Information Engineering (IE) deliverables with their counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.

  10. Predicting functional impairment in brain tumor surgery: the Big Five and the Milan Complexity Scale.

    Science.gov (United States)

    Ferroli, Paolo; Broggi, Morgan; Schiavolin, Silvia; Acerbi, Francesco; Bettamio, Valentina; Caldiroli, Dario; Cusin, Alberto; La Corte, Emanuele; Leonardi, Matilde; Raggi, Alberto; Schiariti, Marco; Visintini, Sergio; Franzini, Angelo; Broggi, Giovanni

    2015-12-01

    OBJECT The Milan Complexity Scale - a new practical grading scale designed to estimate the risk of neurological clinical worsening after surgery for tumor removal - is presented. METHODS A retrospective study was conducted on all elective consecutive surgical procedures for tumor resection between January 2012 and December 2014 at the Second Division of Neurosurgery at Fondazione IRCCS Istituto Neurologico Carlo Besta of Milan. A prospective database dedicated to reporting complications and all clinical and radiological data was retrospectively reviewed. The Karnofsky Performance Scale (KPS) was used to classify each patient's health status. Complications were divided into major and minor and recorded based on etiology and required treatment. A logistic regression model was used to identify possible predictors of clinical worsening after surgery, in terms of changes between the preoperative and discharge KPS scores. Statistically significant predictors were rated based on their odds ratios in order to build an ad hoc complexity scale. For each patient, a corresponding total score was calculated, and ANOVA was performed to compare the mean total scores between the improved/unchanged and worsened patients. Relative risk (RR) and chi-square statistics were employed to provide the risk of worsening after surgery for each total score. RESULTS The case series was composed of 746 patients (53.2% female; mean age 51.3 ± 17.1 years). The most common tumors were meningiomas (28.6%) and glioblastomas (24.1%). The mortality rate was 0.94%, the major complication rate was 9.1%, and the minor complication rate was 32.6%. Of 746 patients, 523 (70.1%) improved or remained unchanged, and 223 (29.9%) worsened. The following factors were found to be statistically significant predictors of the change in KPS scores: tumor size larger than 4 cm, cranial nerve manipulation, major brain vessel manipulation, posterior fossa location, and eloquent area involvement

  11. Flow and Transport in Complex Microporous Carbonates as a Consequence of Separation of Scales

    Science.gov (United States)

    Bijeljic, B.; Raeini, A. Q.; Lin, Q.; Blunt, M. J.

    2017-12-01

    Some of the most important examples of flow and transport in complex pore structures are found in subsurface applications such as contaminant hydrology, carbon storage and enhanced oil recovery. Carbonate rock structures contain most of the world's oil reserves, a considerable amount of water reserves, and potentially hold a storage capacity for carbon dioxide. However, this type of pore space is difficult to represent, due to complexities associated with a wide range of pore sizes and variations in connectivity, which poses a considerable challenge for quantitative predictions of transport across multiple scales. A new concept unifying X-ray tomography experiments and direct numerical simulation has been developed that relies on a full description of flow and solute transport at the pore scale. The differential imaging method (Lin et al. 2016) provides rich information on the microporous space, while advective and diffusive mass transport are simulated on micro-CT images of the pore space: the Navier-Stokes equations are solved for flow in the image voxels comprising the pore space, streamline-based simulation is used to account for advection, and diffusion is superimposed by random walk. Quantitative validation has been performed against analytical solutions for diffusion and by comparing model predictions with experimental NMR measurements in a dual-porosity beadpack. Furthermore, we discriminate the signatures of multi-scale transport behaviour for a range of carbonate rocks (Figure 1), dependent on the heterogeneity of the inter- and intra-grain pore space, the heterogeneity in the flow field, and the mass transfer characteristics of the porous media. Finally, we demonstrate the predictive capabilities of the model through an analysis that includes a number of probability density function (PDF) measures of non-Fickian flow and transport on the micro-CT images. In complex porous media a separation of scales exists, leading to flow and transport signatures that need to be described by

  12. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    Science.gov (United States)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  13. Self-similarity and scaling theory of complex networks

    Science.gov (United States)

    Song, Chaoming

    Scale-free networks have been studied extensively due to their relevance to many real systems as diverse as the World Wide Web (WWW), the Internet, biological and social networks. We present a novel approach to the analysis of scale-free networks, revealing that their structure is self-similar. This result is achieved by the application of a renormalization procedure which coarse-grains the system into boxes containing nodes within a given "size". Concurrently, we identify a power-law relation between the number of boxes needed to cover the network and the size of the box, defining a self-similar exponent, which classifies fractal and non-fractal networks. By using the concept of renormalization as a mechanism for the growth of fractal and non-fractal modular networks, we show that the key principle that gives rise to the fractal architecture of networks is a strong effective "repulsion" between the most connected nodes (hubs) on all length scales, rendering them very dispersed. We show that a robust network comprised of functional modules, such as a cellular network, necessitates a fractal topology, suggestive of an evolutionary drive for their existence. These fundamental properties help to understand the emergence of the scale-free property in complex networks.
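
    The box-covering renormalization described above can be prototyped directly. The sketch below uses a simple random-sequential burning variant, offered only as an illustration; the test graph and the radii are arbitrary, and Barabasi-Albert graphs are in fact non-fractal, so a clean power law should not be expected for them.

        import numpy as np
        import networkx as nx

        def box_count(G, r_b):
            # grow a box of radius r_b around an uncovered seed node until all
            # nodes are covered; returns the number of boxes N_B
            uncovered = set(G.nodes)
            n_boxes = 0
            while uncovered:
                seed = next(iter(uncovered))
                ball = nx.single_source_shortest_path_length(G, seed, cutoff=r_b)
                uncovered -= set(ball)
                n_boxes += 1
            return n_boxes

        # box dimension d_B from N_B(l_B) ~ l_B^(-d_B), with l_B = 2 r_b + 1
        G = nx.barabasi_albert_graph(2000, 2)
        radii = [1, 2, 3, 4]
        counts = [box_count(G, r) for r in radii]
        l_b = [2 * r + 1 for r in radii]
        d_b = -np.polyfit(np.log(l_b), np.log(counts), 1)[0]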

  14. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  15. Dual-scale Galerkin methods for Darcy flow

    Science.gov (United States)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  16. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach that overcomes the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha-1 yr-1 depending on the spatial location on the hillslope, and show only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimates by the 137Cs technique are more appropriate in reflecting both the spatial extent and the magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. The results show that the specific sediment rates of 0.6 t ha-1 yr-1 given by the model are in close agreement with the observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment, of 1 to 2 t ha-1 yr-1, are significantly lower than the results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  17. Connotations of pixel-based scale effect in remote sensing and the modified fractal-based analysis method

    Science.gov (United States)

    Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu

    2017-06-01

    Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technological system, the connotations of scale and of the scale effect in remote sensing are not yet fully understood. This paper therefore first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measurement for analyzing pixel-based scale. However, traditional fractal dimension calculation does not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes to use spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods (MFBM, the Modified Windowed Fractal Brownian Motion method based on the surface area, and MDBM, the Modified Windowed Double Blanket Method), the existing scale effect analysis method (the information entropy method) is used for evaluation, and six sub-regions of building areas and farmland areas cut out from QuickBird images are used as the experimental data. The results of the experiment show that both the fractal dimension and the information entropy present the same trend with decreasing spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. Therefore, the experimental results indicate that the modified fractal methods are effective in reflecting the pixel-based scale effect existing in remote sensing

  18. Complex-scaling of screened Coulomb potentials for resonance calculations utilizing the modified Bessel functions

    Science.gov (United States)

    Jiao, Li-Guang; Ho, Yew Kam

    2014-05-01

    The screened Coulomb potential (SCP) has been extensively used in atomic physics, nuclear physics, quantum chemistry and plasma physics. However, an accurate calculation for atomic resonances under SCP is still a challenging task for various methods. Within the complex-scaling computational scheme, we have developed a method utilizing the modified Bessel functions to calculate doubly-excited resonances in two-electron atomic systems with configuration interaction-type basis. To test the validity of our method, we have calculated S- and P-wave resonance states of the helium atom with various screening strengths, and have found good agreement with earlier calculations using different methods. Our present method can be applied to calculate high-lying resonances associated with high excitation thresholds of the He+ ion, and with high-angular-momentum states. The derivation and calculation details of our present investigation together with new results of high-angular-momentum states will be presented at the meeting. Supported by NSC of Taiwan.
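
    The mechanics of a complex-scaling resonance calculation are easy to demonstrate on a simpler model than the screened two-electron problem above. In the sketch below (an illustration only; the benchmark potential V(r) = 7.5 r^2 e^{-r}, the grid and the rotation angle are assumptions), the radial Hamiltonian is uniformly scaled, r -> r e^{i theta}, and the resonance appears as a theta-stable complex eigenvalue (reported in the literature near E = 3.426 - 0.013i a.u.), while the rotated continuum eigenvalues move with theta:

        import numpy as np

        def complex_scaled_spectrum(theta, n=800, rmax=30.0):
            # eigenvalues of H(theta) = -(1/2) e^{-2i theta} d^2/dr^2 + V(r e^{i theta})
            # on a uniform finite-difference grid with Dirichlet boundaries (a.u.)
            h = rmax / (n + 1)
            r = h * np.arange(1, n + 1)
            z = r * np.exp(1j * theta)
            v = 7.5 * z**2 * np.exp(-z)
            main = np.exp(-2j * theta) / h**2 + v
            off = -0.5 * np.exp(-2j * theta) / h**2 * np.ones(n - 1)
            H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
            return np.linalg.eigvals(H)

        E = complex_scaled_spectrum(theta=0.4)
        resonance = E[np.argmin(np.abs(E - (3.43 - 0.013j)))]
        print(resonance)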

  20. Curcumin complexation with cyclodextrins by the autoclave process: Method development and characterization of complex formation.

    Science.gov (United States)

    Hagbani, Turki Al; Nazzal, Sami

    2017-03-30

    One approach to enhancing curcumin (CUR) aqueous solubility is to use cyclodextrins (CDs) to form inclusion complexes, in which CUR is encapsulated as a guest molecule within the internal cavity of the water-soluble CD. Several methods have been reported for the complexation of CUR with CDs. Limited information, however, is available on the use of the autoclave process (AU) in complex formation. The aims of this work were therefore to (1) investigate and evaluate the AU cycle as a complex formation method to enhance CUR solubility; (2) compare the efficacy of the AU process with the freeze-drying (FD) and evaporation (EV) processes in complex formation; and (3) confirm CUR stability by characterizing CUR:CD complexes by NMR, Raman spectroscopy, DSC, and XRD. Significant differences were found in the saturation solubility of CUR from its complexes with CD prepared by the three complexation methods. The AU process yielded a complex with the expected chemical and physical fingerprints of a CUR:CD inclusion complex, maintained the chemical integrity and stability of CUR, and provided the highest solubility of CUR in water. Physical and chemical characterization of the AU complexes confirmed the encapsulation of CUR inside the CD cavity and the transformation of the crystalline CUR:CD inclusion complex to an amorphous form. It was concluded that the autoclave process, with its short processing time, could be used as an alternative and efficient method for drug:CD complexation. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data

    Directory of Open Access Journals (Sweden)

    Sungho Won

    2015-01-01

    Full Text Available Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successful findings have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a big hurdle in building disease prediction models. Recently, many statistical methods based on penalized regression have been proposed to tackle the so-called “large P and small N” problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the parameter space, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods for at least the diseases under consideration.
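
    A toy version of such a comparison is easy to set up with standard tools (a sketch only: the simulated genotype matrix, the effect sizes and the regularization strength are all hypothetical, and scikit-learn's penalized logistic regression stands in for the paper's methods):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # toy stand-in for a genotype matrix: n subjects, p SNPs coded 0/1/2
        rng = np.random.default_rng(0)
        n, p, n_causal = 400, 5000, 30               # "large P and small N"
        X = rng.integers(0, 3, size=(n, p)).astype(float)
        beta = np.zeros(p)
        beta[:n_causal] = rng.normal(0.0, 0.25, n_causal)
        lin = X @ beta
        y = (lin + rng.logistic(size=n) > np.median(lin)).astype(int)

        for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
            clf = LogisticRegression(penalty=penalty, C=0.05,
                                     solver=solver, max_iter=5000)
            auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
            print(penalty, round(auc, 3))            # LASSO-like vs ridge-like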

  2. Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration

    KAUST Repository

    Gasda, S. E.

    2009-04-23

    Large-scale implementation of geological CO2 sequestration requires quantification of risk and leakage potential. One potentially important leakage pathway for the injected CO2 involves existing oil and gas wells. Wells are particularly important in North America, where more than a century of drilling has created millions of oil and gas wells. Models of CO2 injection and leakage will involve large uncertainties in parameters associated with wells, and therefore a probabilistic framework is required. These models must be able to capture both the large-scale CO2 plume associated with the injection and the small-scale leakage problem associated with localized flow along wells. Within a typical simulation domain, many hundreds of wells may exist. One effective modeling strategy combines both numerical and analytical models with a specific set of simplifying assumptions to produce an efficient numerical-analytical hybrid model. The model solves a set of governing equations derived by vertical averaging with assumptions of a macroscopic sharp interface and vertical equilibrium. These equations are solved numerically on a relatively coarse grid, with an analytical model embedded to solve for wellbore flow occurring at the sub-gridblock scale. This vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid refinement for sub-scale features. Through a series of benchmark problems, we show that VESA compares well with traditional numerical simulations and to a semi-analytical model which applies to appropriately simple systems. We believe that the VESA model provides the necessary accuracy and efficiency for applications of risk analysis in many CO2 sequestration problems. © 2009 Springer Science+Business Media B.V.

  3. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Science.gov (United States)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are simple and straightforward. The classical multiple scales (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is, surprisingly, 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.
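
    Such comparisons need a numerical reference solution. A standard way to produce one (with illustrative parameter values, not those of the paper's examples) is to integrate the strongly nonlinear, damped, forced oscillator directly with a high-accuracy solver and read off the steady-state amplitude:

        import numpy as np
        from scipy.integrate import solve_ivp

        # reference ("exact") solution of a damped, forced Duffing-type oscillator:
        #   x'' + 2*mu*x' + x + eps*x**3 = F*cos(omega*t)
        mu, eps, F, omega = 0.5, 1.0, 0.5, 1.2

        def rhs(t, y):
            x, v = y
            return [v, -2.0 * mu * v - x - eps * x**3 + F * np.cos(omega * t)]

        sol = solve_ivp(rhs, (0.0, 100.0), [1.5, 0.0],
                        rtol=1e-10, atol=1e-12, dense_output=True)
        # steady-state amplitude, to compare with a first-order MTS approximation
        x_late = sol.sol(np.linspace(80.0, 100.0, 4001))[0]
        print("steady amplitude ~", 0.5 * (x_late.max() - x_late.min()))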

  4. Software quality assurance: in large scale and complex software-intensive systems

    NARCIS (Netherlands)

    Mistrik, I.; Soley, R.; Ali, N.; Grundy, J.; Tekinerdogan, B.

    2015-01-01

    Software Quality Assurance in Large Scale and Complex Software-intensive Systems presents novel and high-quality research related approaches that relate the quality of software architecture to system requirements, system architecture and enterprise-architecture, or software testing. Modern software

  5. Atmospheric dispersion modelling over complex terrain at small scale

    Science.gov (United States)

    Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.

    2014-03-01

    A previous study concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and the important surrounding topography at meso-scale (1:9000) revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal-mine topography with respect to its future expansion, together with the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points in vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality at the populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal-mine transformation. Thus, the impact of the coal-mine transformation on pollutant dispersion can be observed.

  6. TopoSCALE v.1.0: downscaling gridded climate data in complex terrain

    Science.gov (United States)

    Fiddes, J.; Gruber, S.

    2014-02-01

    Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and due to the lack of accurate meteorological forcing data at the site or scale at which it is required. Gridded data products produced by atmospheric models can fill this gap, but often not at a spatial resolution appropriate to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to the topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse rate and optionally disaggregated using a climatology approach. We test the method against unscaled grid-level data and a set of reference methods, using a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible, due to lack of
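
    The pressure-level interpolation step lends itself to a compact illustration. The sketch below is a simplified stand-in for TopoSCALE rather than its actual code: it interpolates temperature from coarse-grid pressure levels to fine-scale DEM elevations, and all values are illustrative.

```python
# Minimal sketch of elevation-based downscaling of a pressure-level variable
# (here air temperature) to fine-scale DEM points, in the spirit of the
# interpolation step described above. Not the TopoSCALE implementation.
import numpy as np

# Coarse atmospheric column: geopotential heights (m) and temperatures (K)
# at pressure levels for one grid cell (illustrative values).
level_height = np.array([500., 1000., 2000., 3000., 4000.])
level_temp = np.array([288., 284.5, 278., 271.5, 265.])

# Fine-scale DEM elevations (m) of the subgrid points inside this cell.
dem = np.array([620., 1450., 2380., 3100.])

# Linear interpolation of temperature in height along the column.
t_downscaled = np.interp(dem, level_height, level_temp)
print(t_downscaled)
```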

  7. Large-Scale Portfolio Optimization Using Multiobjective Evolutionary Algorithms and Preselection Methods

    Directory of Open Access Journals (Sweden)

    B. Y. Qu

    2017-01-01

    Portfolio optimization problems involve the selection of different assets to invest in, in order to simultaneously maximize the overall return and minimize the overall risk. The complexity of the optimal asset allocation problem increases with the number of assets available to select from. The optimization problem becomes computationally challenging when there are more than a few hundred assets to select from. To reduce the complexity of large-scale portfolio optimization, this paper proposes two asset preselection procedures that consider the return and risk of individual assets and their pairwise correlations, in order to remove assets that are unlikely to be selected into any portfolio. With these asset preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings were carried out. The experimental results show that with the proposed methods the simulation time is reduced while the return-risk trade-off performance is significantly improved. Meanwhile, the NMOEA/D outperforms the other compared algorithms on all experiments according to the comparative analysis.
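
    A plausible minimal version of such a preselection filter is sketched below. It is an assumption-laden illustration (the dominance rule and the 0.95 correlation threshold are hypothetical, not the paper's exact procedures) of how return, risk and pairwise correlation can prune a large asset universe before optimization.

```python
# Sketch of an asset preselection filter: drop assets that are dominated on
# (return, risk) and prune one asset from each highly correlated pair.
# Thresholds and the dominance rule are illustrative assumptions, not the
# exact procedures proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(1000, 500))   # daily returns, 500 assets

mu = returns.mean(axis=0)          # expected return per asset
sigma = returns.std(axis=0)        # risk proxy per asset
corr = np.corrcoef(returns.T)      # pairwise correlation matrix

keep = np.ones(returns.shape[1], dtype=bool)

# 1) Dominance filter: remove i if some j has higher return AND lower risk.
for i in range(len(mu)):
    if np.any((mu > mu[i]) & (sigma < sigma[i])):
        keep[i] = False

# 2) Correlation filter: in each highly correlated pair, keep the asset
#    with the better return/risk ratio.
idx = np.where(keep)[0]
for a in range(len(idx)):
    for b in range(a + 1, len(idx)):
        i, j = idx[a], idx[b]
        if keep[i] and keep[j] and corr[i, j] > 0.95:
            drop = i if mu[i] / sigma[i] < mu[j] / sigma[j] else j
            keep[drop] = False

print("assets kept:", keep.sum(), "of", len(mu))
```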

  8. Optimization of a method for preparing solid complexes of essential clove oil with β-cyclodextrins.

    Science.gov (United States)

    Hernández-Sánchez, Pilar; López-Miranda, Santiago; Guardiola, Lucía; Serrano-Martínez, Ana; Gabaldón, José Antonio; Nuñez-Delicado, Estrella

    2017-01-01

    Clove oil (CO) is an aromatic oily liquid used in the food, cosmetics and pharmaceutical industries for its functional properties. However, its disadvantages of pungent taste, volatility, light sensitivity and poor water solubility can be overcome by applying microencapsulation or complexation techniques. Essential CO was successfully solubilized in aqueous solution by forming inclusion complexes with β-cyclodextrins (β-CDs). Moreover, phase solubility studies demonstrated that essential CO also forms insoluble complexes with β-CDs. Based on these results, essential CO-β-CD solid complexes were prepared by the novel approach of microwave irradiation (MWI), followed by three different drying methods: vacuum oven drying (VO), freeze-drying (FD) or spray-drying (SD). FD was the best option for drying the CO-β-CD solid complexes, followed by VO and SD. MWI can be used efficiently to prepare essential CO-β-CD complexes with good yield on an industrial scale. © 2016 Society of Chemical Industry.

  9. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low complexity, medium complexity and high complexity. Image features are then extracted, and finally a function is established between the complexity value and the color feature model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its image features, and that the results obtained by this method agree well with human visual perception of complexity. Color image complexity thus has a certain reference value.

  10. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on Wi...

  11. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where scale factors only act on the related components of the integrated momenta. They can preserve exactly some first integrals of motion in discrete or continuous dynamical systems, so that rapid growth of roundoff or truncation errors is suppressed significantly. (general)
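
    The core idea, rescaling the integrated momenta so that a first integral is enforced exactly after each conventional step, can be illustrated on a harmonic oscillator. The sketch below (Python, illustrative model and step size, not the authors' formulation) applies a scale factor to the velocity only.

```python
# Sketch of a velocity scaling correction: after each crude integrator step,
# rescale the velocity so the energy first integral is enforced exactly.
# Harmonic oscillator H = (v**2 + x**2) / 2; illustrative example only.
import numpy as np

def euler_step(x, v, dt):
    return x + dt * v, v - dt * x     # plain Euler, energy drifts badly

x, v, dt = 1.0, 0.0, 0.01
E0 = 0.5 * (v**2 + x**2)              # conserved energy of the exact flow

for _ in range(100000):
    x, v = euler_step(x, v, dt)
    # Scale factor acting on the velocity only, chosen so that
    # (s*v)**2/2 + x**2/2 == E0 whenever the kinetic part allows it.
    kin = 2.0 * E0 - x**2
    if kin > 0.0 and v != 0.0:
        v = np.sign(v) * np.sqrt(kin)

print("relative energy error:", abs(0.5 * (v**2 + x**2) - E0) / E0)
```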

  12. Method of producing nano-scaled inorganic platelets

    Science.gov (United States)

    Zhamu, Aruna; Jang, Bor Z.

    2012-11-13

    The present invention provides a method of exfoliating a layered material (e.g., transition metal dichalcogenide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites.

  13. A Qualitative Method to Estimate HSI Display Complexity

    International Nuclear Information System (INIS)

    Hugo, Jacques; Gertman, David

    2013-01-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  14. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale coded apertures, at wavelengths where diffraction effects are significant, has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each line-of-sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for the other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset, and an electrical phase shift corresponding to each hole's position offset and the incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique to each direction and wavelength. Finally, the set of power patterns is summed to produce the full-waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale coded apertures onto large-scale focal plane arrays, to support the development and optimization of coded aperture masks and image reconstruction algorithms.
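
    In a paraxial point-hole picture, the scheme's structure (a reference pattern, translational shifts, per-hole phase factors, coherent summation, squaring) can be illustrated in a few lines. The sketch below assumes a Fresnel quadratic-phase reference field and illustrative geometry; it is not the authors' code.

```python
# Sketch of the coherent-summation idea: in the paraxial (Fresnel) point-hole
# model, the focal-plane field of a pinhole at r0 is a reference quadratic
# phase pattern translationally shifted to r0; fields from all open holes of
# the coded aperture are summed coherently and squared. Geometry values are
# illustrative.
import numpy as np

wavelength = 10e-6            # m (illustrative long-wave IR)
z = 0.1                       # mask-to-detector distance, m
k = 2 * np.pi / wavelength

# Detector grid.
n, pitch = 256, 25e-6
coords = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(coords, coords)

# Random binary coded aperture: hole centres on a coarse grid.
rng = np.random.default_rng(1)
hole_pitch = 200e-6
hx, hy = np.meshgrid((np.arange(16) - 8) * hole_pitch,
                     (np.arange(16) - 8) * hole_pitch)
open_mask = rng.random((16, 16)) < 0.5

field = np.zeros((n, n), dtype=complex)
for x0, y0 in zip(hx[open_mask], hy[open_mask]):
    # Shifted reference pattern: quadratic Fresnel phase centred on the hole.
    r2 = (X - x0) ** 2 + (Y - y0) ** 2
    field += np.exp(1j * k * r2 / (2 * z))

power = np.abs(field) ** 2    # diffraction pattern for one on-axis direction
print(power.shape, power.max())
```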

  15. Accessible methods for the dynamic time-scale decomposition of biochemical systems.

    Science.gov (United States)

    Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula

    2009-11-01

    The growing complexity of biochemical models calls for means to rationally dissect networks into meaningful and rather independent subnetworks. Such an approach should ensure an understanding of the system without resorting to heuristics. Important for the success of such an approach are its accessibility and the clarity of the presentation of the results. In order to achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method, as well as the more classical approach, has been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results, including 3D visualization. The methods are included in COPASI, which is free for academic use and available at www.copasi.org. Contact: irina.surovtsova@bioquant.uni-heidelberg.de. Supplementary data are available at Bioinformatics online.

  16. Psychometric validation of the Italian Rehabilitation Complexity Scale-Extended version 13

    Science.gov (United States)

    Agosti, Maurizio; Merlo, Andrea; Maini, Maurizio; Lombardi, Francesco; Tedeschi, Claudio; Benedetti, Maria Grazia; Basaglia, Nino; Contini, Mara; Nicolotti, Domenico; Brianti, Rodolfo

    2017-01-01

    In Italy at present, a well-known problem is the inhomogeneous provision of rehabilitative services, as stressed by the Ministry of Health (MoH), which requires appropriate criteria and parameters to plan rehabilitation actions. According to the Italian National Rehabilitation Plan, comorbidity, disability and clinical complexity should be assessed to define the patient's real needs. However, to date, clinical complexity is still difficult to measure with shared and validated tools. The study aims to psychometrically validate the Italian Rehabilitation Complexity Scale-Extended v13 (RCS-E v13) in order to meet the guidelines' requirements. An observational multicentre prospective cohort study was carried out, involving 8 intensive rehabilitation facilities of the Emilia-Romagna Region and 1712 in-patients [823 male (48%) and 889 female (52%), mean age 68.34 years (95% CI 67.69-69.00 years)] with neurological, orthopaedic and cardiological problems. The construct and concurrent validity of the RCS-E v13 were confirmed through its correlation with the Barthel Index (disability) and the Cumulative Illness Rating Scale (comorbidity), and through appropriate admission criteria (not yet published), respectively. Furthermore, factor analysis indicated two different components ("Basic Care or Risk - Equipment" and "Medical - Nursing Needs and Therapy Disciplines") of the RCS-E v13. In conclusion, the Italian RCS-E v13 appears to be a useful tool to assess clinical complexity in the Italian rehabilitation case-mix, and its psychometric validation may have an important clinical rehabilitation impact by allowing assessment of rehabilitation needs across all three dimensions (disability, comorbidity and clinical complexity) required by the guidelines, thereby reducing the inhomogeneity. PMID:29045409

  17. Interpreting Popov criteria in Lur'e systems with complex scaling stability analysis

    Science.gov (United States)

    Zhou, J.

    2018-06-01

    The paper presents a novel frequency-domain interpretation of the Popov criteria for absolute stability in Lur'e systems by means of what we call complex scaling stability analysis. The complex scaling technique is developed for exponential/asymptotic stability in LTI feedback systems, and it dispenses with open-loop pole distribution, contour/locus orientation and prior frequency sweeping. Exploiting the technique to reveal, in an alternative fashion, the positive realness of transfer functions, the re-interpretation of the Popov criteria is explicated. More specifically, the suggested frequency-domain stability conditions take the same form in the scalar and multivariable cases, and can be implemented either graphically with locus plotting or numerically without it; in particular, the latter is suitable as a design tool with auxiliary parameter freedom. The interpretation also reveals further frequency-domain facts about Lur'e systems. Numerical examples are included to illustrate the main results.

  18. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Li; He, Ya-Ling [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Kang, Qinjun [Computational Earth Science Group (EES-16), Los Alamos National Laboratory, Los Alamos, NM (United States); Tao, Wen-Quan, E-mail: wqtao@mail.xjtu.edu.cn [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.

  19. Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG

    Directory of Open Access Journals (Sweden)

    Isabella Palamara

    2012-07-01

    An original multivariate multi-scale methodology for assessing the complexity of physiological signals is proposed. The technique is able to incorporate the simultaneous analysis of multi-channel data as a unique block within a multi-scale framework. The basic complexity measure uses Permutation Entropy, a methodology for time series processing based on ordinal analysis. Permutation Entropy is conceptually simple, structurally robust to noise and artifacts, and computationally very fast, which is relevant for designing portable diagnostics. Since time series derived from biological systems show structure on multiple spatial-temporal scales, the proposed technique can also be useful for other types of biomedical signal analysis. In this work, the possibility of distinguishing the brain states of Alzheimer's disease patients and Mild Cognitive Impairment subjects from those of normal healthy elderly subjects is checked on a real, although quite limited, experimental database.
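
    For reference, the univariate building blocks, permutation entropy plus coarse-graining over scales, can be sketched as follows (Python; the multivariate pooling across channels used in the paper is omitted for brevity).

```python
# Sketch of (univariate) multi-scale permutation entropy: coarse-grain the
# series at each scale, then compute Shannon entropy of ordinal patterns.
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, m=3, delay=1):
    patterns = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - (m - 1) * delay):
        window = x[i:i + m * delay:delay]
        patterns[tuple(np.argsort(window))] += 1
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(m))  # normalized to [0, 1]

def multiscale_pe(x, m=3, max_scale=10):
    out = []
    for s in range(1, max_scale + 1):
        n = (len(x) // s) * s
        coarse = x[:n].reshape(-1, s).mean(axis=1)   # non-overlapping averages
        out.append(permutation_entropy(coarse, m))
    return out

rng = np.random.default_rng(0)
print(multiscale_pe(rng.standard_normal(5000)))  # white noise: PE stays high
```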

  20. Studies on combined model based on functional objectives of large scale complex engineering

    Science.gov (United States)

    Yuting, Wang; Jingchun, Feng; Jiabao, Sun

    2018-03-01

    As large-scale complex engineering includes various functions, each of which is realized through the completion of one or more projects, the combined projects affecting each function should be identified. Based on the types of project portfolio, the relationships between projects and their functional objectives were analyzed. On that premise, portfolio-project techniques based on functional objectives were introduced, and the principles of such techniques were studied and proposed. In addition, the processes of combining projects were constructed. With the help of portfolio-project techniques based on the functional objectives of projects, our research findings lay a good foundation for the portfolio management of large-scale complex engineering.

  2. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modeled by multiple materials and large-scale unstructured meshes, and the finite element method has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for linear finite element equations suffer from slow convergence or fail to converge. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. Some research has addressed this issue, but it is not suitable for cases of complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multiple materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
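
    The diagonal scaling ingredient on its own is simple to demonstrate. The sketch below is a toy substitute for the Scaled-BDD setting: it applies a Jacobi (diagonal) preconditioner to a 1D two-material stiffness matrix and compares CG iteration counts with and without it; the material contrast and solver choice are illustrative.

```python
# Diagonal (Jacobi) scaling as a preconditioner on an SPD system with a
# strong "material" coefficient contrast, solved by preconditioned CG.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D Poisson-like stiffness matrix with a coefficient jump, mimicking
# a two-material problem.
n = 1000
coef = np.where(np.arange(n + 1) < n // 2, 1.0, 1e6)   # contrast of 1e6
main = coef[:-1] + coef[1:]
A = sp.diags([-coef[1:-1], main, -coef[1:-1]], [-1, 0, 1], format="csr")
b = np.ones(n)

# Diagonal scaling preconditioner: M^{-1} = diag(A)^{-1}.
dinv = 1.0 / A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda r: dinv * r)

for label, prec in [("no preconditioner", None), ("diagonal scaling", M)]:
    iters = []
    x, info = spla.cg(A, b, M=prec, maxiter=10000,
                      callback=lambda xk: iters.append(1))
    print(label, "- iterations:", len(iters), "converged:", info == 0)
```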

  3. GRAPHICS-IMAGE MIXED METHOD FOR LARGE-SCALE BUILDINGS RENDERING

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Urban 3D model data is huge and unstructured; LOD and out-of-core algorithms are usually used to reduce the amount of data drawn in each frame to improve rendering efficiency. When the scene is large enough, however, even complex optimization algorithms struggle to achieve good results. Building on previous work, a novel idea was developed: we propose a graphics-image mixed method for large-scale building rendering. Firstly, the view field is divided into several regions; the graphics-image mixed method is then used to render the scene both on screen and to an FBO, and the FBO is blended with the screen. The algorithm was tested on huge CityGML model data for the urban areas of New York, containing 188195 public building models, and compared with the Cesium platform. The experiments show that the system runs smoothly, confirming that the algorithm can achieve roaming of more massive building scenes under the same hardware conditions and can render the scene without visual loss.

  4. High-resolution method for evolving complex interface networks

    Science.gov (United States)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set method.

  5. A stochastic immersed boundary method for fluid-structure dynamics at microscopic length scales

    International Nuclear Information System (INIS)

    Atzberger, Paul J.; Kramer, Peter R.; Peskin, Charles S.

    2007-01-01

    In modeling many biological systems, it is important to take into account flexible structures which interact with a fluid. At the length scale of cells and cell organelles, thermal fluctuations of the aqueous environment become significant. In this work, it is shown how the immersed boundary method of [C.S. Peskin, The immersed boundary method, Acta Num. 11 (2002) 1-39] for modeling flexible structures immersed in a fluid can be extended to include thermal fluctuations. A stochastic numerical method is proposed which deals with stiffness in the system of equations by handling systematically the statistical contributions of the fastest dynamics of the fluid and immersed structures over long time steps. An important feature of the numerical method is that time steps can be taken in which the degrees of freedom of the fluid are completely underresolved, partially resolved, or fully resolved while retaining a good level of accuracy. Error estimates in each of these regimes are given for the method. A number of theoretical and numerical checks are furthermore performed to assess its physical fidelity. For a conservative force, the method is found to simulate particles with the correct Boltzmann equilibrium statistics. It is shown in three dimensions that the diffusion of immersed particles simulated with the method has the correct scaling in the physical parameters. The method is also shown to reproduce a well-known hydrodynamic effect of a Brownian particle in which the velocity autocorrelation function exhibits an algebraic (τ^(-3/2)) decay for long times [B.J. Alder, T.E. Wainwright, Decay of the Velocity Autocorrelation Function, Phys. Rev. A 1(1) (1970) 18-21]. A few preliminary results are presented for more complex systems which demonstrate some potential application areas of the method. Specifically, we present simulations of osmotic effects of molecular dimers, worm-like chain polymer knots, and a basic model of a molecular motor immersed in fluid subject to a

  6. Simulating Engineering Flows through Complex Porous Media via the Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    Vesselin Krassimirov Krastev

    2018-03-01

    In this paper, recent achievements in the application of the lattice Boltzmann method (LBM) to complex fluid flows are reported. More specifically, we focus on flows through reactive porous media, such as the flow through the substrate of a selective catalytic reactor (SCR) for the reduction of gaseous pollutants in the automotive field; pulsed-flow analysis through heterogeneous catalyst architectures; and transport and electro-chemical phenomena in microbial fuel cells (MFC) for novel waste-to-energy applications. To the authors' knowledge, this is the first known application of LBM modeling to the study of MFCs, which by itself represents a highly innovative and challenging research area. The results discussed here essentially confirm the capabilities of the LBM approach as a flexible and accurate computational tool for the simulation of complex multi-physics phenomena of scientific and technological interest, across physical scales.
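
    For readers unfamiliar with the method, a minimal single-phase LBM kernel is sketched below: a D2Q9 BGK relaxation scheme checked against the analytic decay of a shear wave. Porous-media and reactive extensions as in the paper would add solid nodes with bounce-back and source terms; those are omitted here.

```python
# Minimal D2Q9 BGK lattice Boltzmann sketch: decay of a sinusoidal shear
# wave in a fully periodic box, a standard sanity check.
import numpy as np

nx = ny = 64
tau = 0.8                                   # relaxation time
nu = (tau - 0.5) / 3.0                      # lattice viscosity
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])

def equilibrium(rho, ux, uy):
    cu = cx[:, None, None]*ux + cy[:, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Initial condition: ux = u0 * sin(2*pi*y/ny), uniform density.
y = np.arange(ny)
rho = np.ones((nx, ny))
ux = 0.01*np.sin(2*np.pi*y/ny)[None, :]*np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(500):
    rho = f.sum(axis=0)
    ux = (cx[:, None, None]*f).sum(axis=0)/rho
    uy = (cy[:, None, None]*f).sum(axis=0)/rho
    f += -(f - equilibrium(rho, ux, uy))/tau          # BGK collision
    for i in range(9):                                # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx[i], axis=0), cy[i], axis=1)

# Amplitude should decay as exp(-nu * (2*pi/ny)**2 * t).
print("measured:", np.abs(ux).max(), "expected:",
      0.01*np.exp(-nu*(2*np.pi/ny)**2*500))
```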

  7. Optimization method to branch-and-bound large SBO state spaces under dynamic probabilistic risk assessment via use of LENDIT scales and S2R2 sets

    International Nuclear Information System (INIS)

    Nielsen, Joseph; Tokuhiro, Akira; Khatry, Jivan; Hiromoto, Robert

    2014-01-01

    Traditional probabilistic risk assessment (PRA) methods have been developed to evaluate risk associated with complex systems; however, PRA methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. In order to address this combinatorial complexity, a branch-and-bound optimization technique is applied to the DPRA formalism to control the combinatorial state explosion. In addition, a new characteristic scaling metric (LENDIT – length, energy, number, distribution, information and time) is proposed as linear constraints that are used to guide the branch-and-bound algorithm to limit the number of possible states to be analyzed. The LENDIT characterization is divided into four groups or sets – 'state, system, resource and response' (S2R2) – describing reactor operations (normal and off-normal). In this paper we introduce the branch-and-bound DPRA approach and the application of LENDIT scales and S2R2 sets to a station blackout (SBO) transient. (author)

  8. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.

  9. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    Directory of Open Access Journals (Sweden)

    Hassan Saberi Nik

    2014-01-01

    We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel-type relaxation ideas to systems of nonlinear differential equations, and on using Chebyshev pseudospectral methods to solve the resulting systems on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta-based ode45 solver to show that the MSRM gives accurate results.

  10. Temperature scaling method for Markov chains.

    Science.gov (United States)

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
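
    The principle behind reusing one chain at several temperatures can be illustrated with classical Boltzmann reweighting: samples drawn at inverse temperature beta1 are weighted by exp(-(beta2 - beta1) * E) to estimate averages at beta2. The sketch below shows this on a 1D double-well potential; the paper's TeS procedure is more elaborate, so this only illustrates the underlying idea.

```python
# Reweighting samples from one temperature to another: configurations
# sampled at beta1 are weighted by exp(-(beta2 - beta1) * U) to estimate
# averages at beta2, without running a second simulation.
import numpy as np

rng = np.random.default_rng(0)
U = lambda x: (x**2 - 1.0)**2          # double-well potential
beta1, beta2 = 1.0, 1.5                # sampled vs. target inverse temperature

# Metropolis chain at beta1.
x, chain = 0.0, []
for _ in range(200000):
    xp = x + rng.normal(scale=0.5)
    if rng.random() < np.exp(-beta1*(U(xp) - U(x))):
        x = xp
    chain.append(x)
chain = np.array(chain[10000:])        # discard burn-in

# Reweight to beta2 and estimate <x^2> at both temperatures.
wgt = np.exp(-(beta2 - beta1)*U(chain))
print("<x^2> at beta1:", (chain**2).mean())
print("<x^2> at beta2 (rescaled):", (wgt*chain**2).sum()/wgt.sum())
```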

  11. A new large-scale manufacturing platform for complex biopharmaceuticals.

    Science.gov (United States)

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such, sometimes inherently unstable, molecules it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, facility and equipment utilization, we have developed, scaled-up and successfully implemented a new integrated manufacturing platform in commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.

  12. A New Feature Extraction Method Based on EEMD and Multi-Scale Fuzzy Entropy for Motor Bearing

    Directory of Open Access Journals (Sweden)

    Huimin Zhao

    2016-12-01

    Feature extraction is one of the most important and difficult problems in mechanical fault diagnosis, directly affecting the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called the EDOMFE method, based on integrating ensemble empirical mode decomposition (EEMD), mode selection, and multi-scale fuzzy entropy, is proposed in this paper for accurate fault diagnosis. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs) with different physical significance. Correlation coefficient analysis is used to select the three improved IMFs that are closest to the original signal. Multi-scale fuzzy entropy, with its ability to effectively distinguish the complexity of different signals, is used to calculate the entropy values of the selected three IMFs in order to form a feature vector with the complexity measure, which is regarded as the input of a support vector machine (SVM) model for training and constructing an SVM classifier (EOMSMFD, based on EDOMFE and SVM) for fault pattern recognition. Finally, the effectiveness of the proposed method is validated with real bearing vibration signals from a motor with different loads and fault severities. The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and that the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner-race fault, the outer-race fault, and the rolling element fault of the motor bearing. The proposed method thus provides a new fault diagnosis technology for rotating machinery.
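
    The multi-scale fuzzy entropy feature at the heart of EDOMFE can be sketched compactly. The version below (univariate, Python, conventional parameter choices m = 2 and r = 0.2) follows the standard fuzzy entropy definition with an exponential membership function; details may differ from the paper's implementation.

```python
# Sketch of multi-scale fuzzy entropy: at each scale the series is
# coarse-grained, then fuzzy entropy is computed as the log-ratio of average
# fuzzy similarities at embedding dimensions m and m+1, using the
# exponential membership function exp(-(d/r)**p).
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, p=2):
    x = (x - x.mean()) / x.std()
    def phi(m):
        n = len(x) - m
        emb = np.array([x[i:i + m] for i in range(n)])
        emb -= emb.mean(axis=1, keepdims=True)      # remove local baselines
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        sim = np.exp(-(d / r) ** p)
        return (sim.sum() - n) / (n * (n - 1))      # exclude self-matches
    return np.log(phi(m)) - np.log(phi(m + 1))

def multiscale_fuzzy_entropy(x, max_scale=5, **kw):
    vals = []
    for s in range(1, max_scale + 1):
        n = (len(x) // s) * s
        coarse = x[:n].reshape(-1, s).mean(axis=1)  # non-overlapping averages
        vals.append(fuzzy_entropy(coarse, **kw))
    return vals

rng = np.random.default_rng(0)
print(multiscale_fuzzy_entropy(rng.standard_normal(1000)))
```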

  13. Scale effect in fatigue resistance under complex stressed state

    International Nuclear Information System (INIS)

    Sosnovskij, L.A.

    1979-01-01

    On the basis of the statistical theory of fatigue failure, a formula is obtained for estimating the probability of failure under a complex stressed state from the partial probabilities of failure under a linear stressed state, with provision for the scale effect. A formula for calculating the equivalent stress is also obtained. Verification of both formulae using published experimental data for the plane stressed state and torsion has shown that the estimation error does not exceed 10% for materials with ultimate strength ranging from 61 to 124 kg/mm²

  14. Decision paths in complex tasks

    Science.gov (United States)

    Galanter, Eugene

    1991-01-01

    Complex real-world action and its prediction and control have escaped analysis by the classical methods of psychological research. The reason is that psychologists have no procedures to parse complex tasks into their constituents. Where such a division can be made, based, say, on expert judgment, there is no natural scale to measure the positive or negative values of the components. Even if we could assign numbers to task parts, we lack rules, i.e., a theory, to combine them into a total task representation. We compare here two plausible theories for the amalgamation of the value of task components. Both of these theories require a numerical representation of motivation, for motivation is the primary variable that guides choice and action in well-learned tasks. We address this problem of motivational quantification and performance prediction by developing psychophysical scales of the desirability or aversiveness of task components based on utility scaling methods (Galanter 1990). We modify methods used originally to scale sensory magnitudes (Stevens and Galanter 1957), which have been applied recently to the measure of task 'workload' by Gopher and Braune (1984). Our modification uses utility comparison scaling techniques which avoid the unnecessary assumptions made by Gopher and Braune. Formulae for the utility of complex tasks, based on the theoretical models, are used to predict decision and choice of alternate paths to the same goal.

  15. Fabrication of the replica templated from butterfly wing scales with complex light trapping structures

    Science.gov (United States)

    Han, Zhiwu; Li, Bo; Mu, Zhengzhi; Yang, Meng; Niu, Shichao; Zhang, Junqiu; Ren, Luquan

    2015-11-01

    The polydimethylsiloxane (PDMS) positive replica, templated twice from the excellent light-trapping surface of butterfly Trogonoptera brookiana wing scales, was fabricated by a simple and promising route. The exact SiO2 negative replica was fabricated by a synthesis method combining a sol-gel process with subsequent selective etching. Afterwards, a vacuum-aided process was introduced to fill PDMS gel into the SiO2 negative replica, and the PDMS gel was solidified in an oven. The SiO2 negative replica was then used as a secondary template, and the structures on its surface were transcribed onto the surface of the PDMS. At last, the PDMS positive replica was obtained. A comparison of the PDMS positive replica and the original bio-template in terms of morphology, dimensions, reflectance spectra and so on makes it evident that the excellent light-trapping structures of the butterfly wing scales were faithfully inherited by the PDMS positive replica. This bio-inspired route could facilitate the preparation of complex light-trapping nanostructured surfaces without any assistance from other power-wasting and expensive nanofabrication technologies.

  16. A large-scale RF-based Indoor Localization System Using Low-complexity Gaussian filter and improved Bayesian inference

    Directory of Open Access Journals (Sweden)

    L. Xiao

    2013-04-01

    The growing convergence of mobile computing devices and smart sensors boosts the development of ubiquitous computing and smart spaces, in which localization is an essential part of realizing the big vision. General localization methods based on GPS and cellular techniques are not suitable for tracking numerous small, power-limited objects indoors. In this paper, we propose and demonstrate a new localization method: an easy-to-set-up and cost-effective indoor localization system based on off-the-shelf active RFID technology. Our system is not only compatible with future smart spaces and ubiquitous computing systems, but also suitable for large-scale indoor localization. The use of a low-complexity Gaussian Filter (GF), a Wheel Graph Model (WGM) and a Probabilistic Localization Algorithm (PLA) makes the proposed algorithm robust against uncertainty and self-adaptive to varying indoor environments, and thus suitable for large-scale indoor positioning. Using MATLAB simulation, we study the system performance, especially its dependence on a number of system and environment parameters, and their statistical properties. The simulation results show that our proposed system is an accurate and cost-effective candidate for indoor localization.
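
    Both ingredients admit a compact illustration. The sketch below makes two loud assumptions, namely that the "Gaussian filter" keeps RSSI samples within one standard deviation of a fitted Gaussian, and that the Bayesian step uses a log-distance path-loss likelihood over a position grid; the parameters and grid are invented for illustration and are not from the paper.

```python
# (1) Gaussian filtering of raw RSSI samples, then (2) a Bayesian update
# over a grid of candidate positions with a log-distance path-loss model.
# All model parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Raw RSSI samples from one reader (dBm), with outliers.
rssi = np.concatenate([rng.normal(-60, 2, 50), [-90.0, -30.0]])

# (1) Gaussian filter: keep samples within mu +/- sigma of the fitted Gaussian.
mu, sigma = rssi.mean(), rssi.std()
rssi_hat = rssi[np.abs(rssi - mu) <= sigma].mean()

# (2) Bayesian inference on a 10 m x 10 m grid; reader at the origin.
xs = ys = np.linspace(0.0, 10.0, 51)
X, Y = np.meshgrid(xs, ys)
d = np.hypot(X, Y) + 1e-6

# Log-distance path-loss model: rssi(d) = P0 - 10*n*log10(d) + noise.
P0, n_pl, noise_sigma = -40.0, 2.0, 3.0
expected = P0 - 10.0 * n_pl * np.log10(d)

prior = np.full_like(d, 1.0 / d.size)                   # uniform prior
like = np.exp(-0.5 * ((rssi_hat - expected) / noise_sigma) ** 2)
post = prior * like
post /= post.sum()

iy, ix = np.unravel_index(post.argmax(), post.shape)
print("MAP position estimate:", xs[ix], ys[iy])
```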

  17. Inhibitory effect of glutamic acid on the scale formation process using electrochemical methods.

    Science.gov (United States)

    Karar, A; Naamoune, F; Kahoul, A; Belattar, N

    2016-08-01

    The formation of calcium carbonate (CaCO3) in water has important implications in geoscience research, ocean chemistry studies, CO2 emission issues and biology. In industry, the scaling phenomenon may cause technical problems, such as reduced heat transfer efficiency in cooling systems and obstruction of pipes. This paper focuses on the study of glutamic acid (GA) for reducing CaCO3 scale formation on metallic surfaces in the water of the Bir Aissa region. The anti-scaling properties of glutamic acid, used as a complexing agent of Ca(2+) ions, have been evaluated by chronoamperometry and electrochemical impedance spectroscopy in conjunction with microscopic examination. Chemical and electrochemical study of this water shows a high calcium concentration. Characterization using X-ray diffraction reveals that while the CaCO3 scale formed chemically is a mixture of calcite, aragonite and vaterite, the one deposited electrochemically is pure calcite. The effect of temperature on the efficiency of the inhibitor was investigated. At 30 and 40°C, complete scaling inhibition was obtained at a GA concentration of 18 mg/L, with a 90.2% efficiency rate. However, the efficiency of GA decreased at 50 and 60°C.

  18. Matters of Scale: Sociology in and for a Complex World.

    Science.gov (United States)

    Pyyhtinen, Olli

    2017-08-01

    The article proposes that if sociology is to make sense of a world that is ever more complex and complicated, it is important to reconsider the scale(s) of our relations and actions. Instead of assuming a nested vertical hierarchy of the micro-macro binary, scale should be treated not only as multiple, but also as something produced and sustained in practice. Coming to grips with the complex world we are living in also necessitates attending to the conduits and connections between the various sites, fields, and terrains with which our lives are entangled. The article concludes with a note on the marginalization of sociology from public discussions, and it argues that it is possibly by attending to ambiguity and to the unfinished making of our contemporary world that sociology might have the most to give to discussions about the economy, about the future of humanity, and about how to organize society. © 2017 Canadian Sociological Association/La Société canadienne de sociologie.

  19. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    Science.gov (United States)

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

    A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)

  20. Psychological study on independence consciousness of Chinese female university students: Applying Cinderella Complex Scales and Scales of Women's Social Roles

    OpenAIRE

    鄭, 艶花; Zheng, Yanhua

    2004-01-01

    The purpose of this study is to analyze and clarify the independence consciousness of female university students in China using psychological research methods. In the course of the study, a questionnaire survey was conducted on eighty-three Chinese female university students with regard to the scales of the Cinderella complex and social role attitudes. Firstly, the results indicate positive correlations between the independent variable of the "defend-family-traditionalism factor" and three fa...

  1. Open quantum maps from complex scaling of kicked scattering systems

    Science.gov (United States)

    Mertig, Normann; Shudo, Akira

    2018-04-01

    We derive open quantum maps from periodically kicked scattering systems and discuss the computation of their resonance spectra in terms of theoretically grounded methods, such as complex scaling and sufficiently weak absorbing potentials. In contrast, we also show that current implementations of open quantum maps, based on strong absorptive or even projective openings, fail to produce the resonance spectra of kicked scattering systems. This comparison pinpoints flaws in current implementations of open quantum maps, namely, the inability to separate resonance eigenvalues from the continuum as well as the presence of diffraction effects due to strong absorption. The reported deviations from the true resonance spectra appear, even if the openings do not affect the classical trapped set, and become appreciable for shorter-lived resonances, e.g., those associated with chaotic orbits. This makes the open quantum maps, which we derive in this paper, a valuable alternative for future explorations of quantum-chaotic scattering systems, for example, in the context of the fractal Weyl law. The results are illustrated for a quantum map model whose classical dynamics exhibits key features of ionization and a trapped set which is organized by a topological horseshoe.
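
    Since complex scaling is the recurring theme of this collection, a minimal one-dimensional illustration may be useful here. In the sketch below (illustrative potential and parameters, not the authors' model), the coordinate rotation x -> x*exp(i*theta) turns the Hamiltonian into a non-Hermitian matrix whose theta-stable complex eigenvalues approximate resonances E - iΓ/2.

```python
# Minimal sketch of the complex scaling method on a 1D model: the coordinate
# rotation multiplies the kinetic term by exp(-2i*theta) and analytically
# continues the potential; resonances appear as theta-stable eigenvalues in
# the lower half of the complex energy plane.
import numpy as np
from scipy.linalg import eig

def scaled_hamiltonian(theta, n=800, L=12.0):
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    # Second-derivative matrix (3-point finite differences, Dirichlet ends).
    D2 = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
          + np.diag(np.ones(n-1), -1)) / h**2
    z = x * np.exp(1j*theta)                 # complex-scaled coordinate
    V = 8.0 * z**2 * np.exp(-z**2)           # barrier-confined well, entire in z
    return -0.5*np.exp(-2j*theta)*D2 + np.diag(V)

E, _ = eig(scaled_hamiltonian(theta=0.3))
# Resonance candidates: eigenvalues just below the real axis that barely move
# with theta; rotated-continuum eigenvalues lie along arg(E) ~ -2*theta.
cand = E[(E.real > 0) & (E.real < 3) & (E.imag < 0) & (E.imag > -0.5)]
print(np.sort_complex(cand))
```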

  2. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  3. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial-scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial-scale bioreactor. But which model-building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  4. Stepwise integral scaling method and its application to severe accident phenomena

    International Nuclear Information System (INIS)

    Ishii, M.; Zhang, G.

    1993-10-01

    Severe accidents in light water reactors are characterized by the occurrence of multiphase flow with complicated phase changes, chemical reactions and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of a well-established and reliable scaling method and scaling criteria. In view of this, the stepwise integral scaling method is developed for severe accident analyses. This new scaling method is quite different from the conventional approach; however, its focus on dominant transport mechanisms and its use of the integral response of the system make the method relatively simple to apply to very complicated multiphase flow problems. In order to demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in direct containment heating (DCH), (2) corium spreading in the BWR MARK-I containment, and (3) in-core boil-off and heating processes. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not previously been available for severe accident analyses.

  5. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    Science.gov (United States)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
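
    A small concrete instance of such a system is easy to set up. The sketch below discretizes a 1D complex Helmholtz equation with an absorbing (complex) wavenumber, producing a complex symmetric, non-Hermitian matrix, and solves it with SciPy's GMRES as a generic Krylov stand-in; QMR and the specialized complex symmetric solvers discussed in the paper address the same class of systems more economically.

```python
# A complex non-Hermitian Krylov solve of the kind the paper targets:
# 1D Helmholtz u'' + k^2 u = f with a complex (absorbing) wavenumber,
# discretized by finite differences. GMRES is used here as a generic
# stand-in for the specialized QMR-type methods.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, L = 2000, 1.0
h = L / (n + 1)
k = 50.0 * (1.0 + 0.05j)                  # complex wavenumber: damped medium

# A = D2 + k^2 I with Dirichlet ends: complex symmetric, non-Hermitian.
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = (D2 + k**2 * sp.identity(n)).tocsr()

x_grid = np.linspace(h, L - h, n)
f = np.exp(-200.0 * (x_grid - 0.5)**2)    # localized source

u, info = spla.gmres(A, f, maxiter=5000, restart=200)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ u - f))
```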

  6. Multi-scale complexity analysis of muscle coactivation during gait in children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Wen Tao

    2015-07-01

    The objective of this study is to characterize the complexity of lower-extremity muscle coactivation and coordination during gait in children with cerebral palsy (CP), children with typical development (TD) and healthy adults, by applying recently developed multivariate multi-scale entropy (MMSE) analysis to surface EMG signals. Eleven CP children (the CP group), eight TD children and seven healthy adults (together considered the control group) were asked to walk while surface EMG signals were collected from 5 thigh muscles and 3 lower-leg muscles on each leg (16 EMG channels in total). The 16-channel surface EMG data, recorded during a series of consecutive gait cycles, were simultaneously processed by multivariate empirical mode decomposition (MEMD) to generate fully aligned data scales for subsequent MMSE analysis. In order to examine muscle coactivation complexity extensively using the MEMD-enhanced MMSE, 14 data analysis schemes were designed by varying partial muscle combinations and the time durations of data segments. Both TD children and healthy adults showed almost consistent MMSE curves over multiple scales for all 14 schemes, without any significant difference (p > 0.09). However, considerable diversity in the MMSE curves was observed in the CP group when compared with the control group. There appear to be diverse neuropathological processes in CP that may affect the dynamical complexity of muscle coactivation and coordination during gait. The abnormal complexity patterns emerging in the CP group can be attributed to different factors such as motor control impairments, loss of muscle couplings, and spasticity or paralysis in individual muscles. All these findings expand our knowledge of the neuropathology of CP from a novel point of view, that of muscle coactivation complexity, and indicate the potential to derive a quantitative index for assessing muscle activation characteristics as well as motor function in CP.

  7. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that, while the salient small-scale features influencing the larger-scale prediction are transferred back to the larger scale, this does not require the live coupling of models. The method allows multiple groundwater flow and transport processes to be modelled with separate groundwater models, each built for the appropriate spatial and temporal scale, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  8. A Range Based Method for Complex Facade Modeling

    Science.gov (United States)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    the complex architecture. From the point cloud we can extract a false-colour map that depends on the distance of each point from the average plane. In this way each point of the facade can be represented by a grayscale height map. In this operation it is important to define the scale of the final result in order to set the correct pixel size of the map. The following step concerns the use of a modifier that is well known in computer graphics: the Displacement modifier, which allows the original roughness of the object to be simulated on a planar surface according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike a bump map, the displacement modifier does not merely simulate the effect; it really deforms the planar surface. In this way the 3D model can be used not only in a static representation, but also in dynamic animations or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the façade and by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation in which each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (i.e., the distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as building facades in city models or large-scale representations, but it can also be used to represent particular effects such as the deformation of walls in a fully 3D way.
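
    The core of the workflow described above is the conversion of point-to-plane distances into a grayscale image. The following fragment is a minimal sketch of that step, not the authors' pipeline; the input layout, pixel size and displacement range d_max are assumptions for the example.

      import numpy as np

      def height_map(points, pixel_size, d_max):
          """points: (N, 3) array in a frame where z is the signed
          distance of each point from the average (reference) plane."""
          x, y, z = points[:, 0], points[:, 1], points[:, 2]
          ix = ((x - x.min()) / pixel_size).astype(int)
          iy = ((y - y.min()) / pixel_size).astype(int)
          img = np.full((iy.max() + 1, ix.max() + 1), -d_max)
          np.maximum.at(img, (iy, ix), z)   # keep the largest displacement per pixel
          # map [-d_max, d_max] onto the 8-bit gray range [0, 255]
          return np.clip((img + d_max) / (2 * d_max) * 255, 0, 255).astype(np.uint8)

      # demo with a tiny synthetic facade carrying one protruding point
      pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.02], [0.5, 1.0, -0.01]])
      print(height_map(pts, pixel_size=0.5, d_max=0.05))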

  9. A RANGE BASED METHOD FOR COMPLEX FACADE MODELING

    Directory of Open Access Journals (Sweden)

    A. Adami

    2012-09-01

    homogeneous point cloud of the complex architecture. From the point cloud we can extract a false-colour map that depends on the distance of each point from the average plane. In this way each point of the facade can be represented by a grayscale height map. In this operation it is important to define the scale of the final result in order to set the correct pixel size of the map. The following step concerns the use of a modifier that is well known in computer graphics: the Displacement modifier, which allows the original roughness of the object to be simulated on a planar surface according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike a bump map, the displacement modifier does not merely simulate the effect; it really deforms the planar surface. In this way the 3D model can be used not only in a static representation, but also in dynamic animations or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the façade and by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation in which each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (i.e., the distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as building facades in city models or large-scale representations, but it can also be used to represent particular effects such as the deformation of walls in a fully 3D way.

  10. A Porosity Method to Describe Complex 3D-Structures: Theory and Application to an Explosion

    Directory of Open Access Journals (Sweden)

    M.-F. Robbe

    2006-01-01

    A theoretical method was developed to describe the influence of structures of complex shape on a transient fluid flow without meshing the structures. Structures are considered as solid pores inside the fluid and act as obstacles for the flow. The method was specifically adapted to fast transient cases. The porosity method was applied to the simulation of a Hypothetical Core Disruptive Accident in a small-scale replica of a Liquid Metal Fast Breeder Reactor. A 2D-axisymmetric simulation of the MARS test was performed with the EUROPLEXUS code. Whereas the central internal structures of the mock-up could be described with a classical shell model, the influence of the 3D peripheral structures was taken into account with the porosity method.

  11. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum.

    Science.gov (United States)

    Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph

    2012-06-22

    Filamentous fungi are versatile cell factories and are widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. Industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, the considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but also unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for the characterization of filamentous fungal strains on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two industrial Penicillium chrysogenum candidate strains on complex media, based on their specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.

  12. Elements of a method to scale ignition reactor Tokamak

    International Nuclear Information System (INIS)

    Cotsaftis, M.

    1984-08-01

    Due to the unavoidable uncertainties in present scaling laws when extrapolated to the thermonuclear regime, a method is proposed to minimize these uncertainties in order to determine the main parameters of an ignited tokamak. The method mainly consists in searching for a domain, if any, in an adapted parameter space which allows ignition but is least sensitive to possible changes in the scaling laws. In other words, the ignition domain sought is the intersection of all possible ignition domains corresponding to all scaling laws produced by all possible transports.
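
    The intersection idea lends itself to a simple numerical illustration. The sketch below scans a density-temperature plane, marks the ignition region predicted by each of several candidate confinement scaling laws, and intersects them; the ignition criterion and the exponents are schematic assumptions, not values from the paper.

      import numpy as np

      n = np.linspace(0.5, 5.0, 200)[:, None]    # density (10^20 m^-3)
      T = np.linspace(5.0, 40.0, 200)[None, :]   # temperature (keV)

      def ignition_region(alpha, beta, c):
          # schematic criterion: the confinement quality predicted by a
          # scaling law, c * n**alpha * T**beta, must exceed an n*T-like
          # loss threshold for the point (n, T) to ignite
          return c * n**alpha * T**beta >= n * T

      candidate_laws = [(1.2, 0.4, 8.0), (1.0, 0.6, 6.0), (0.9, 0.8, 5.0)]
      robust = np.logical_and.reduce([ignition_region(*law) for law in candidate_laws])
      print("fraction of the scanned plane igniting under every law:", robust.mean())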

  13. Dynamical complexity changes during two forms of meditation

    Science.gov (United States)

    Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng

    2011-06-01

    Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes in heart rate variability (HRV) series recorded during two specific traditional forms of meditation, Chinese Chi and Kundalini Yoga, in healthy young adults. The results show that dynamical complexity decreases in the meditation states for both forms of meditation. Meanwhile, we detected changes in the probability distribution of m-words during meditation and explained these changes using the probability distribution of a sine function. The base-scale entropy method may be used on a wider range of physiologic signals.

  14. Global Stability of Complex-Valued Genetic Regulatory Networks with Delays on Time Scales

    Directory of Open Access Journals (Sweden)

    Wang Yajing

    2016-01-01

    In this paper, the global exponential stability of complex-valued genetic regulatory networks with delays on time scales is investigated. Besides presenting conditions guaranteeing the existence of a unique equilibrium pattern, its global exponential stability is discussed. Some numerical examples are given for different time scales.

  15. Multi-Scale Entropy Analysis as a Method for Time-Series Analysis of Climate Data

    Directory of Open Access Journals (Sweden)

    Heiko Balzter

    2015-03-01

    Evidence is mounting that the temporal dynamics of the climate system are changing at the same time as the average global temperature is increasing due to multiple climate forcings. A large number of extreme weather events such as prolonged cold spells, heatwaves, droughts and floods have been recorded around the world in the past 10 years. Such changes in the temporal scaling behaviour of climate time-series data can be difficult to detect. While there are easy and direct ways of analysing climate data by calculating the means and variances for different levels of temporal aggregation, these methods can miss more subtle changes in their dynamics. This paper describes multi-scale entropy (MSE) analysis as a tool to study climate time-series data and to identify temporal scales of variability and their change over time. MSE estimates the sample entropy of the time-series after coarse-graining at different temporal scales. An application of MSE to Central European, variance-adjusted, mean monthly air temperature anomalies (CRUTEM4v) is provided. The results show that the temporal scales of the current climate (1960–2014) are different from the long-term average (1850–1960). For temporal scale factors longer than 12 months, the sample entropy increased markedly compared to the long-term record. Such an increase can be explained by systems theory with greater complexity in the regional temperature data. From 1961 the patterns of monthly air temperatures are less regular at time-scales greater than 12 months than in the earlier time period. This finding suggests that, at these inter-annual time scales, the temperature variability has become less predictable than in the past. It is possible that climate system feedbacks are expressed in altered temporal scales of the European temperature time-series data. A comparison with the variance and Shannon entropy shows that MSE analysis can provide additional information on the
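
    MSE as described here is a two-step recipe: coarse-grain the series at each scale factor, then take the sample entropy of each coarse-grained series. A compact sketch follows; the embedding dimension m = 2 and tolerance r = 0.2 standard deviations are common defaults, and the white-noise demo is only a sanity check, not the paper's data.

      import numpy as np

      def sample_entropy(x, m, r):
          """SampEn(m, r): -log of the conditional probability that
          sequences matching for m points also match for m + 1 points."""
          def pair_count(mm):
              emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
              d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
              return ((d <= r).sum() - len(emb)) / 2.0   # exclude self-matches
          B, A = pair_count(m), pair_count(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      def multiscale_entropy(x, max_scale=20, m=2, r_factor=0.2):
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()   # tolerance fixed on the original series
          mse = []
          for tau in range(1, max_scale + 1):
              n = len(x) // tau
              coarse = x[:n * tau].reshape(n, tau).mean(axis=1)  # coarse-grain
              mse.append(sample_entropy(coarse, m, r))
          return mse

      # white noise loses entropy with increasing scale, a classic MSE check
      print(multiscale_entropy(np.random.randn(1200), max_scale=5))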

  16. Predicting protein complexes using a supervised learning method combined with local structural information.

    Science.gov (United States)

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
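
    The abstract's central object is a score that blends a trained classifier's output with local structural information. The toy sketch below implements one plausible reading of that idea; the feature set, the density term and the mixing weight alpha are assumptions for illustration, not the paper's exact formula.

      import networkx as nx
      from sklearn.ensemble import RandomForestClassifier

      def subgraph_features(G, nodes):
          S = G.subgraph(nodes)
          n, m = S.number_of_nodes(), S.number_of_edges()
          density = 2.0 * m / (n * (n - 1)) if n > 1 else 0.0
          return [n, m, density], density

      def complex_score(G, nodes, clf, alpha=0.5):
          feats, density = subgraph_features(G, nodes)
          p_complex = clf.predict_proba([feats])[0, 1]  # model's belief
          return alpha * p_complex + (1 - alpha) * density

      # purely synthetic training rows: known complexes (1) vs. random sets (0)
      X = [[5, 9, 0.9], [6, 13, 0.87], [5, 3, 0.3], [7, 5, 0.24]]
      y = [1, 1, 0, 0]
      clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

      G = nx.karate_club_graph()
      print(complex_score(G, [0, 1, 2, 3, 7], clf))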

  17. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nanomaterial simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we present recent parallel performance results of this code and apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  18. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  19. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

    Device scaling is an important part of very-large-scale integration (VLSI) design and has driven the success of the VLSI industry, resulting in denser and faster integration of devices. As technology nodes move into the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance when scaling devices. In this paper, different scaling methods are studied first. These scaling methods are used to identify their effects on the power dissipation and propagation delay of a CMOS buffer circuit. To mitigate power dissipation in scaled devices, we propose a reliable leakage-reduction low-power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are obtained with the HSPICE tool using Berkeley Predictive Technology Model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in the figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to demonstrate the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which yields greater circuit reliability. (semiconductor integrated circuits)
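
    For orientation, the classical constant-field (Dennard) scaling relations underlying such device-scaling studies are easy to tabulate. The snippet below is a generic reminder of those textbook factors, not results from the paper; k ≈ 1.4 approximates one technology generation.

      # classical constant-field scaling: each quantity scales as shown
      # when all dimensions and the supply voltage shrink by 1/k
      def constant_field_scaling(k):
          return {
              "dimensions (L, W, tox)":   1 / k,
              "supply voltage Vdd":       1 / k,
              "gate delay ~ CV/I":        1 / k,
              "power per gate ~ C*V^2*f": 1 / k**2,
              "power density":            1.0,   # unchanged: the key benefit
              "area per gate":            1 / k**2,
          }

      for quantity, factor in constant_field_scaling(1.4).items():
          print(f"{quantity:>26s}: x{factor:.3f}")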

  20. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    International Nuclear Information System (INIS)

    Williams, Mark L.; Rearden, Bradley T.

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
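
    At the heart of such S/U analyses is the relative sensitivity coefficient S = (σ/R)(∂R/∂σ) of a response R to a nuclear data parameter σ. The sketch below merely illustrates that definition on a toy response via a central finite difference; the response model is an assumption, and real TSUNAMI sensitivities are computed with adjoint transport methods, not differencing.

      def sensitivity(response, sigma, rel_step=1e-3):
          """Relative sensitivity S = (sigma / R) * dR/dsigma."""
          h = sigma * rel_step
          dR = (response(sigma + h) - response(sigma - h)) / (2 * h)
          return sigma / response(sigma) * dR

      # toy multiplication-factor-like response vs. a cross section (made up)
      k_eff = lambda sig: 1.0 / (0.5 + 0.3 * sig)
      print(sensitivity(k_eff, sigma=2.0))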

  1. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Science.gov (United States)

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to determine the exact cause and manner of non-natural deaths. In this regard, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods used to cause death is unplanned. There is an increased likelihood that suspicions of homicide are raised in cases of unplanned complex suicide. A case of complex suicide is reported in which the victim resorted to multiple methods to end his life, in what appeared to be an unplanned variant based on the death-scene investigation. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  2. A reduced-scaling density matrix-based method for the computation of the vibrational Hessian matrix at the self-consistent field level

    International Nuclear Information System (INIS)

    Kussmann, Jörg; Luenser, Arne; Beer, Matthias; Ochsenfeld, Christian

    2015-01-01

    An analytical method to calculate the molecular vibrational Hessian matrix at the self-consistent field level is presented. By analysis of the multipole expansions of the relevant derivatives of Coulomb-type two-electron integral contractions, we show that the effect of the perturbation on the electronic structure due to the displacement of nuclei decays at least as r⁻² instead of r⁻¹. The perturbation is asymptotically local, and the computation of the Hessian matrix can, in principle, be performed with O(N) complexity. Our implementation exhibits linear scaling in all time-determining steps, with some rapid but quadratic-complexity steps remaining. Sample calculations illustrate linear or near-linear scaling in the construction of the complete nuclear Hessian matrix for sparse systems. For more demanding systems, scaling is still considerably sub-quadratic to quadratic, depending on the density of the underlying electronic structure.

  3. Kernel methods for large-scale genomic data analysis

    Science.gov (United States)

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743
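
    As a concrete, hedged example of the class of methods reviewed: kernel ridge regression of a phenotype on genotype data, where an RBF kernel can capture non-additive interactions between variants. The data below are synthetic and all hyperparameters are illustrative assumptions.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.integers(0, 3, size=(500, 1000)).astype(float)  # 0/1/2 genotype codes
      # toy trait with an epistatic (interaction) term plus noise
      y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=500)

      model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3)
      print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())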

  4. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie

    2013-01-01

    This thesis presents the main results of a four-year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, "KOMPLEKS", which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km² … data types and co-interpret them in order to improve our geological understanding. However, in order to perform this successfully, methodological considerations are necessary. For instance, a structure indicated by a reflection in the seismic data is not always apparent in the resistivity data … information) can be collected. The geophysical data are used together with geological analyses from boreholes and pits to interpret the geological history of the hill-island. The geophysical data reveal that the glaciotectonic structures truncate at the surface. The directions of the structures were mapped…

  5. Scale factor measure method without turntable for angular rate gyroscope

    Science.gov (United States)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method without a turntable is designed for the angular rate gyroscope. A test system consisting of a test device, a data acquisition circuit and data processing software based on the LabVIEW platform is designed. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as the standard gyroscope. The standard gyroscope is installed on the test device together with the measured gyroscope. By shaking the test device around the edge that is parallel to the input axes of the gyroscopes, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast, and it keeps the test system miniaturized and easy to carry or move. When the scale factor of a quartz MEMS gyroscope is measured multiple times by this method, the difference is less than 0.2%. Compared with testing on a turntable, the scale factor difference is less than 1%. The accuracy and repeatability of the test system appear good.
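
    Although the record gives no equations, the described setup suggests a simple ratio estimate: both gyroscopes sense the same shaking rate, so the unknown scale factor follows from fitting the measured gyro's output against the rate recovered from the standard gyro. The sketch below uses synthetic signals; all names and numbers are assumptions.

      import numpy as np

      SF_STD = 1.25                                  # known scale factor, mV/(deg/s)
      t = np.linspace(0.0, 5.0, 5000)
      rate = 30.0 * np.sin(2 * np.pi * 2.0 * t)      # shared angular rate, deg/s

      v_std = SF_STD * rate + np.random.normal(0, 0.5, t.size)   # standard gyro
      v_meas = 0.98 * rate + np.random.normal(0, 0.5, t.size)    # gyro under test

      rate_ref = v_std / SF_STD                      # rate from the standard gyro
      # least-squares fit of v_meas = SF * rate_ref
      sf_meas = np.dot(v_meas, rate_ref) / np.dot(rate_ref, rate_ref)
      print(f"estimated scale factor: {sf_meas:.4f} mV/(deg/s)")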

  6. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    Directory of Open Access Journals (Sweden)

    Yang Xu

    2014-01-01

    Large-scale multiagent teamwork has become popular in various domains. As in human society's infrastructure, agents coordinate with only some of the others, in a peer-to-peer complex network structure, and their organization has been shown to be a key factor influencing their performance. We have identified three key factors for expediting team performance. First, complex network effects may promote team performance. Second, coordination interactions are routed from their sources toward capable agents; although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team, which is independent of the network connections. Third, the agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents can be statistically recorded, an integrated network adjustment algorithm can be set up by combining the three key factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results support our design: the reorganized network is more capable of coordinating heterogeneous agents.

  7. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Daily, Jeffrey A. [Washington State Univ., Pullman, WA (United States)

    2015-05-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm shift, individual projects are now capable of generating billions of raw sequences that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequence homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores.

  8. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    International Nuclear Information System (INIS)

    Daily, Jeffrey A.

    2015-01-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm shift, individual projects are now capable of generating billions of raw sequences that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequence homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or 'homologous') on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K

  9. Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method

    International Nuclear Information System (INIS)

    Cadinu, F.; Kozlowski, T.; Dinh, T.N.

    2007-01-01

    Over the past decades, analyses of transient processes and accidents in nuclear power plants have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as simulation vehicles, largely due to their deficient treatment of multi-dimensional flow (in e.g. the downcomer or lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to analyze multi-dimensional flow, predominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework has led to the development of the heterogeneous multiscale method (HMM).

  10. Exploring a multi-scale method for molecular simulation in continuum solvent model: Explicit simulation of continuum solvent as an incompressible fluid.

    Science.gov (United States)

    Xiao, Li; Luo, Ray

    2017-12-07

    We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopting such a strategy, as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm, in order to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass this challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering their contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with an analytical solution. Next, we performed a relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium, with detailed surface features resembling those found on the solvent-excluded surface. Four typical small molecular complexes were then tested, with both volume and force-balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries sampled in explicit water simulations.

  11. DGDFT: A massively parallel method for large scale density functional theory calculations.

    Science.gov (United States)

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  12. DGDFT: A massively parallel method for large scale density functional theory calculations

    International Nuclear Information System (INIS)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-01-01

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  13. DGDFT: A massively parallel method for large scale density functional theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  14. Methods for determination of extractable complex composition

    International Nuclear Information System (INIS)

    Sergievskij, V.V.

    1984-01-01

    Specific features and restrictions of the main methods for determining the composition of extractable complexes from distribution data (the methods of equilibrium shift, saturation, and mathematical modelling) are considered. Special attention is given to the solution of inverse problems taking into account the effect of hydration on the activity of organic-phase components. Using the systems lithium halides-isoamyl alcohol, thorium nitrate-n-hexyl alcohol, mineral acids-tri-n-butyl phosphate (TBP), and metal nitrates (uranium, lanthanides)-TBP as examples, the results on determining the stoichiometry of extraction equilibria obtained by the various methods are compared.
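
    The equilibrium-shift method mentioned above reduces, in its simplest form, to slope analysis: for an extraction equilibrium M + nL <=> MLn, the slope of log10 D versus log10 [L] estimates the number of extractant molecules n in the complex. A minimal sketch with synthetic data points:

      import numpy as np

      L = np.array([0.05, 0.10, 0.20, 0.40, 0.80])    # free extractant conc., M
      D = np.array([0.012, 0.048, 0.19, 0.77, 3.1])   # measured distribution ratios

      slope, intercept = np.polyfit(np.log10(L), np.log10(D), 1)
      print(f"apparent solvation number n ≈ {slope:.2f}")   # ≈ 2 for these data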

  15. New complex variable meshless method for advection-diffusion problems

    International Nuclear Information System (INIS)

    Wang Jian-Fei; Cheng Yu-Min

    2013-01-01

    In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of the two-dimensional advection-diffusion problem is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. Then the corresponding formulas of the ICVMM for advection-diffusion problems are presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems, and has good convergence, accuracy, and computational efficiency.

  16. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: the depressurization phase and the pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase, the highly important phenomena influencing the critical parameters are identified, and the scaling parameters governing them are generated by the present method. To validate the models used, the Marviken CFT and a 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but they show at least qualitative agreement with the experimental results. In order to check whether the scaled-down model represents the important phenomena well, we simulate the nondimensional pressure response of a cold-leg 4-inch-break transient for AP-600 and for the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR.

  17. Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes

    Energy Technology Data Exchange (ETDEWEB)

    Mohseni, M. [Google Research, Venice, California 90291 (United States); Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Shabani, A. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States); Department of Chemistry, University of California at Berkeley, Berkeley, California 94720 (United States); Lloyd, S. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Rabitz, H. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States)

    2014-01-21

    Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate the energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio k_B λT/(ℏγg) as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.

  18. Complex Quantum Network Manifolds in Dimension d > 2 are Scale-Free

    Science.gov (United States)

    Bianconi, Ginestra; Rahmede, Christoph

    2015-09-01

    In quantum gravity, several approaches have been proposed until now for the quantum description of discrete geometries. These theoretical frameworks include loop quantum gravity, causal dynamical triangulations, causal sets, quantum graphity, and energetic spin networks. Most of these approaches describe discrete spaces as homogeneous network manifolds. Here we define Complex Quantum Network Manifolds (CQNM) describing the evolution of quantum network states, constructed from growing simplicial complexes of dimension d. We show that in d = 2 CQNM are homogeneous networks, while for d > 2 they are scale-free, i.e. they are characterized by large inhomogeneities of degrees, like most complex networks. From the self-organized evolution of CQNM, quantum statistics emerge spontaneously. Here we define the generalized degrees associated with the δ-faces of the d-dimensional CQNMs, and we show that the statistics of these generalized degrees can follow either Fermi-Dirac, Boltzmann or Bose-Einstein distributions, depending on the dimension of the δ-faces.

  19. Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes

    International Nuclear Information System (INIS)

    Mohseni, M.; Shabani, A.; Lloyd, S.; Rabitz, H.

    2014-01-01

    Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate the energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio k_B λT/(ℏγg) as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.

  20. Synchronization and Causality Across Time-scales: Complex Dynamics and Extremes in El Niño/Southern Oscillation

    Science.gov (United States)

    Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.

    2017-12-01

    A better understanding of the dynamics of complex systems, such as the Earth's climate, is one of the key challenges for contemporary science and society. The large amounts of experimental data require new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for a better understanding of the underlying mechanisms and a key to modeling and predicting such systems. This study introduces and applies information-theoretic diagnostics to the phase and amplitude time series of different wavelet components of the observed data that characterize El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical, as well as statistical, climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to improved prediction. Moreover, the statistical framework applied here is suitable for directly inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.
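
    A hedged sketch of the ingredients this kind of analysis combines (not the authors' exact pipeline): band-pass two components of an index series, extract instantaneous phase and amplitude from the analytic signal, and estimate the mutual information between the slow phase and the fast amplitude with a histogram estimator. The index series below is synthetic, and all band edges are assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def band(x, lo, hi, fs):
          b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return filtfilt(b, a, x)

      def mutual_info(x, y, bins=16):
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
          nz = pxy > 0
          return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

      fs = 12.0                               # monthly data: 12 samples/year
      t = np.arange(1200) / fs
      x = np.sin(2 * np.pi * t / 5.0) + 0.5 * np.random.randn(t.size)  # toy index

      slow = band(x, 1 / 8, 1 / 4, fs)        # 4-8 year component
      fast = band(x, 1 / 2, 1.5, fs)          # sub-2 year component
      phase = np.angle(hilbert(slow))
      amp = np.abs(hilbert(fast))
      print("MI(slow phase; fast amplitude) =", mutual_info(phase, amp))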

  1. Identifying influential spreaders in complex networks based on kshell hybrid method

    Science.gov (United States)

    Namtirtha, Amrita; Dutta, Animesh; Dutta, Biswanath

    2018-06-01

    Influential spreaders are the key players in maximizing or controlling spreading in a complex network. Identifying influential spreaders using the kshell decomposition method has become very popular in recent years. In the literature, the core nodes, i.e. those with the largest kshell index in a network, are considered the most influential spreaders. We have studied the kshell method and the spreading dynamics of nodes using the Susceptible-Infected-Recovered (SIR) epidemic model to understand the behavior of influential spreaders in terms of their topological location in the network. From this study, we have found that not every node in the core area is among the most influential spreaders; even a strategically placed lower-shell node can be a highly influential spreader. Moreover, the core area can also be situated at the periphery of the network. The existing indexing methods are designed to identify the most influential spreaders only from core nodes, not from lower shells. In this work, we propose a kshell hybrid method to identify highly influential spreaders not only from the core but also from lower shells. The proposed method comprises parameters such as kshell power, node degree, contact distance, and several levels of neighbors' influence potential. The proposed method is evaluated on nine real-world network datasets. In terms of spreading dynamics, the experimental results show the superiority of the proposed method over existing indexing methods such as the kshell method, neighborhood coreness centrality, and mixed degree decomposition. Furthermore, the proposed method can also be applied to large-scale networks by considering three levels of neighbors' influence potential.
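
    A hedged sketch in the spirit of the proposed index (the exact kshell hybrid formula is not reproduced here): rank nodes by their k-shell index combined with their degree and the summed shell indices of neighbors up to a given level. The weights are assumptions for illustration.

      import networkx as nx

      def kshell_hybrid_rank(G, w_shell=1.0, w_deg=0.1, w_nbr=0.5, levels=2):
          ks = nx.core_number(G)                 # k-shell index of each node
          scores = {}
          for v in G:
              # accumulate neighbors' shell indices out to `levels` hops,
              # discounting farther hops by their depth
              frontier, seen, nbr_term = {v}, {v}, 0.0
              for depth in range(1, levels + 1):
                  frontier = {u for f in frontier for u in G[f]} - seen
                  nbr_term += sum(ks[u] for u in frontier) / depth
                  seen |= frontier
              scores[v] = w_shell * ks[v] + w_deg * G.degree(v) + w_nbr * nbr_term
          return sorted(scores, key=scores.get, reverse=True)

      G = nx.karate_club_graph()
      print(kshell_hybrid_rank(G)[:5])   # five most influential candidates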

  2. Appropriate complexity for the prediction of coastal and estuarine geomorphic behaviour at decadal to centennial scales

    Science.gov (United States)

    French, Jon; Payo, Andres; Murray, Brad; Orford, Julian; Eliot, Matt; Cowell, Peter

    2016-03-01

    Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called 'reduced complexity' models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes

  3. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
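
    The "sparse map" abstraction is easy to prototype. The toy below (not the authors' library) represents a sparse map as a dict from indices to sets of significant indices and implements the elementary operations the abstract names: inversion, chaining and intersection.

      from collections import defaultdict

      def invert(m):
          """(i -> {j}) becomes (j -> {i})."""
          out = defaultdict(set)
          for i, js in m.items():
              for j in js:
                  out[j].add(i)
          return dict(out)

      def chain(m1, m2):
          """(i -> j) chained with (j -> k) yields (i -> k)."""
          return {i: set().union(*(m2.get(j, set()) for j in js))
                  for i, js in m1.items()}

      def intersect(m1, m2):
          return {i: m1[i] & m2.get(i, set()) for i in m1}

      # toy usage: atom -> nearby atoms, atom -> shells, hence atom -> shells
      atom_to_atom = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}
      atom_to_shell = {0: {"s0", "p0"}, 1: {"s1"}, 2: {"s2", "p2"}}
      print(chain(atom_to_atom, atom_to_shell))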

  4. Random walk-based similarity measure method for patterns in complex object

    Directory of Open Access Journals (Sweden)

    Liu Shihu

    2017-04-01

    This paper discusses the similarity of patterns in complex objects. A complex object is composed of both attribute information about its patterns and relational information between patterns. Bearing in mind the specificity of complex objects, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed, and Algorithms 1 and 2 show the calculation procedure. This method makes full use of both the attribute information and the relational information. Finally, a synthetic example validates the proposed similarity measurement method.
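
    For illustration only, a hedged sketch of a random walk-based similarity in the spirit of the method described above; the paper's exact formulas are not reproduced, and the blending weight alpha, the step count, and the toy data are assumptions.

    ```python
    import numpy as np

    def random_walk_similarity(adj, steps=3):
        # Reachability from powers of the row-normalized transition matrix.
        P = adj / adj.sum(axis=1, keepdims=True)
        reach = np.zeros_like(P)
        Pk = np.eye(len(adj))
        for _ in range(steps):
            Pk = Pk @ P
            reach += Pk
        return (reach + reach.T) / (2 * steps)   # symmetrize

    def attribute_similarity(X):
        # Cosine similarity of the attribute vectors.
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        return Xn @ Xn.T

    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    X = np.array([[1.0, 0.2], [0.9, 0.3], [0.5, 0.5], [0.1, 0.9]])
    alpha = 0.5   # hypothetical weighting of the two information sources
    S = alpha * attribute_similarity(X) + (1 - alpha) * random_walk_similarity(adj)
    print(np.round(S, 3))
    ```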

  5. III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.

    Science.gov (United States)

    Davis-Kean, Pamela E; Jager, Justin

    2017-06-01

    For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited in both the power to detect differences and the demographic diversity to generalize clearly and broadly. Thus, in this chapter we will discuss the value of using existing large-scale data sets to test the complex questions of child development and how to develop future large-scale data sets that are both representative and can answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.

  6. The MIMIC Method with Scale Purification for Detecting Differential Item Functioning

    Science.gov (United States)

    Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien

    2009-01-01

    This study implements a scale purification procedure on top of the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…

  7. Ensemble Sensitivity Analysis of a Severe Downslope Windstorm in Complex Terrain: Implications for Forecast Predictability Scales and Targeted Observing Networks

    Science.gov (United States)

    2013-09-01

    observations, linear regression finds the straight line that explains the linear relationship of the sample. This line is given by the equation y = mx + b...
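
    The regression underlying ensemble sensitivity can be shown in a few lines; the data below are synthetic stand-ins for ensemble output, and treating the sensitivity as the slope of a least-squares fit of a forecast response on an initial-condition variable is a deliberate simplification.

    ```python
    import numpy as np

    # Synthetic "ensemble": initial-condition perturbations x and a
    # forecast response metric J for each member; the ensemble
    # sensitivity is the slope m of the fit J = m*x + b.
    rng = np.random.default_rng(2)
    x = rng.normal(0, 1, 50)                # initial-condition perturbations
    J = 2.5 * x + rng.normal(0, 0.5, 50)    # forecast response metric
    m, b = np.polyfit(x, J, 1)
    print(round(m, 2))                      # sensitivity dJ/dx, near 2.5
    ```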

  8. Retention of habitat complexity minimizes disassembly of reef fish communities following disturbance: a large-scale natural experiment.

    Directory of Open Access Journals (Sweden)

    Michael J Emslie

    High biodiversity ecosystems are commonly associated with complex habitats. Coral reefs are highly diverse ecosystems, but are under increasing pressure from numerous stressors, many of which reduce live coral cover and habitat complexity with concomitant effects on other organisms such as reef fishes. While previous studies have highlighted the importance of habitat complexity in structuring reef fish communities, they employed gradient or meta-analyses which lacked a controlled experimental design over broad spatial scales to explicitly separate the influence of live coral cover from overall habitat complexity. Here a natural experiment using a long-term (20-year), spatially extensive (~115,000 km²) dataset from the Great Barrier Reef revealed the fundamental importance of overall habitat complexity for reef fishes. Reductions of both live coral cover and habitat complexity had substantial impacts on fish communities, compared to relatively minor impacts after major reductions in coral cover but not habitat complexity. Where habitat complexity was substantially reduced, species abundances broadly declined and a far greater number of fish species were locally extirpated, including economically important fishes. This resulted in decreased species richness and a loss of diversity within functional groups. Our results suggest that the retention of habitat complexity following disturbances can ameliorate the impacts of coral declines on reef fishes, so preserving their capacity to perform important functional roles essential to reef resilience. These results add to a growing body of evidence about the importance of habitat complexity for reef fishes, and represent the first large-scale examination of this question on the Great Barrier Reef.

  9. A direction of developing a mining method and mining complexes

    Energy Technology Data Exchange (ETDEWEB)

    Gabov, V.V.; Efimov, I.A. [St. Petersburg State Mining Institute, St. Petersburg (Russian Federation). Vorkuta Branch

    1996-12-31

    An analysis of the mining method as the main factor determining the development stages of mining units is presented. The paper suggests a promising mining method which differs from known ones in the following peculiarities: the directional selectivity of cuts with regard to the coal seam structure, and the cutting speed and the thickness and succession of cuts. This method may be implemented by modular complexes (a shield carrying a cutting head for coal mining), their mining devices being supplied with hydraulic drive. An experimental model of the modular complex has been developed. 2 refs.

  10. Deposit and scale prevention methods in thermal sea water desalination

    International Nuclear Information System (INIS)

    Froehner, K.R.

    1977-01-01

    Introductory remarks deal with the 'fouling factor' and its influence on the overall heat transfer coefficient of MSF evaporators. The composition of the matter dissolved in sea water and its thermal and chemical properties lead to the formation of alkaline scale, or even hard sulphate scale, on the heat exchanger tube walls, and can seriously hamper plant operation and economics. Among the scale prevention methods are 1) pH control by acid dosing (decarbonation), 2) 'threshold treatment' by dosing of inhibitors of different kinds, 3) mechanical cleaning by sponge rubber balls guided through the heat exchanger tubes, in general combined with methods 1 or 2, and 4) application of a slurry of scale crystal germs (seeding). Mention is made of several other scale prevention proposals. The problems encountered with marine life (suspension, deposits, growth) in desalination plants are touched upon. (orig.)

  11. Methods of scaling threshold color difference using printed samples

    Science.gov (United States)

    Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier

    2012-01-01

    A series of printed samples on a semi-gloss paper substrate, with color differences of threshold magnitude, was prepared for scaling the visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against these Z-scores. The visual color differences were thus obtained and checked with the STRESS factor. The results indicated that only the scales changed; the relative scales between pairs in the data were preserved.
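
    The probability-to-Z-score step can be shown directly; the proportions below are hypothetical, and the inverse normal CDF (probit) is the usual transform for putting such visual data on an interval scale.

    ```python
    from scipy.stats import norm

    # Convert observed probabilities of perceptibility into Z-scores via
    # the inverse normal CDF. The proportions are made-up examples.
    p_perceptible = [0.16, 0.31, 0.50, 0.69, 0.84]
    z_scores = norm.ppf(p_perceptible)
    print([round(z, 2) for z in z_scores])  # [-0.99, -0.5, 0.0, 0.5, 0.99]
    ```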

  12. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  13. Complexity analysis of accelerated MCMC methods for Bayesian inversion

    International Nuclear Information System (INIS)

    Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M

    2013-01-01

    The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution is essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
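
    As a sketch of the ‘plain’ MCMC baseline (not the multi-level algorithms analyzed in the paper), the following toy example infers a scalar coefficient, with a closed-form forward map standing in for the expensive PDE solve; the prior, noise level, and proposal scale are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(u):
        # Stand-in for the PDE solution functional (the costly step).
        return 1.0 / u

    u_true, sigma = 2.0, 0.05
    d = forward(u_true) + sigma * rng.normal()   # noisy observation

    def log_post(u):
        if u <= 0:
            return -np.inf
        misfit = (d - forward(u))**2 / (2 * sigma**2)
        prior = (np.log(u))**2 / 2               # log-normal prior
        return -misfit - prior

    # Random-walk Metropolis sampling of the posterior.
    u, samples = 1.0, []
    for _ in range(20000):
        prop = u + 0.1 * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(u):
            u = prop
        samples.append(u)
    print(np.mean(samples[5000:]))               # posterior mean, near u_true
    ```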

  14. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    structure and chemical purity of 99⋅1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords. Sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. 1. Introduction. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  15. Measuring the Complexity of Urban Form and Design

    OpenAIRE

    Boeing, Geoff

    2017-01-01

    Complex systems have become a popular lens for conceptualizing cities, and complexity has substantial implications for urban performance and resilience. This paper develops a typology of methods and measures for assessing the complexity of the built form at the scale of urban design. It extends quantitative methods from urban planning, network science, ecosystems studies, fractal geometry, and information theory to the physical urban form and the analysis of qualitative human experience. Metr...

  16. Evaluating high risks in large-scale projects using an extended VIKOR method under a fuzzy environment

    Directory of Open Access Journals (Sweden)

    S. Ebrahimnejad

    2012-04-01

    The complexity of large-scale projects has led to numerous risks in their life cycle. This paper presents a new risk evaluation approach in order to rank the high risks in large-scale projects and improve the performance of these projects. It is based on fuzzy set theory, which is an effective tool to handle uncertainty, and on an extended VIKOR method, one of the well-known multiple criteria decision-making (MCDM) methods. The proposed decision-making approach integrates knowledge and experience acquired from professional experts, since they perform the risk identification, as well as their subjective judgments of the performance rating for high risks in terms of conflicting criteria, including probability, impact, quickness of reaction toward risk, event measure quantity and event capability criteria. The most notable difference of the proposed VIKOR method from its traditional version is the use of fuzzy decision-matrix data to calculate the ranking index without the need to ask the experts. Finally, the proposed approach is illustrated with a real-case study of an Iranian power plant project, and the associated results are compared with two well-known decision-making methods under a fuzzy environment.
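
    The core of the crisp VIKOR ranking index, which the paper extends to fuzzy data, can be sketched as follows; the decision matrix, weights, and strategy parameter v are hypothetical, and all criteria are treated as benefit-type for brevity.

    ```python
    import numpy as np

    # Rows are risks (alternatives), columns are criteria.
    F = np.array([[0.7, 0.5, 0.9],
                  [0.4, 0.8, 0.6],
                  [0.9, 0.3, 0.5]])
    w = np.array([0.5, 0.3, 0.2])             # criterion weights
    v = 0.5                                   # group-utility strategy weight

    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    norm = (f_best - F) / (f_best - f_worst)  # normalized regret per criterion
    S = (w * norm).sum(axis=1)                # group utility
    R = (w * norm).max(axis=1)                # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    print(np.argsort(Q))                      # ranking, best (lowest Q) first
    ```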

  17. Dimensional scaling for quasistationary states

    International Nuclear Information System (INIS)

    Kais, S.; Herschbach, D.R.

    1993-01-01

    Complex energy eigenvalues which specify the location and width of quasibound or resonant states are computed to good approximation by a simple dimensional scaling method. As applied to bound states, the method involves minimizing an effective potential function in appropriately scaled coordinates to obtain exact energies in the D→∞ limit, then computing approximate results for D=3 by a perturbation expansion in 1/D about this limit. For resonant states, the same procedure is used, with the radial coordinate now allowed to be complex. Five examples are treated: the repulsive exponential potential (e^(-r)); a squelched harmonic oscillator (r^2 e^(-r)); the inverted Kratzer potential (r^(-1) repulsion plus r^(-2) attraction); the Lennard-Jones potential (r^(-12) repulsion, r^(-6) attraction); and quasibound states for the rotational spectrum of the hydrogen molecule (X 1Σg+, v=0, J=0 to 50). Comparisons with numerical integrations and other methods show that the much simpler dimensional scaling method, carried to second order (terms in 1/D^2), yields good results over an extremely wide range of the ratio of level widths to spacings. Other methods have not yet evaluated the very broad H2 rotational resonances reported here (J>39), which lie far above the centrifugal barrier
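
    The bound-state side of the recipe is easy to demonstrate: the sketch below minimizes the scaled effective potential for a hydrogen-like problem, chosen here as an assumption because the answer is known exactly; for resonances the same stationarity condition would be solved at complex r, with Re(E) giving the position and -2 Im(E) the width.

    ```python
    from scipy.optimize import minimize_scalar

    # D -> infinity limit: the energy is the minimum of the effective
    # potential in scaled coordinates. For a hydrogen-like system (scaled
    # atomic units) V_eff(r) = 1/(2 r^2) - 1/r, with minimum E = -1/2 at r = 1.
    V_eff = lambda r: 1.0 / (2.0 * r**2) - 1.0 / r
    res = minimize_scalar(V_eff, bounds=(0.1, 10.0), method='bounded')
    print(res.x, res.fun)   # ~1.0, ~-0.5
    ```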

  18. Computational RNA secondary structure design: empirical complexity and improved methods

    Directory of Open Access Journals (Sweden)

    Condon Anne

    2007-01-01

    Background: We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to understand better the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results: To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and the location of the primary structure constraints when designing structures and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion: Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.

  19. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    Science.gov (United States)

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, we introduce the bootstrap method and propose a numerical discrimination of the transition type.
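
    A minimal illustration of the kernel idea (not the authors' code): once relaxation data are rescaled by trial parameters, a Gaussian process fits the scaling function nonparametrically, and its log marginal likelihood can serve as a score of the quality of the data collapse. The synthetic data below are assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic stand-in for already-rescaled (collapsed) relaxation data.
    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0, 3, 60))
    y = np.exp(-x) * np.sin(3 * x) + 0.03 * rng.normal(size=60)

    # Nonparametric fit of the scaling function; no model function assumed.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gp.fit(x[:, None], y)
    score = gp.log_marginal_likelihood_value_   # higher = better collapse
    print(round(score, 2))
    ```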

  20. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling subnets is an important topic in complex network research. The sampling method influences the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover the local structure at the same time. The experiments indicate that this novel sampling method can keep the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.
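
    For context, a plain snowball-sampling sketch; the RMSC variant's random restarts and Cohen-style selection are not reproduced here, and the graph model, seed count, wave depth, and fan-out are arbitrary choices.

    ```python
    import random
    import networkx as nx

    def snowball_sample(G, seeds=2, waves=2, k=3):
        # Start from random seeds, then expand by sampling up to k
        # neighbors of each frontier node for a fixed number of waves.
        sampled = set(random.sample(list(G.nodes), seeds))
        frontier = set(sampled)
        for _ in range(waves):
            nxt = set()
            for u in frontier:
                nbrs = list(G.neighbors(u))
                nxt.update(random.sample(nbrs, min(k, len(nbrs))))
            frontier = nxt - sampled
            sampled |= nxt
        return G.subgraph(sampled)

    random.seed(42)
    G = nx.barabasi_albert_graph(500, 3, seed=42)
    sub = snowball_sample(G)
    print(sub.number_of_nodes(), sub.number_of_edges())
    ```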

  1. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    Science.gov (United States)

    Sikkandar Basha, Nazareen

    The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams, numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interactions are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. The elicitation of requirements in most large-scale system projects is subject to creep in time and cost, due to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, thus increasing cost and time. Past research has shown that rigorous approaches such as value-based design can be used to control such creep. But before rigorous approaches can be used, the decision maker should have a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework that can aid in the design and development of LSCES. It can support understanding of the system and decision making to minimize the value gap due to requirements creep, by eliminating the ambiguity which arises during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep, in terms of cost and time, using the Cynefin framework. These

  2. Statistical methods for anomaly detection in the complex process; Methodes statistiques de detection d'anomalies de fonctionnement dans les processus complexes

    Energy Technology Data Exchange (ETDEWEB)

    Al Mouhamed, Mayez

    1977-09-15

    In a number of complex physical systems the accessible signals are often characterized by random fluctuations about a mean value. The fluctuations (signature) often transmit information about the state of the system that the mean value cannot predict. This study was undertaken to elaborate statistical methods of anomaly detection on the basis of signature analysis of the noise inherent in the process. The algorithm presented first learns the characteristics of normal operation of a complex process. It then detects small deviations from the normal behavior. The algorithm can be implemented in a medium-sized computer for on-line application. (author)
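
    A hedged sketch of the two-phase scheme: learn the statistics of 'normal' signatures, then flag small deviations. Representing signatures as feature vectors and using a Mahalanobis-distance threshold are simplifying assumptions, not the report's exact algorithm.

    ```python
    import numpy as np

    # Learning phase: feature vectors extracted from process noise during
    # normal operation (synthetic here), summarized by mean and covariance.
    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(500, 4))
    mu = normal.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

    def is_anomalous(x, threshold=5.0):
        # Detection phase: flag signatures far from the learned baseline
        # (squared Mahalanobis distance against a hypothetical threshold).
        d = x - mu
        return float(d @ cov_inv @ d) > threshold**2

    print(is_anomalous(rng.normal(0, 1, 4)))             # typically False
    print(is_anomalous(np.array([4.0, 4.0, 4.0, 4.0])))  # True
    ```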

  3. Method for analysis the complex grounding cables system

    International Nuclear Information System (INIS)

    Ackovski, R.; Acevski, N.

    2002-01-01

    A new iterative method for the analysis of the performance of complex grounding systems (GS) in underground cable power networks with coated and/or uncoated metal-sheathed cables is proposed in this paper. The analyzed grounding system consists of the grounding grid of a high voltage (HV) supplying transformer station (TS), middle voltage/low voltage (MV/LV) consumer TSs, and an arbitrary number of power cables connecting them. The derived method takes into consideration the voltage drops in the cable sheaths and the mutual influence among all earthing electrodes due to the resistive coupling through the soil. By means of the presented method it is possible to calculate the main grounding system performance figures, such as earth electrode potentials under short-circuit-to-ground fault conditions, the earth fault current distribution in the whole complex grounding system, step and touch voltages in the vicinity of the earthing electrodes dissipating the fault current into the earth, impedances (resistances) to ground of all possible fault locations, apparent shield impedances to ground of all power cables, etc. The proposed method is based on the admittance summation method [1] and is appropriately extended so that it takes into account the resistive coupling between the elements of the GS. (Author)

  4. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstruction was tested in a key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main time periods of the Holocene and features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of this work is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of restored paleolakes were determined from the thickness and territorial confinement of decay ooze. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed. In the reconstructions of the original, indigenous flora we relied on data from palynological studies conducted in the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the

  5. Formal methods applied to industrial complex systems implementation of the B method

    CERN Document Server

    Boulanger, Jean-Louis

    2014-01-01

    This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from

  6. Global-Scale Hydrology: Simple Characterization of Complex Simulation

    Science.gov (United States)

    Koster, Randal D.

    1999-01-01

    Atmospheric general circulation models (AGCMs) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points at the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.

  7. Complex networks principles, methods and applications

    CERN Document Server

    Latora, Vito; Russo, Giovanni

    2017-01-01

    Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among the others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...

  8. A new approach of chaos and complex network method to study fluctuation and phase transition in nuclear collision at high energy

    Energy Technology Data Exchange (ETDEWEB)

    Bhaduri, Susmita; Bhaduri, Anirban; Ghosh, Dipak [Deepa Ghosh Research Foundation, Kolkata (India)

    2017-06-15

    In the endeavour to study fluctuation and a signature of phase transition in ultrarelativistic nuclear collisions during the process of particle production, an approach based on chaos and complex networks is proposed. In this work we have attempted an exhaustive study of pion fluctuation in η space, φ space, their cross-correlation, and finally two-dimensional fluctuation in terms of the scaling of the void probability distribution. The analysis is done on the η values and their corresponding φ values extracted from the ³²S-Ag/Br interaction at an incident energy of 200 GeV per nucleon. The methods used are Multifractal Detrended Cross-Correlation Analysis (MF-DXA) and a chaos-based rigorous complex network method, the Visibility Graph. The analysis reveals that the highest degree of cross-correlation between pseudorapidity and azimuthal angles exists in the most central region of the interaction. The analysis further shows that the two-dimensional void distribution corresponding to the η-φ space reveals a strong scaling behaviour. Both the cross-correlation coefficient of MF-DXA and the PSVG (Power of the Scale-freeness in Visibility Graph, which is implicitly connected with the Hurst exponent) can be effectively used for the quantitative assessment of pion fluctuation in a very precise manner, and have the capability to assess the tendency of approaching criticality for phase transitions. (orig.)
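
    The visibility-graph construction at the heart of the PSVG measure is compact enough to sketch: two samples are linked when every intermediate point lies strictly below the straight line connecting them. The random series and the degree summary below are illustrative only.

    ```python
    import numpy as np

    def visibility_graph(y):
        # Natural visibility criterion: link (i, j) if all points between
        # them lie strictly below the line from (i, y_i) to (j, y_j).
        n = len(y)
        edges = []
        for i in range(n):
            for j in range(i + 1, n):
                if all(y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                       for k in range(i + 1, j)):
                    edges.append((i, j))
        return edges

    series = np.random.default_rng(7).random(50)
    edges = visibility_graph(series)
    degrees = np.bincount(np.array(edges).ravel(), minlength=len(series))
    # The PSVG exponent would be the slope of log P(k) vs log k of this
    # degree distribution; here we only report summary counts.
    print(len(edges), degrees.max())
    ```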

  9. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    International Nuclear Information System (INIS)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; Nørskov, Jens K.

    2017-01-01

    Surface reaction networks involving hydrocarbons exhibit enormous complexity, with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by training a Gaussian process on adsorption energies, using group-additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is used iteratively to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Finally, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete, given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.

  10. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively delineate the boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales, and the size of each scale, are still difficult to determine accurately, which severely restricts rapid information extraction from the imagery. A great deal of experimentation has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents a method for NDVI-assisted adaptive segmentation of remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the NDVI-based adaptive segmentation method can effectively create object boundaries for different ground objects in remote sensing images.
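
    NDVI itself is a one-line computation, NDVI = (NIR - Red) / (NIR + Red); the sketch below also shows the kind of similarity-threshold test the segmentation relies on. The reflectance values and the threshold are hypothetical.

    ```python
    import numpy as np

    # Near-infrared and red reflectance for a tiny 2x2 image patch.
    nir = np.array([[0.60, 0.55], [0.20, 0.18]])
    red = np.array([[0.10, 0.12], [0.15, 0.16]])
    ndvi = (nir - red) / (nir + red + 1e-12)    # epsilon guards divide-by-zero

    # Similarity test of the kind used to grow a region from a seed pixel.
    similar = np.abs(ndvi - ndvi[0, 0]) < 0.1   # hypothetical threshold
    print(np.round(ndvi, 2))
    print(similar)
    ```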

  11. A multi-scale method of mapping urban influence

    Science.gov (United States)

    Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters

    2009-01-01

    Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...

  12. Determining Complex Structures using Docking Method with Single Particle Scattering Data

    Directory of Open Access Journals (Sweden)

    Haiguang Liu

    2017-04-01

    Protein complexes are critical for many molecular functions. Due to their intrinsic flexibility and dynamics, the structures of complexes are more difficult to determine using conventional experimental methods than those of individual subunits. One of the major challenges is the crystallization of protein complexes. Using X-ray free electron lasers (XFELs), it is possible to collect scattering signals from non-crystalline protein complexes, but data interpretation is more difficult because of unknown orientations. Here, we propose a hybrid approach to determine protein complex structures by combining XFEL single particle scattering data with computational docking methods. Using simulated data, we demonstrate that a small set of single particle scattering data collected at random orientations can be used to distinguish the native complex structure from decoys generated using docking algorithms. The results also indicate that a small set of single particle scattering data is superior to a spherically averaged intensity profile in distinguishing complex structures. Given that XFEL experimental data are difficult to acquire and of low abundance, this hybrid approach should find wide application in data interpretation.

  13. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  14. Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks.

    Science.gov (United States)

    Valdez, Marc Andrew; Jaschke, Daniel; Vargas, David L; Carr, Lincoln D

    2017-12-01

    We quantify the emergent complexity of quantum states near quantum critical points on regular 1D lattices, via complex network measures based on quantum mutual information as the adjacency matrix, in direct analogy to quantifying the complexity of electroencephalogram or functional magnetic resonance imaging measurements of the brain. Using matrix product state methods, we show that network density, clustering, disparity, and Pearson's correlation obtain the critical point for both quantum Ising and Bose-Hubbard models to a high degree of accuracy in finite-size scaling for three classes of quantum phase transitions: Z₂, mean-field superfluid to Mott insulator, and a Berezinskii-Kosterlitz-Thouless crossover.

  16. Complex finite element sensitivity method for creep analysis

    International Nuclear Information System (INIS)

    Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry

    2015-01-01

    The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power-law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power-law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run; in contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties.
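
    ZFEM builds on the complex-variable (complex-step) derivative; the scalar version below shows the principle, though the paper's contribution is its finite element generalization, which is not reproduced here.

    ```python
    import numpy as np

    def complex_step_derivative(f, x, h=1e-30):
        # Perturb the input along the imaginary axis; the imaginary part of
        # the output gives the derivative with no subtractive cancellation.
        return np.imag(f(x + 1j * h)) / h

    f = lambda x: x**3 * np.sin(x)
    x0 = 1.3
    exact = 3 * x0**2 * np.sin(x0) + x0**3 * np.cos(x0)
    print(complex_step_derivative(f, x0), exact)  # agree to machine precision
    ```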

  17. Workshop on Recent Trends in Complex Methods for Partial Differential Equations

    CERN Document Server

    Celebi, A; Tutschke, Wolfgang

    1999-01-01

    This volume is a collection of manuscripts mainly originating from talks and lectures given at the Workshop on Recent Trends in Complex Methods for Partial Differential Equations held from July 6 to 10, 1998 at the Middle East Technical University in Ankara, Turkey, sponsored by The Scientific and Technical Research Council of Turkey and the Middle East Technical University. This workshop is a continuation of two workshops from 1988 and 1993 at the International Centre for Theoretical Physics in Trieste, Italy entitled Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations. Since classical complex analysis of one and several variables has a long tradition it is of high level. But most of its basic problems are solved nowadays so that within the last few decades it has lost more and more attention. The area of complex and functional analytic methods in partial differential equations, however, is still a growing and flourishing field, in particular as these ...

  18. A novel method for preparation of HAMLET-like protein complexes.

    Science.gov (United States)

    Permyakov, Sergei E; Knyazeva, Ekaterina L; Leonteva, Marina V; Fadeev, Roman S; Chekanov, Aleksei V; Zhadan, Andrei P; Håkansson, Anders P; Akatov, Vladimir S; Permyakov, Eugene A

    2011-09-01

    Some natural proteins induce tumor-selective apoptosis. α-Lactalbumin (α-LA), a milk calcium-binding protein, is converted into an antitumor form, called HAMLET/BAMLET, via partial unfolding and association with oleic acid (OA). Besides triggering multiple cell death mechanisms in tumor cells, HAMLET exhibits bactericidal activity against Streptococcus pneumoniae. The existing methods for preparation of active complexes of α-LA with OA employ neutral pH solutions, which greatly limit water solubility of OA. Therefore these methods suffer from low scalability and/or heterogeneity of the resulting α-LA - OA samples. In this study we present a novel method for preparation of α-LA - OA complexes using alkaline conditions that favor aqueous solubility of OA. The unbound OA is removed by precipitation under acidic conditions. The resulting sample, bLA-OA-45, bears 11 OA molecules and exhibits physico-chemical properties similar to those of BAMLET. Cytotoxic activities of bLA-OA-45 against human epidermoid larynx carcinoma and S. pneumoniae D39 cells are close to those of HAMLET. Treatment of S. pneumoniae with bLA-OA-45 or HAMLET induces depolarization and rupture of the membrane. The cells are markedly rescued from death upon pretreatment with an inhibitor of Ca(2+) transport. Hence, the activation mechanisms of S. pneumoniae death are analogous for these two complexes. The developed express method for preparation of active α-LA - OA complex is high-throughput and suited for development of other protein complexes with low-molecular-weight amphiphilic substances possessing valuable cytotoxic properties. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  19. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    This paper aims to investigate information-theoretic network complexity measures, which have already been used intensively in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far, but many of them lack a meaningful interpretation; for example, it is often unclear which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between selected information measures for graphs by performing a large-scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure, called the topological information content of a graph, and others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, high uniqueness is an important and desirable property when designing novel topological descriptors that have the potential to be applied to large chemical databases.
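
    A partition-based graph entropy in the spirit of the topological information content can be sketched briefly. The classical measure partitions vertices into automorphism orbits; partitioning by degree, as below, is a coarse surrogate used here only for brevity.

    ```python
    import math
    from collections import Counter
    import networkx as nx

    def partition_entropy(G):
        # Shannon entropy of the vertex partition (here: by degree).
        sizes = Counter(dict(G.degree()).values()).values()
        n = G.number_of_nodes()
        return -sum((s / n) * math.log2(s / n) for s in sizes)

    print(partition_entropy(nx.cycle_graph(6)))  # 0.0: all vertices alike
    print(partition_entropy(nx.path_graph(6)))   # > 0: ends differ from middle
    ```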

  20. Concussion As a Multi-Scale Complex System: An Interdisciplinary Synthesis of Current Knowledge

    Directory of Open Access Journals (Sweden)

    Erin S. Kenzie

    2017-09-01

    Traumatic brain injury (TBI) has been called "the most complicated disease of the most complex organ of the body" and is an increasingly high-profile public health issue. Many patients report long-term impairments following even "mild" injuries, but reliable criteria for diagnosis and prognosis are lacking. Every clinical trial for TBI treatment to date has failed to demonstrate reliable and safe improvement in outcomes, and the existing body of literature is insufficient to support the creation of a new classification system. Concussion, or mild TBI, is a highly heterogeneous phenomenon, and numerous factors interact dynamically to influence an individual's recovery trajectory. Many of the obstacles faced in research and clinical practice related to TBI and concussion, including the observed heterogeneity, arguably stem from the complexity of the condition itself. To improve understanding of this complexity, we review the current state of research through the lens provided by the interdisciplinary field of systems science, which has been increasingly applied to biomedical issues. The review was conducted iteratively, through multiple phases of literature review, expert interviews, and systems diagramming, and represents the first phase in an effort to develop systems models of concussion. The primary focus of this work was to examine concepts and ways of thinking about concussion that currently impede research design and block advancements in care of TBI. Results are presented in the form of a multi-scale conceptual framework intended to synthesize knowledge across disciplines, improve research design, and provide a broader, multi-scale model for understanding concussion pathophysiology, classification, and treatment.

  1. Using mixed methods to develop and evaluate complex interventions in palliative care research.

    Science.gov (United States)

    Farquhar, Morag C; Ewing, Gail; Booth, Sara

    2011-12-01

    There is increasing interest in combining qualitative and quantitative research methods to provide comprehensiveness and greater knowledge yield. Mixed methods are valuable in the development and evaluation of complex interventions. They are therefore particularly valuable in palliative care research, where the majority of interventions are complex and the identification of outcomes particularly challenging. This paper aims to introduce the role of mixed methods in the development and evaluation of complex interventions in palliative care, and how they may be used in palliative care research. The paper defines mixed methods and outlines why and how mixed methods are used to develop and evaluate complex interventions, with a pragmatic focus on design, data collection issues and data analysis. Useful texts are signposted and illustrative examples provided of mixed-method studies in palliative care, including a detailed worked example of the development and evaluation of a complex intervention in palliative care for breathlessness. Key challenges to conducting mixed methods in palliative care research are identified in relation to data collection, data integration in analysis, costs and dissemination, and how these might be addressed. The development and evaluation of complex interventions in palliative care benefit from the application of mixed methods. Mixed methods enable better understanding of whether and how an intervention works (or does not work) and inform the design of subsequent studies. However, they can be challenging: mixed-method studies in palliative care will benefit from working with agreed protocols, multidisciplinary teams and engaging staff with appropriate skill sets.

  2. Lattice Boltzmann methods for complex micro-flows: applicability and limitations for practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Suga, K, E-mail: suga@me.osakafu-u.ac.jp [Department of Mechanical Engineering, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531 (Japan)

    2013-06-15

    The extensive evaluation studies of the lattice Boltzmann method for micro-scale flows (μ-flow LBM) by the author's group are summarized. For the two-dimensional test cases, force-driven Poiseuille flows, Couette flows, a combined nanochannel flow, and flows in a nanochannel with a square or triangular cylinder are discussed. The three-dimensional (3D) test cases are nano-mesh flows and a flow between 3D bumpy walls. The reference data for the complex test flow geometries are from the molecular dynamics simulations of the Lennard-Jones fluid by the author's group. The focused flows are mainly in the slip and a part of the transitional flow regimes at Kn < 1. The evaluated schemes of the μ-flow LBMs are the lattice Bhatnagar-Gross-Krook and the multiple-relaxation-time LBMs with several boundary conditions and discrete velocity models. The effects of the discrete velocity models, the wall boundary conditions, the near-wall correction models of the molecular mean free path and the regularization process are discussed to confirm the applicability and the limitations of the μ-flow LBMs for complex flow geometries. (invited review)
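
    For orientation, a minimal D2Q9 lattice BGK sketch of the kind of solver evaluated above: body-force-driven Poiseuille flow between bounce-back walls. No slip-flow wall models or mean-free-path corrections are included, so this is the continuum (Kn -> 0) baseline rather than the μ-flow variants discussed; the grid size, relaxation time and forcing are arbitrary choices.

    ```python
    import numpy as np

    nx, ny, tau, g = 32, 17, 0.8, 1e-6            # grid, relaxation time, force
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])   # D2Q9 velocities
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)              # lattice weights
    opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]                     # opposite directions

    f = np.ones((9, ny, nx)) * w[:, None, None]           # start at rest, rho = 1
    wall = np.zeros((ny, nx), bool)
    wall[0] = wall[-1] = True                             # top and bottom walls

    def equilibrium(rho, ux, uy):
        cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
        usq = ux**2 + uy**2
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    for step in range(3000):
        rho = f.sum(0)
        # Body force via a simple velocity shift in the equilibrium.
        ux = (f * c[:, 0, None, None]).sum(0) / rho + tau * g / rho
        uy = (f * c[:, 1, None, None]).sum(0) / rho
        f += -(f - equilibrium(rho, ux, uy)) / tau        # BGK collision
        for i in range(9):                                # streaming
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
        f[:, wall] = f[opp][:, wall]                      # bounce-back at walls

    print(ux[ny // 2].mean())   # centreline velocity of the parabolic profile
    ```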

  4. Molecular photoionization using the complex Kohn variational method

    International Nuclear Information System (INIS)

    Lynch, D.L.; Schneider, B.I.

    1992-01-01

    We have applied the complex Kohn variational method to the study of molecular photoionization processes. This requires electron-ion scattering calculations enforcing incoming-wave boundary conditions. The sensitivity of these results to the choice of the cutoff function in the Kohn method has been studied, and we have demonstrated that a simple matching of the irregular function to a linear combination of regular functions produces accurate scattering phase shifts.

  5. The scaling of stress distribution under small scale yielding by T-scaling method and application to prediction of the temperature dependence on fracture toughness

    International Nuclear Information System (INIS)

    Ishihara, Kenichi; Hamada, Takeshi; Meshii, Toshiyuki

    2017-01-01

    In this paper, a new method for scaling the crack-tip stress distribution under small-scale yielding conditions is proposed, named the T-scaling method. This method makes it possible to identify the different stress distributions of materials with different tensile properties but an identical load in terms of K or J. Then, by assuming that the temperature dependence of a material is represented by the temperature dependence of its stress-strain relationship, a method is proposed to predict the fracture load at an arbitrary temperature from the already known fracture load at a reference temperature. This method combines the T-scaling method with the knowledge that the fracture stress for slip-induced cleavage fracture is temperature independent. Once the fracture load is predicted, the fracture toughness J_c at the temperature under consideration can be evaluated by running an elastic-plastic finite element analysis. Finally, the above-mentioned framework for predicting the J_c temperature dependence of a material in the ductile-to-brittle transition temperature region was validated for the 0.55% carbon steel JIS S55C. The proposed framework seems to offer a possibility of solving the problem the master curve faces in the relatively higher temperature region, by requiring only tensile tests. (author)

  6. A Low Complexity Discrete Radiosity Method

    OpenAIRE

    Chatelier , Pierre Yves; Malgouyres , Rémy

    2006-01-01

    Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to use a discretization of a scene into voxels and perform some discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method may be as precise as a patch-based radiosity using hemicube computation for form factors, but it lowers the overall theoretical complexity to O(N log N) + O(N), where the O(N) is largel...

  7. Etoile Project : Social Intelligent ICT-System for very large scale education in complex systems

    Science.gov (United States)

    Bourgine, P.; Johnson, J.

    2009-04-01

    The project will devise new theory and implement new ICT-based methods of delivering high-quality low-cost postgraduate education to many thousands of people in a scalable way, with the cost of each extra student being negligible: a Socially Intelligent Resource Mining system to gather large volumes of high-quality educational resources from the internet; new methods to deconstruct these to produce a semantically tagged Learning Object Database; a Living Course Ecology to support the creation and maintenance of evolving course materials; systems to deliver courses; and a 'socially intelligent assessment system'. The system will be tested on one to ten thousand postgraduate students in Europe working towards the Complex Systems Society's title of European PhD in Complex Systems. Étoile will have a very high impact both scientifically and socially by (i) the provision of new scalable ICT-based methods for providing very low-cost scientific education, (ii) the creation of new mathematical and statistical theory for the multiscale dynamics of complex systems, (iii) the provision of a working example of adaptation and emergence in complex socio-technical systems, and (iv) making a major educational contribution to European complex systems science and its applications.

  8. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION, by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ...A typical iteration can be partitioned so that ..., where B is an m × m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  9. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first conducted a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C data and single-cell Hi-C data without ad hoc parameters. We also designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform those in the literature.

  10. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    Science.gov (United States)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
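
    The record's QMR variant is not reproduced here, but the key idea of exploiting A = A^T, replacing the Hermitian inner product with the unconjugated bilinear form, can be illustrated with the closely related conjugate-orthogonal CG (COCG) iteration. A minimal sketch, assuming a well-conditioned complex symmetric system:

    ```python
    import numpy as np

    def cocg(A, b, tol=1e-10, maxiter=500):
        """Conjugate-orthogonal CG for complex symmetric A (A == A.T, not Hermitian).
        Identical to CG except every inner product is the unconjugated bilinear
        form u.T @ v, which is what complex symmetry pairs naturally with."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rho = r @ r                      # unconjugated: sum(r_i**2), complex-valued
        for k in range(1, maxiter + 1):
            Ap = A @ p
            alpha = rho / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                return x, k
            rho, rho_old = r @ r, rho
            p = r + (rho / rho_old) * p
        return x, maxiter

    # A small complex symmetric (Helmholtz-like) test system
    n = 50
    rng = np.random.default_rng(0)
    M = rng.standard_normal((n, n))
    A = M + M.T + (n + 2j) * np.eye(n)   # symmetric, complex, well conditioned
    b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x, iters = cocg(A, b)
    print(iters, np.linalg.norm(A @ x - b))
    ```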

  11. Complexity, Methodology and Method: Crafting a Critical Process of Research

    Science.gov (United States)

    Alhadeff-Jones, Michel

    2013-01-01

    This paper defines a theoretical framework aiming to support the actions and reflections of researchers looking for a "method" in order to critically conceive the complexity of a scientific process of research. First, it starts with a brief overview of the core assumptions framing Morin's "paradigm of complexity" and Le…

  12. A dissipative particle dynamics method for arbitrarily complex geometries

    Science.gov (United States)

    Li, Zhen; Bian, Xin; Tang, Yu-Hang; Karniadakis, George Em

    2018-02-01

    Dissipative particle dynamics (DPD) is an effective Lagrangian method for modeling complex fluids in the mesoscale regime, but so far it has been limited to relatively simple geometries. Here, we formulate a local detection method for DPD involving arbitrarily shaped three-dimensional domains. By introducing an indicator variable of boundary volume fraction (BVF) for each fluid particle, the boundary of arbitrary-shape objects is detected on-the-fly for the moving fluid particles using only the local particle configuration. This approach therefore eliminates the need for an analytical description of the boundary and geometry of objects in DPD simulations and makes it possible to load the geometry of a system directly from experimental images or computer-aided designs/drawings. More specifically, the BVF of a fluid particle is defined by the weighted summation over its neighboring particles within a cutoff distance. Wall penetration is inferred from the value of the BVF and prevented by a predictor-corrector algorithm. The no-slip boundary condition is achieved by employing effective dissipative coefficients for liquid-solid interactions. Quantitative evaluations of the new method are performed for the plane Poiseuille flow, the plane Couette flow and the Wannier flow in a cylindrical domain, and compared with their corresponding analytical solutions and a (high-order) spectral element solution of the Navier-Stokes equations. We verify that the proposed method yields correct no-slip boundary conditions for velocity and generates negligible fluctuations of density and temperature in the vicinity of the wall surface. Moreover, we construct a very complex 3D geometry - the "Brown Pacman" microfluidic device - to explicitly demonstrate how to construct a DPD system with complex geometry directly from loading a graphical image. Subsequently, we simulate the flow of a surfactant solution through this complex microfluidic device using the new method.
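
    As an illustration of the boundary-volume-fraction idea, a minimal sketch follows; the Lucy-type weight function, the normalisation, and the flagging threshold are assumptions for illustration, not the paper's exact choices:

    ```python
    import numpy as np

    def boundary_volume_fraction(fluid_xyz, solid_xyz, rc=1.0):
        """For each fluid particle, a kernel-weighted sum over *solid* neighbours
        within cutoff rc, normalised by the sum over all neighbours."""
        def w(r):                                    # w(0) = 1, w(rc) = 0
            return np.where(r < rc, (1.0 + 3.0 * r / rc) * (1.0 - r / rc) ** 3, 0.0)

        bvf = np.empty(len(fluid_xyz))
        for i, xi in enumerate(fluid_xyz):
            ws = w(np.linalg.norm(solid_xyz - xi, axis=1)).sum()
            wf = w(np.linalg.norm(fluid_xyz - xi, axis=1)).sum() - 1.0  # drop self
            bvf[i] = ws / (ws + wf) if ws + wf > 0.0 else 0.0
        return bvf

    # Fluid slab above a wall of frozen particles below z = 0
    rng = np.random.default_rng(1)
    fluid = rng.uniform([0, 0, 0.05], [5, 5, 3], (2000, 3))
    wall = rng.uniform([0, 0, -1], [5, 5, 0], (2000, 3))
    bvf = boundary_volume_fraction(fluid, wall)
    print((bvf > 0.25).sum(), "particles flagged near the wall")  # run corrector here
    ```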

  13. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4)

  14. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4)

  15. Measuring Early Communication in Spanish Speaking Children: The Communication Complexity Scale in Peru.

    Science.gov (United States)

    Atwood, Erin; Brady, Nancy C; Esplund, Amy

    There is a great need in the United States to develop presymbolic evaluation tools that are widely available and accurate for individuals who come from a bilingual and/or multicultural setting. The Communication Complexity Scale (CCS) is a measure that evaluates expressive presymbolic communication, including gestures, vocalizations and eye gaze. A study of the effectiveness of this tool in a Spanish-speaking environment was undertaken to determine the applicability of the CCS with Spanish-speaking children. Methods & Procedures: In 2011-2012, researchers from the University of Kansas and Centro Ann Sullivan del Perú (CASP) investigated communication in a cohort of 71 young Spanish-speaking children with developmental disabilities and a documented history of self-injurious, stereotyped and aggressive behaviors. Communication was assessed first by parental report with translated versions of the Communication and Symbolic Behavior Scales (CSBS), a well-known assessment of early communication, and then eleven months later with the CCS. We hypothesized that the CCS and the CSBS measures would be significantly correlated in this population of Spanish-speaking children. The CSBS scores from time 1, with a mean participant age of 41 months, were found to have a strong positive relationship to the CCS scores obtained at time 2, with a mean participant age of 52 months. The CCS is thus strongly correlated with a widely accepted measure of early communication. These findings support the validity of the Spanish version of the CCS and demonstrate its usefulness for children from another culture and for children in a Spanish-speaking environment.

  16. Dimensionality and scaling properties of the Patient Categorisation Tool in patients with complex rehabilitation needs following acquired brain injury

    Directory of Open Access Journals (Sweden)

    Richard J. Siegert

    2018-03-01

    Objective: To investigate the scaling properties of the Patient Categorisation Tool (PCAT) as an instrument to measure complexity of rehabilitation needs. Design: Psychometric analysis in a multicentre cohort from the UK national clinical database. Patients: A total of 8,222 patients admitted for specialist inpatient rehabilitation following acquired brain injury. Methods: Dimensionality was explored using principal components analysis with Varimax rotation, followed by Rasch analysis on a random sample of n = 500. Results: Principal components analysis identified 3 components explaining 50% of variance. The partial credit Rasch model was applied to the 17-item PCAT scale using a “super-items” methodology based on the principal components analysis results. Two out of 5 initially created super-items displayed signs of local dependency, which significantly affected the estimates. They were combined into a single super-item, resulting in satisfactory model fit and unidimensionality. Differential item functioning (DIF) of 2 super-items was addressed by splitting between age groups (<65 and ≥65 years) to produce the best model fit (χ2/df = 54.72, p = 0.235) and reliability (Person Separation Index (PSI) = 0.79). Ordinal-to-interval conversion tables were produced. Conclusion: The PCAT has satisfied expectations of the unidimensional Rasch model in the current sample after minor modifications, and demonstrated acceptable reliability for individual assessment of rehabilitation complexity.

  17. An Extended Newmark-FDTD Method for Complex Dispersive Media

    Directory of Open Access Journals (Sweden)

    Yu-Qiang Zhang

    2018-01-01

    Based on polarizability in the form of a complex quadratic rational function, a novel finite-difference time-domain (FDTD) approach combined with the Newmark algorithm is presented for dealing with a complex dispersive medium. In this paper, the time-stepping equation of the polarization vector is derived by applying the Newmark algorithm simultaneously to the two sides of a second-order time-domain differential equation, obtained from the relation between the polarization vector and the electric field intensity in the frequency domain by the inverse Fourier transform. Then, its accuracy and stability are discussed from the two aspects of theoretical analysis and numerical computation. It is observed that this method possesses the advantages of high accuracy, high stability, and a wide application scope, and can thus be applied to the treatment of many complex dispersion models, including the complex conjugate pole residue model, the critical point model, the modified Lorentz model, and the complex quadratic rational function.

  18. Simulating Complex, Cold-region Process Interactions Using a Multi-scale, Variable-complexity Hydrological Model

    Science.gov (United States)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2017-12-01

    Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated due to frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally increased soil moisture, thus increasing plant growth that in turn subsequently impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long term process studies and the current cold regions literature allows for comparison of process representations and, importantly, their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner will hopefully yield important insights and aid in the development of improved process representations.

  19. Canopy BRF simulation of forest with different crown shape and height in larger scale based on Radiosity method

    Science.gov (United States)

    Song, Jinling; Qu, Yonghua; Wang, Jindi; Wan, Huawei; Liu, Xiaoqing

    2007-06-01

    The radiosity method is based on computer simulation of the 3D structure of real vegetation, such as leaves, branches and stems, which are composed of many facets. Using this method we can simulate the canopy reflectance and its bidirectional distribution for a vegetation canopy in the visible and NIR regions. But as vegetation becomes more complex, more facets are needed to compose it, so large memory and long computation times for the view factors are required, which are the choke points of using the radiosity method to calculate the canopy BRF of large-scale vegetation scenes. We derived a new method to solve this problem; the main idea is to abstract the vegetation crown shapes and to simplify their structures, which lessens the number of facets. The facets are given optical properties according to the reflectance, transmission and absorption of the real-structure canopy. Based on the above work, we can simulate the canopy BRF of mixed scenes with different vegetation species at the large scale. In this study, taking broadleaf trees as an example and based on their structural characteristics, we abstracted their crowns as ellipsoid shells and simulated the canopy BRF in the visible and NIR regions of a large-scale scene with ellipsoids of different crown shapes and heights. From this study we can conclude that LAI, LAD, the gap probability, and the sunlit and shaded surfaces are the most important parameters for simulating the simplified vegetation canopy BRF, and that the radiosity method can supply canopy BRF data under arbitrary conditions for our research.

  20. Test equating, scaling, and linking methods and practices

    CERN Document Server

    Kolen, Michael J

    2014-01-01

    This book provides an introduction to test equating, scaling, and linking, including those concepts and practical issues that are critical for developers and all other testing professionals.  In addition to statistical procedures, successful equating, scaling, and linking involves many aspects of testing, including procedures to develop tests, to administer and score tests, and to interpret scores earned on tests. Test equating methods are used with many standardized tests in education and psychology to ensure that scores from multiple test forms can be used interchangeably.  Test scaling is the process of developing score scales that are used when scores on standardized tests are reported. In test linking, scores from two or more tests are related to one another. Linking has received much recent attention, due largely to investigations of linking similarly named tests from different test publishers or tests constructed for different purposes. In recent years, researchers from the education, psychology, and...

  1. Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers

    KAUST Repository

    Sifaou, Houssem; Kammoun, Abla; Sanguinetti, Luca; Debbah, Merouane; Alouini, Mohamed-Slim

    2016-01-01

    This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS) endowed with M antennas communicates with K single-antenna user equipments (UEs). Particularly, we aim at reducing the complexity

  2. BOX-COX REGRESSION METHOD IN TIME SCALING

    Directory of Open Access Journals (Sweden)

    ATİLLA GÖKTAŞ

    2013-06-01

    The Box-Cox regression method with power transformation λj, for j = 1, 2, ..., k, can be used when the dependent variable and the error term of the linear regression model do not satisfy the continuity and normality assumptions. The choice of the optimum power transformation λj of Y that yields the smallest mean square error is discussed. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of the Box-Cox regression method are discussed in the context of differentiation and differential analysis of the time scale concept.
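
    A minimal sketch of the underlying transformation, using SciPy's maximum-likelihood estimate of the power λ (the data below are illustrative, not the paper's):

    ```python
    import numpy as np
    from scipy import stats

    # Skewed positive response: Box-Cox picks lambda by maximum likelihood
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)
    y = np.exp(0.3 * x + rng.normal(0, 0.3, 200))   # multiplicative errors -> skewed y

    y_bc, lam = stats.boxcox(y)                     # y_bc = (y**lam - 1) / lam
    print(f"estimated lambda = {lam:.3f}")

    # After the transformation, an ordinary least-squares fit is appropriate again
    slope, intercept, r, *_ = stats.linregress(x, y_bc)
    print(f"R^2 on transformed scale = {r**2:.3f}")
    ```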

  3. An efficient Korringa-Kohn-Rostoker method for ''complex'' lattices

    International Nuclear Information System (INIS)

    Yussouff, M.; Zeller, R.

    1980-10-01

    We present a modification of the exact KKR-band structure method which uses (a) a new energy expansion for structure constants and (b) only the reciprocal lattice summation. It is quite efficient and particularly useful for 'complex' lattices. The band structure of hexagonal-close-packed Beryllium at symmetry points is presented as an example of this method. (author)

  4. S-curve networks and an approximate method for estimating degree distributions of complex networks

    Science.gov (United States)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, a finite network model with bulk growth, called an S-curve network, is proposed. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes, and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
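
    The S-curve idea can be sketched by fitting a logistic curve to a finitely growing size series; the yearly counts below are made up for illustration and are not the paper's IPv4 statistics:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        """S-curve: size approaches the finite limit K as t grows."""
        return K / (1.0 + np.exp(-r * (t - t0)))

    # Hypothetical yearly sizes of a finitely growing network
    t = np.arange(12)
    n = np.array([5, 9, 16, 28, 46, 70, 95, 118, 133, 142, 147, 149], float)

    (K, r, t0), _ = curve_fit(logistic, t, n, p0=(n.max(), 0.5, t.mean()))
    print(f"growth limit K = {K:.0f}, rate r = {r:.2f}, midpoint t0 = {t0:.1f}")
    ```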

  5. S-curve networks and an approximate method for estimating degree distributions of complex networks

    International Nuclear Information System (INIS)

    Guo Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, a finite network model with bulk growth, called an S-curve network, is proposed. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes, and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.

  6. Investigation of complexing equilibrium of polyacrylate-anion with cadmium ions by polarographic method

    Energy Technology Data Exchange (ETDEWEB)

    Avlyanov, Zh K; Kabanov, N M; Zezin, A B

    1985-01-01

    A polarographic investigation of the cadmium complex with polyacrylate anion in aqueous KCl solution is carried out. It is shown that the polarographic method makes it possible to determine equilibrium constants of polymer-metal complex (PMC) formation even when the current magnitudes are determined by the kinetics of the PMC dissociation reaction. The equilibrium constants of stepwise complexation obtained yield a mean coordination number of the PAA-Cd complex of approximately 1.5, which coincides with the value obtained by the potentiometric method.

  7. Investigation of complexing equilibrium of polyacrylate-anion with cadmium ions by polarographic method

    International Nuclear Information System (INIS)

    Avlyanov, Zh.K.; Kabanov, N.M.; Zezin, A.B.

    1985-01-01

    A polarographic investigation of the cadmium complex with polyacrylate anion in aqueous KCl solution is carried out. It is shown that the polarographic method makes it possible to determine equilibrium constants of polymer-metal complex (PMC) formation even when the current magnitudes are determined by the kinetics of the PMC dissociation reaction. The equilibrium constants of stepwise complexation obtained yield a mean coordination number of the PAA-Cd complex of approximately 1.5, which coincides with the value obtained by the potentiometric method.

  8. Advanced differential quadrature methods

    CERN Document Server

    Zong, Zhi

    2009-01-01

    Modern Tools to Perform Numerical Differentiation. The original direct differential quadrature (DQ) method has been known to fail for problems with strong nonlinearity and material discontinuity as well as for problems involving singularity, irregularity, and multiple scales. But now researchers in applied mathematics, computational mechanics, and engineering have developed a range of innovative DQ-based methods to overcome these shortcomings. Advanced Differential Quadrature Methods explores new DQ methods and uses these methods to solve problems beyond the capabilities of the direct DQ method. After a basic introduction to the direct DQ method, the book presents a number of DQ methods, including complex DQ, triangular DQ, multi-scale DQ, variable order DQ, multi-domain DQ, and localized DQ. It also provides a mathematical compendium that summarizes Gauss elimination, the Runge-Kutta method, complex analysis, and more. The final chapter contains three codes written in the FORTRAN language, enabling readers to q...
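
    The direct DQ method the book builds on approximates derivatives as weighted sums of grid values, with weights obtained from Lagrange interpolation on the grid. A minimal sketch (the grid and test function are illustrative choices, not taken from the book):

    ```python
    import numpy as np

    def dq_first_derivative_matrix(x):
        """Direct DQ: f'(x_i) ~ sum_j D[i, j] * f(x_j), with weights from the
        Lagrange interpolation polynomials on the (arbitrary) grid x."""
        n = len(x)
        diff = x[:, None] - x[None, :]
        np.fill_diagonal(diff, 1.0)
        M = diff.prod(axis=1)              # M[i] = prod_{k != i} (x_i - x_k)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    D[i, j] = M[i] / ((x[i] - x[j]) * M[j])
            D[i, i] = -D[i].sum()          # rows of a derivative matrix sum to zero
        return D

    # Chebyshev-Gauss-Lobatto points avoid the Runge instability of uniform grids
    n = 16
    x = np.cos(np.pi * np.arange(n) / (n - 1))      # grid on [-1, 1]
    D = dq_first_derivative_matrix(x)
    err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
    print(f"max error of DQ derivative of sin(pi x): {err:.2e}")
    ```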

  9. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás; Dia, Ben Mansour

    2015-01-01

    This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncating this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales due to the addition of the sub-grid spectral scales.

  10. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás

    2015-03-01

    This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncating this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales due to the addition of the sub-grid spectral scales.

  11. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    Science.gov (United States)

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  12. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    Science.gov (United States)

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
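
    A minimal sketch of the annual basin water balance residual described above, with made-up illustrative numbers:

    ```python
    # Annual basin water balance: with net storage change assumed negligible,
    # ET is the residual ET = P - Q. All numbers are illustrative values.
    area_km2 = 12_000.0                  # basin area
    precip_mm = 850.0                    # basin-average annual precipitation depth
    discharge_m3 = 3.1e9                 # annual discharge volume at the outlet

    discharge_mm = discharge_m3 / (area_km2 * 1e6) * 1000.0  # volume -> depth
    et_mm = precip_mm - discharge_mm
    print(f"Q = {discharge_mm:.0f} mm/yr, ET = {et_mm:.0f} mm/yr")
    ```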

  13. Development of polygon elements based on the scaled boundary finite element method

    International Nuclear Information System (INIS)

    Chiong, Irene; Song Chongmin

    2010-01-01

    We aim to extend the scaled boundary finite element method to construct conforming polygon elements. The development of the polygonal finite element is highly anticipated in computational mechanics as greater flexibility and accuracy can be achieved using these elements. The scaled boundary polygonal finite element will enable new developments in mesh generation, better accuracy from a higher order approximation and better transition elements in finite element meshes. Polygon elements of arbitrary number of edges and order have been developed successfully. The edges of an element are discretised with line elements. The displacement solution of the scaled boundary finite element method is used in the development of shape functions. They are shown to be smooth and continuous within the element, and satisfy compatibility and completeness requirements. Furthermore, eigenvalue decomposition has been used to depict element modes and outcomes indicate the ability of the scaled boundary polygonal element to express rigid body and constant strain modes. Numerical tests are presented; the patch test is passed and constant strain modes verified. Accuracy and convergence of the method are also presented and the performance of the scaled boundary polygonal finite element is verified on Cook's swept panel problem. Results show that the scaled boundary polygonal finite element method outperforms a traditional mesh and accuracy and convergence are achieved from fewer nodes. The proposed method is also shown to be truly flexible, and applies to arbitrary n-gons formed of irregular and non-convex polygons.

  14. An applet for the Gabor similarity scaling of the differences between complex stimuli.

    Science.gov (United States)

    Margalit, Eshed; Biederman, Irving; Herald, Sarah B; Yue, Xiaomin; von der Malsburg, Christoph

    2016-11-01

    It is widely accepted that after the first cortical visual area, V1, a series of stages achieves a representation of complex shapes, such as faces and objects, so that they can be understood and recognized. A major challenge for the study of complex shape perception has been the lack of a principled basis for scaling the physical differences between stimuli so that their similarity can be specified, unconfounded by early-stage differences. Without the specification of such similarities, it is difficult to make sound inferences about the contributions of later stages to neural activity or psychophysical performance. A Web-based app is described that is based on the Malsburg Gabor-jet model (Lades et al., 1993), which allows easy specification of the V1 similarity of pairs of stimuli, no matter how intricate. The model predicts the psychophysical discriminability of metrically varying faces and complex blobs almost perfectly (Yue, Biederman, Mangini, von der Malsburg, & Amir, 2012), and serves as the input stage of a large family of contemporary neurocomputational models of vision.
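
    A rough sketch of a Gabor-jet-style V1 similarity measure; the filter bank sizes and grid spacing below are assumptions for illustration, not the model's exact settings:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(lam, theta, sigma, size=31):
        """One complex Gabor kernel: Gaussian envelope times a plane wave."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(2j * np.pi * xr / lam)

    def jet_similarity(img1, img2, grid_step=16):
        """Correlate 'jets': grids of multi-scale, multi-orientation Gabor
        response magnitudes sampled from two equal-sized images."""
        jets = []
        for img in (img1, img2):
            mags = []
            for lam in (4, 8, 16):                      # 3 scales (illustrative)
                for theta in np.arange(4) * np.pi / 4:  # 4 orientations
                    resp = fftconvolve(img, gabor_kernel(lam, theta, 0.5 * lam),
                                       mode="same")
                    mags.append(np.abs(resp)[::grid_step, ::grid_step].ravel())
            jets.append(np.concatenate(mags))
        return np.corrcoef(jets[0], jets[1])[0, 1]

    rng = np.random.default_rng(3)
    a = rng.normal(size=(128, 128))
    b = a + 0.3 * rng.normal(size=(128, 128))           # slightly perturbed copy
    print(f"jet similarity: {jet_similarity(a, b):.3f}")
    ```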

  15. Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder

    Directory of Open Access Journals (Sweden)

    He Yan

    2017-01-01

    In engineering design, the basic complex method does not have enough global search ability for nonlinear optimization problems, so a complex method mixed with particle swarm optimization (PSO) is presented in this paper: the optimal particle, evaluated by the fitness function of the particle swarm, displaces a complex vertex in order to realize the optimality principle of the largest distance from the complex centre. This method is applied to the optimization design of the box girder of a bridge crane under constraint conditions. First, a mathematical model of the girder optimization is set up, in which the cross-sectional area of the box girder is taken as the objective function, its four size parameters as design variables, and requirements on girder mechanical performance, manufacturing process, boundary sizes and so on as constraint conditions. Then the complex method mixed with PSO is used to solve the constrained optimization design problem of the crane box girder, and its optimal results achieve the goals of lightweight design and reduced crane manufacturing cost. Practical engineering calculation and comparative analysis with the basic complex method show that the method is reliable, practical and efficient.

  16. Research on performance evaluation and anti-scaling mechanism of green scale inhibitors by static and dynamic methods

    International Nuclear Information System (INIS)

    Liu, D.

    2011-01-01

    Increasing environmental concerns and discharge limitations have imposed additional challenges in treating process waters. Thus, the concept of 'Green Chemistry' was proposed and green scale inhibitors became a focus of water treatment technology. Finding economical and environmentally friendly inhibitors is one of the major research focuses nowadays. In this dissertation, the inhibition performance of different phosphonates as CaCO3 scale inhibitors in simulated cooling water was evaluated. Homo-, co-, and ter-polymers were also investigated for their performance as Ca-phosphonate inhibitors. Adding polymers as inhibitors together with phosphonates could reduce Ca-phosphonate precipitation and enhance the inhibition efficiency for CaCO3 scale. The synergistic effect of poly-aspartic acid (PASP) and poly-epoxy-succinic acid (PESA) on the inhibition of scaling has been studied using both static and dynamic methods. Results showed that the anti-scaling performance of PASP combined with PESA was superior to that of PASP or PESA alone for CaCO3, CaSO4 and BaSO4 scale. The influence of dosage, temperature and Ca2+ concentration was also investigated in a simulated cooling water circuit. Moreover, SEM analysis demonstrated the modification of crystal morphology in the presence of PASP and PESA. In this work, we also investigated the respective inhibition effectiveness of copper and zinc ions for scaling in drinking water by the method of Rapid Controlled Precipitation (RCP). The results indicated that zinc and copper ions are highly efficient inhibitors at low concentration, and SEM and IR analyses showed that copper and zinc ions can affect calcium carbonate germination and change the crystal morphology. Moreover, the influence of temperature and dissolved CO2 on the scaling potential of a mineral water (Salvetat) in the presence of copper and zinc ions was studied in laboratory experiments. An ideal scale inhibitor should be a solid form

  17. Generalized Combination Complex Synchronization for Fractional-Order Chaotic Complex Systems

    Directory of Open Access Journals (Sweden)

    Cuimei Jiang

    2015-07-01

    Based on two fractional-order chaotic complex drive systems and one fractional-order chaotic complex response system with different dimensions, we propose generalized combination complex synchronization. In this new synchronization scheme, there are two complex scaling matrices that are non-square matrices. On the basis of the stability theory of fractional-order linear systems, we design a general controller via active control. Additionally, by virtue of two complex scaling matrices, generalized combination complex synchronization between fractional-order chaotic complex systems and real systems is investigated. Finally, three typical examples are given to demonstrate the effectiveness and feasibility of the schemes.

  18. Hybrid RANS/LES method for wind flow over complex terrain

    DEFF Research Database (Denmark)

    Bechmann, Andreas; Sørensen, Niels N.

    2010-01-01

    for flows at high Reynolds numbers. To reduce the computational cost of traditional LES, a hybrid method is proposed in which the near-wall eddies are modelled in a Reynolds-averaged sense. Close to walls, the flow is treated with the Reynolds-averaged Navier-Stokes (RANS) equations (unsteady RANS...... rough walls. Previous attempts of combining RANS and LES has resulted in unphysical transition regions between the two layers, but the present work improves this region by using a stochastic backscatter model. To demonstrate the ability of the proposed hybrid method, simulations are presented for wind...... the turbulent kinetic energy, whereas the new method captures the high turbulence levels well but underestimates the mean velocity. The presented results are for a relative mild configuration of complex terrain, but the proposed method can also be used for highly complex terrain where the benefits of the new...

  19. A novel image fusion algorithm based on 2D scale-mixing complex wavelet transform and Bayesian MAP estimation for multimodal medical images

    Directory of Open Access Journals (Sweden)

    Abdallah Bengueddoudj

    2017-05-01

    In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach by considering a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. Robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics establish the accuracy of the proposed fusion method.
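
    As an illustration of a PCA-based fusion rule for approximation coefficients (a common formulation; the paper's exact rule may differ), a minimal sketch:

    ```python
    import numpy as np

    def pca_fusion_weights(a1, a2):
        """Weights are the components of the leading eigenvector of the 2x2
        covariance matrix of the two coefficient sets, normalised to sum to one."""
        c = np.cov(np.vstack([a1.ravel(), a2.ravel()]))
        eigvals, eigvecs = np.linalg.eigh(c)         # eigenvalues in ascending order
        v = np.abs(eigvecs[:, -1])                   # leading principal direction
        w = v / v.sum()
        return w[0], w[1]

    # Two hypothetical low-pass subbands of registered multimodal images
    rng = np.random.default_rng(2)
    base = rng.normal(size=(64, 64))
    a_mri = base + 0.2 * rng.normal(size=(64, 64))
    a_pet = 0.6 * base + 0.4 * rng.normal(size=(64, 64))

    w1, w2 = pca_fusion_weights(a_mri, a_pet)
    fused = w1 * a_mri + w2 * a_pet
    print(f"weights: {w1:.2f}, {w2:.2f}")
    ```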

  20. Ethnographic methods for process evaluations of complex health behaviour interventions.

    Science.gov (United States)

    Morgan-Trimmer, Sarah; Wood, Fiona

    2016-05-04

    This article outlines the contribution that ethnography could make to process evaluations for trials of complex health-behaviour interventions. Process evaluations are increasingly used to examine how health-behaviour interventions operate to produce outcomes and often employ qualitative methods to do this. Ethnography shares commonalities with the qualitative methods currently used in health-behaviour evaluations but has a distinctive approach over and above these methods. It is an overlooked methodology in trials of complex health-behaviour interventions that has much to contribute to the understanding of how interventions work. These benefits are discussed here with respect to three strengths of ethnographic methodology: (1) producing valid data, (2) understanding data within social contexts, and (3) building theory productively. The limitations of ethnography within the context of process evaluations are also discussed.

  1. Sub-Scale Orion Parachute Test Results from the National Full-Scale Aerodynamics Complex 80- By 120-ft Wind Tunnel

    Science.gov (United States)

    Anderson, Brian P.; Greathouse, James S.; Powell, Jessica M.; Ross, James C.; Schairer, Edward T.; Kushner, Laura; Porter, Barry J.; Goulding, Patrick W., II; Zwicker, Matthew L.; Mollmann, Catherine

    2017-01-01

    A two-week test campaign was conducted in the National Full-Scale Aerodynamics Complex 80 x 120-ft Wind Tunnel in support of Orion parachute pendulum mitigation activities. The test gathered static aerodynamic data using an instrumented, 3-tether system attached to the parachute vent in combination with an instrumented parachute riser. Dynamic data was also gathered by releasing the tether system and measuring canopy performance using photogrammetry. Several canopy configurations were tested and compared against the current Orion parachute design to understand changes in drag performance and aerodynamic stability. These configurations included canopies with varying levels and locations of geometric porosity as well as sails with increased levels of fullness. In total, 37 runs were completed for a total of 392 data points. Immediately after the end of the testing campaign a down-select decision was made based on preliminary data to support follow-on sub-scale air drop testing. A summary of a more rigorous analysis of the test data is also presented.

  2. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de [Max Planck Institut für Chemische Energiekonversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F. [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24014 (United States)

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed

  3. The architecture of ArgR-DNA complexes at the genome-scale in Escherichia coli

    DEFF Research Database (Denmark)

    Cho, Suhyung; Cho, Yoo-Bok; Kang, Taek Jin

    2015-01-01

    DNA-binding motifs that are recognized by transcription factors (TFs) have been well studied; however, challenges remain in determining the in vivo architecture of TF-DNA complexes on a genome-scale. Here, we determined the in vivo architecture of Escherichia coli arginine repressor (ArgR)-DNA co...

  4. IMPACT OF MATRIX INVERSION ON THE COMPLEXITY OF THE FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    M. Sybis

    2016-04-01

    Purpose. The development of a wide construction market and the desire to design innovative architectural building constructions have resulted in the need to create complex numerical models of objects with ever higher computational complexity. The purpose of this work is to show that choosing a proper method for solving the set of equations can improve the calculation time (reduce the complexity) by a few orders of magnitude. Methodology. The article presents an analysis of the impact of the matrix inversion algorithm on the deflection calculation of a beam, using the finite element method (FEM). Based on a literature analysis, common methods for solving sets of equations were determined. From the found solutions, Gaussian elimination and the LU and Cholesky decomposition methods were implemented to determine the effect of the algorithm used for solving the set of equations on the number of computational operations performed. In addition, each of the implemented methods was further optimized, thereby reducing the number of necessary arithmetic operations. Findings. These optimizations exploit certain properties of the matrix, such as symmetry or a significant number of zero elements. The results of the analysis are presented for divisions of the beam into 5, 50, 100 and 200 nodes, for which the deflection was calculated. Originality. The main achievement of this work is to show the impact of the chosen methodology on the complexity of solving the problem (or, equivalently, the time needed to obtain results). Practical value. The difference between the best (least complex) and the worst (most complex) methods is of a few orders of magnitude. This result shows that choosing the wrong methodology may significantly increase the time needed to perform the calculation.
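
    A minimal sketch of the article's central point in modern library form: a solver that exploits symmetry and positive definiteness (Cholesky, roughly n^3/3 flops) does about half the work of a generic LU factorization (roughly 2n^3/3). The SPD test matrix below is illustrative, not the article's beam system:

    ```python
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve, lu_factor, lu_solve

    n = 2000
    rng = np.random.default_rng(0)
    B = rng.standard_normal((n, n))
    K = B @ B.T + n * np.eye(n)          # symmetric positive definite "stiffness"
    f = rng.standard_normal(n)

    u_lu = lu_solve(lu_factor(K), f)     # generic path: ignores symmetry
    u_ch = cho_solve(cho_factor(K), f)   # symmetry-aware path: half the flops
    print(np.allclose(u_lu, u_ch))       # same solution, fewer operations
    ```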

  5. A multiple-scaling method of the computation of threaded structures

    International Nuclear Information System (INIS)

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite element problems. It was therefore very difficult to carry out parametric studies, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, these parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, of the number of active threads, of the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part and a local part. The local problem is solved by the F.E.M. on the precise geometry of the thread for some elementary loadings. The global one is formulated at the gudgeon scale and is reduced to a one-dimensional problem. The resolution of this global problem leads to an insignificant computational cost. Then, a post-processing gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. The validation by comparison with a direct F.E. computation and some further applications are presented.

  6. Method of producing carbon coated nano- and micron-scale particles

    Science.gov (United States)

    Perry, W. Lee; Weigle, John C; Phillips, Jonathan

    2013-12-17

    A method of making carbon-coated nano- or micron-scale particles comprising entraining particles in an aerosol gas, providing a carbon-containing gas, providing a plasma gas, mixing the aerosol gas, the carbon-containing gas, and the plasma gas proximate a torch, bombarding the mixed gases with microwaves, and collecting resulting carbon-coated nano- or micron-scale particles.

  7. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the

  8. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  9. Analytical Method to Estimate the Complex Permittivity of Oil Samples

    Directory of Open Access Journals (Sweden)

    Lijuan Su

    2018-03-01

    In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by such LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated by measuring the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.

  10. A spatial method to calculate small-scale fisheries effort in data poor scenarios.

    Science.gov (United States)

    Johnson, Andrew Frederick; Moreno-Báez, Marcia; Giron-Nava, Alfredo; Corominas, Julia; Erisman, Brad; Ezcurra, Exequiel; Aburto-Oropeza, Octavio

    2017-01-01

    To gauge the collateral impacts of fishing we must know where fishing boats operate and how much they fish. Although small-scale fisheries land approximately the same amount of fish for human consumption as industrial fleets globally, methods of estimating their fishing effort are comparatively poor. We present an accessible, spatial method of calculating the effort of small-scale fisheries based on two simple measures that are available, or at least easily estimated, in even the most data-poor fisheries: the number of boats and the local coastal human population. We illustrate the method using a small-scale fisheries case study from the Gulf of California, Mexico, and show that our measure of Predicted Fishing Effort (PFE), measured as the number of boats operating in a given area per day adjusted by the number of people in local coastal populations, can accurately predict fisheries landings in the Gulf. Comparing our values of PFE to commercial fishery landings throughout the Gulf also indicates that the current number of small-scale fishing boats in the Gulf is approximately double what is required to land theoretical maximum fish biomass. Our method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This new method provides an important first step towards estimating the fishing effort of small-scale fleets globally.
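
    A sketch of the idea in spirit: predicted fishing effort (PFE) per zone scales with the number of boats, weighted by the local coastal population. The linear weighting and all numbers below are assumptions for illustration, not the paper's exact adjustment:

    ```python
    import pandas as pd

    zones = pd.DataFrame({
        "zone":        ["A", "B", "C"],
        "boats":       [120, 45, 300],          # boats counted or registered
        "population":  [8_000, 2_500, 40_000],  # nearby coastal population
        "days_active": [180, 210, 150],         # fishing days per year
    })

    # Hypothetical adjustment: weight boat-days by relative coastal population
    w = zones["population"] / zones["population"].sum()
    zones["pfe_boat_days"] = zones["boats"] * zones["days_active"] * w
    print(zones[["zone", "pfe_boat_days"]])
    ```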

  11. Ultrafast laser spectroscopy in complex solid state materials

    Energy Technology Data Exchange (ETDEWEB)

    Li, Tianqi [Iowa State Univ., Ames, IA (United States)

    2014-12-01

    This thesis summarizes my work on applying ultrafast laser spectroscopy to complex solid state materials. It shows that ultrafast laser pulses can coherently control material properties on the femtosecond time scale, and that ultrafast laser spectroscopy can be employed as a dynamical method for revealing fundamental physical problems in complex material systems.

  12. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit-moment-equation method, the direct implicit method, orbit averaging, and subcycling.

  13. Complex modular structure of large-scale brain networks

    Science.gov (United States)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.

  14. Quantifying Multiscale Habitat Structural Complexity: A Cost-Effective Framework for Underwater 3D Modelling

    Directory of Open Access Journals (Sweden)

    Renata Ferrari

    2016-02-01

    Full Text Available Coral reef habitat structural complexity influences key ecological processes, ecosystem biodiversity, and resilience. Measuring structural complexity underwater is not trivial, and researchers have been searching for accurate and cost-effective methods that can be applied across spatial extents for over 50 years. This study integrated a set of existing multi-view image-processing algorithms to accurately compute metrics of structural complexity (e.g., ratio of surface to planar area) underwater solely from images. This framework resulted in accurate, high-speed 3D habitat reconstructions at scales ranging from small corals to reef-scapes (10s km2). Structural complexity was accurately quantified from both contemporary and historical image datasets across three spatial scales: (i) branching coral colony (Acropora spp.); (ii) reef area (400 m2); and (iii) reef transect (2 km). At small scales, our method delivered models with <1 mm error over 90% of the surface area, while the accuracy at transect scale was 85.3% ± 6% (CI). Advantages are: no a priori requirement for image size or resolution, no invasive techniques, cost-effectiveness, and utilization of existing imagery taken from off-the-shelf cameras (both monocular and stereo). This remote sensing method can be integrated into reef monitoring and improve our knowledge of key aspects of coral reef dynamics, from reef accretion to habitat provisioning and productivity, by measuring and up-scaling estimates of structural complexity.
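
    The headline metric, the ratio of surface to planar area, is straightforward to compute once a triangulated 3D reconstruction exists. A minimal sketch, assuming a mesh already exported as vertex and face arrays (not the authors' pipeline):

```python
import numpy as np

def surface_to_planar_ratio(vertices, faces):
    """Rugosity of a triangulated reconstruction.

    vertices: (N, 3) float array; faces: (M, 3) integer vertex indices.
    Returns total 3D surface area divided by its planar (XY) projection.
    """
    a, b, c = (vertices[faces[:, k]] for k in range(3))
    cross = np.cross(b - a, c - a)                # per-triangle normal, length 2A
    area_3d = 0.5 * np.linalg.norm(cross, axis=1).sum()
    area_2d = 0.5 * np.abs(cross[:, 2]).sum()     # |z| component = projected area
    return area_3d / area_2d
```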

  15. Linking Supply Chain Network Complexity to Interdependence and Risk-Assessment: Scale Development and Empirical Investigation

    Directory of Open Access Journals (Sweden)

    Samyadip Chakraborty

    2015-12-01

    Full Text Available Concepts like supply chain network complexity, interdependence, and risk assessment have been discussed directly and indirectly in the management literature over past decades, and plenty of frameworks and conceptual prescriptive research works have been published contributing to this body of knowledge. However, previous studies often lacked quantification of their findings. Consequently, suitable scales are needed to measure these constructs and empirically support the conceptualized relationships. This paper expands the understanding of supply chain network complexity (SCNC) and also highlights its implications for interdependence (ID) between the actors and risk assessment (RAS) in transaction relationships. In doing so, SCNC and RAS are operationalized to understand how SCNC affects interdependence and risk assessment between the actors in the supply chain network. The contribution of this study lies in developing and validating multi-item scales for these constructs and empirically establishing the hypothesized relationships in the Indian context, based on firm data collected using a survey-based questionnaire. The methodology included structural equation modeling. The study findings indicate that SCNC had a significant relationship with interdependence, which in turn significantly affected risk assessment. The study carries both academic and managerial implications and provides an empirically supported framework linking network complexity with two key variables (ID and RAS) that play crucial roles in managerial decision making, aiming to guide managers in better understanding transaction relationships.

  16. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant

    OpenAIRE

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-01-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for the full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and the fractions of carbonaceous substrate were performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that u...

  17. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  18. Experimental methods for laboratory-scale ensilage of lignocellulosic biomass

    International Nuclear Information System (INIS)

    Tanjore, Deepti; Richard, Tom L.; Marshall, Megan N.

    2012-01-01

    Anaerobic fermentation is a potential storage method for lignocellulosic biomass in biofuel production processes. Since biomass is seasonally harvested, stocks are often dried or frozen at laboratory scale prior to fermentation experiments. Such treatments cause irreversible changes in the plant cells, influencing the initial state of the biomass and thereby the progression of the fermentation process itself. This study investigated the effects of drying, refrigeration, and freezing relative to freshly harvested corn stover in lab-scale ensilage studies. Particle sizes, as well as post-ensilage drying temperatures for compositional analysis, were tested to identify appropriate sample processing methods. After 21 days of ensilage, the lowest pH value (3.73 ± 0.03), lowest dry matter loss (4.28 ± 0.26 g·100 g⁻¹ DM), and highest water-soluble carbohydrate (WSC) concentration (7.73 ± 0.26 g·100 g⁻¹ DM) were observed in the control biomass (stover ensiled within 12 h of harvest without any treatment). WSC concentration was significantly reduced in samples refrigerated for 7 days prior to ensilage (3.86 ± 0.49 g·100 g⁻¹ DM). However, biomass frozen prior to ensilage produced results statistically similar to the fresh biomass control, especially in treatments with cell-wall-degrading enzymes. Grinding to decrease particle size slightly reduced the variance in pH values among replicate reactors. Drying biomass prior to extraction of WSCs resulted in degradation of the carbohydrates and a reduced estimate of their concentrations. The methods developed in this study can be used to improve ensilage experiments and thereby help develop ensilage as a storage method for biofuel production. -- Highlights: ► Laboratory-scale methods to assess the influence of ensilage on biofuel production. ► Drying, freezing, and refrigeration of biomass influenced microbial fermentation. ► Freshly ensiled stover exhibited

  19. Measuring acute rehabilitation needs in trauma: preliminary evaluation of the Rehabilitation Complexity Scale.

    Science.gov (United States)

    Hoffman, Karen; West, Anita; Nott, Philippa; Cole, Elaine; Playford, Diane; Liu, Clarence; Brohi, Karim

    2013-01-01

    Injury severity, disability, and care dependency are frequently used as surrogate measures for rehabilitation requirements following trauma. The true rehabilitation needs of patients may be different, but there are no validated tools for measuring rehabilitation complexity in acute trauma care. The aim of this study was to evaluate the potential utility of the Rehabilitation Complexity Scale (RCS) version 2 in measuring acute rehabilitation needs in trauma patients. A prospective observational study of 103 patients with traumatic injuries was conducted in a Major Trauma Centre. Rehabilitation complexity was measured using the RCS and disability was measured using the Barthel Index. Demographic information and injury characteristics were obtained from the trauma database. The RCS was closely correlated with injury severity (r=0.69, p<0.001) and the Barthel Index (r=0.91, p<0.001). However, the Barthel Index was poor at discriminating between patients' rehabilitation needs, especially for patients with higher injury severities. Of 58 patients classified as 'very dependent' by the Barthel Index, 21 (36%) had low or moderate rehabilitation complexity. The RCS correlated with acute hospital length of stay (r=0.64, p<0.001), and patients with a low RCS were more likely to be discharged home. The Barthel Index showed a flooring effect (56% of patients classified as very dependent were discharged home) and lacked discrimination despite close statistical correlation. The RCS outperformed the ISS and the Barthel Index in its ability to identify rehabilitation requirements in relation to injury severity, rehabilitation complexity, length of stay, and discharge destination. The RCS is potentially a feasible and useful tool for assessing rehabilitation complexity in acute trauma care by providing a specific measurement of patients' rehabilitation requirements. A larger longitudinal study is needed to evaluate the RCS in the assessment of patient need, service provision, and trauma system performance.

  20. Method of producing exfoliated graphite, flexible graphite, and nano-scaled graphene platelets

    Science.gov (United States)

    Zhamu, Aruna; Shi, Jinjun; Guo, Jiusheng; Jang, Bor Z.

    2010-11-02

    The present invention provides a method of exfoliating a layered material (e.g., graphite and graphite oxide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of graphite, graphite oxide, or a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites. Nano-scaled graphene platelets are much lower-cost alternatives to carbon nano-tubes or carbon nano-fibers.

  1. Task-Management Method Using R-Tree Spatial Cloaking for Large-Scale Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Yan Li

    2017-12-01

    Full Text Available With the development of sensor technology and the popularization of the data-driven service paradigm, spatial crowdsourcing systems have become an important way of collecting map-based location data. However, large-scale task management and location privacy are important concerns for participants in spatial crowdsourcing. In this paper, we propose an R-tree spatial cloaking-based task-assignment method for large-scale spatial crowdsourcing. We use an estimated R-tree built from the requested crowdsourcing tasks to reduce the server-side insertion cost and enable scalability. By using Minimum Bounding Rectangle (MBR)-based spatially anonymous data without exact position data, the method preserves the location privacy of participants in a simple way. Our experiments show that the proposed method is faster than the current method and is very efficient as scale increases.
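
    The privacy mechanism itself can be illustrated compactly: the participant's exact position is replaced by a Minimum Bounding Rectangle that also covers nearby points, so the server only ever sees the rectangle. The sketch below is a generic k-anonymous MBR construction, not the paper's algorithm; the neighbor source and k are assumptions.

```python
def mbr_cloak(lat, lon, candidates, k=5):
    """Replace an exact location with a k-anonymous bounding rectangle.

    candidates: other (lat, lon) points (e.g., nearby task locations).
    The true point plus its k-1 nearest candidates are merged into one MBR,
    returned as (min_lat, min_lon, max_lat, max_lon).
    """
    nearest = sorted(candidates,
                     key=lambda p: (p[0] - lat) ** 2 + (p[1] - lon) ** 2)
    cloud = [(lat, lon)] + nearest[:k - 1]
    lats, lons = zip(*cloud)
    return min(lats), min(lons), max(lats), max(lons)
```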

  2. The subcatchment- and catchment-scale hydrology of a boreal headwater peatland complex with sporadic permafrost.

    Science.gov (United States)

    Sonnentag, O.; Helbig, M.; Connon, R.; Hould Gosselin, G.; Ryu, Y.; Karoline, W.; Hanisch, J.; Moore, T. R.; Quinton, W. L.

    2017-12-01

    The permafrost region of the Northern Hemisphere has been experiencing twice the rate of climate warming compared to the rest of the Earth, resulting in degradation of the cryosphere. A large portion of the high-latitude boreal forests of northwestern Canada grows on low-lying organic-rich lands with relatively warm and thin isolated, sporadic, and discontinuous permafrost. Along this southern limit of permafrost, increasingly warmer temperatures have caused widespread permafrost thaw, leading to land cover changes at unprecedented rates. A prominent change is wetland expansion at the expense of Picea mariana (black spruce)-dominated forest due to ground surface subsidence caused by the thawing of ice-rich permafrost and the resulting collapse of peat plateaus. Recent conceptual advances have provided important new insights into high-latitude boreal forest hydrology. However, refined quantitative understanding of the mechanisms behind water storage and movement at subcatchment and catchment scales is needed from a water resources management perspective. Here we combine multi-year daily runoff measurements with spatially explicit estimates of evapotranspiration, modelled with the Breathing Earth System Simulator, to characterize the monthly growing season catchment-scale (~150 km2) hydrological response of a boreal headwater peatland complex with sporadic permafrost in the southern Northwest Territories. The corresponding water budget components at subcatchment scale (~0.1 km2) were obtained from concurrent cutthroat flume runoff and eddy covariance evapotranspiration measurements. The highly significant linear relationships for runoff (r2=0.64) and evapotranspiration (r2=0.75) between subcatchment and catchment scales suggest that the mineral upland-dominated downstream portion of the catchment behaves hydrologically like the headwater portion dominated by boreal peatland complexes. Breakpoint analysis in combination with moving window statistics on multi

  3. A Normalization-Free and Nonparametric Method Sharpens Large-Scale Transcriptome Analysis and Reveals Common Gene Alteration Patterns in Cancers.

    Science.gov (United States)

    Li, Qi-Gang; He, Yong-Han; Wu, Huan; Yang, Cui-Ping; Pu, Shao-Yan; Fan, Song-Qing; Jiang, Li-Ping; Shen, Qiu-Shuo; Wang, Xiao-Xiong; Chen, Xiao-Qiong; Yu, Qin; Li, Ying; Sun, Chang; Wang, Xiangting; Zhou, Jumin; Li, Hai-Peng; Chen, Yong-Bin; Kong, Qing-Peng

    2017-01-01

    Heterogeneity in transcriptional data hampers the identification of differentially expressed genes (DEGs) and the understanding of cancer, essentially because current methods rely on cross-sample normalization and/or distribution assumptions, both sensitive to heterogeneous values. Here, we developed a new method, Cross-Value Association Analysis (CVAA), which overcomes this limitation and is more robust to heterogeneous data than the other methods. Applying CVAA to a more complex pan-cancer dataset containing 5,540 transcriptomes discovered numerous new DEGs and many previously rarely explored pathways/processes; some of them were validated, both in vitro and in vivo, to be crucial in tumorigenesis, e.g., alcohol metabolism (ADH1B), chromosome remodeling (NCAPH) and the complement system (Adipsin). Together, we present a sharper tool to navigate large-scale expression data and gain new mechanistic insights into tumorigenesis.

  4. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The method is based on Wyner-Ziv theory, in which decoding exploits side information available at the receiver rather than at the transmitter. Complex processes in video encoding, such as motion-vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the coded original data through channel coding. We provide a performance evaluation of a low-density parity check (LDPC) coding method in the AWGN channel.

  5. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref

  6. Cost and time-effective method for multi-scale measures of rugosity, fractal dimension, and vector dispersion from coral reef 3D models.

    Directory of Open Access Journals (Sweden)

    G C Young

    Full Text Available We present a method to construct and analyse 3D models of underwater scenes using a single cost-effective camera on a standard laptop with (a) free or low-cost software, (b) no computer programming ability, and (c) minimal man-hours for both filming and analysis. This study focuses on four key structural complexity metrics: point-to-point distances, linear rugosity (R), fractal dimension (D), and vector dispersion (1/k). We present the first assessment of the accuracy and precision of structure-from-motion (SfM) 3D models from an uncalibrated GoPro™ camera at a small scale (4 m2) and show that they can provide meaningful, ecologically relevant results. Models had root mean square errors of 1.48 cm in X-Y and 1.35 cm in Z, and accuracies of 86.8% (R), 99.6% (D at scales 30-60 cm), 93.6% (D at scales 1-5 cm), and 86.9% (1/k). Values of R were compared to in-situ chain-and-tape measurements, while values of D and 1/k were compared with ground truths from 3D printed objects modelled underwater. All metrics varied less than 3% between independently rendered models. We thereby improve and rigorously validate a tool for ecologists to non-invasively quantify coral reef structural complexity with a variety of multi-scale metrics.
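
    Of the four metrics, fractal dimension D is the least obvious to compute. A generic box-counting estimator over a 3D point cloud sampled from the model surface is sketched below; the paper's own multi-scale D computation may differ in detail.

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate fractal dimension D of a 3D point cloud by box counting.

    points: (N, 3) array sampled from the reconstructed surface;
    scales: box edge lengths in the same units. D is the slope of
    log N(s) against log(1/s).
    """
    origin = points.min(axis=0)
    counts = []
    for s in scales:
        cells = np.floor((points - origin) / s).astype(int)
        counts.append(len({tuple(c) for c in cells}))   # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope
```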

  7. Epidemic dynamics and endemic states in complex networks

    OpenAIRE

    Pastor-Satorras, Romualdo; Vespignani, Alessandro

    2001-01-01

    We study by analytical methods and large scale simulations a dynamical model for the spreading of epidemics in complex networks. In networks with exponentially bounded connectivity we recover the usual epidemic behavior with a threshold defining a critical point below which the infection prevalence is null. On the contrary, on a wide range of scale-free networks we observe the absence of an epidemic threshold and its associated critical behavior. This implies that scale-free networks are pron...
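
    The headline result can be stated compactly. In the degree-based mean-field theory used in this line of work, the SIS epidemic threshold is

```latex
\lambda_c = \frac{\langle k \rangle}{\langle k^2 \rangle},
```

    so for scale-free networks with degree distribution P(k) ~ k^(-γ) and 2 < γ ≤ 3, the second moment ⟨k²⟩ diverges with network size and the threshold vanishes, which is the absence of a critical point reported above.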

  8. Multi-scale approximation of Vlasov equation

    International Nuclear Information System (INIS)

    Mouton, A.

    2009-09-01

    One of the most important difficulties in the numerical simulation of magnetized plasmas is the existence of multiple time and space scales, which can be very different. In order to produce good simulations of these multi-scale phenomena, it is advisable to develop models and numerical methods adapted to these problems. Nowadays, the two-scale convergence theory introduced by G. Nguetseng and G. Allaire is one of the tools which can be used to rigorously derive multi-scale limits and to obtain new limit models which can be discretized with a usual numerical method: this procedure is called a two-scale numerical method. The purpose of this thesis is to develop a two-scale semi-Lagrangian method and to apply it to a gyrokinetic Vlasov-like model in order to simulate a plasma subjected to a large external magnetic field. However, the physical phenomena to simulate are quite complex, and there are many open questions about the behaviour of a two-scale numerical method, especially when such a method is applied to a nonlinear model. In a first part, we develop a two-scale finite volume method and apply it to the weakly compressible 1D isentropic Euler equations. Even if this mathematical context is far from a Vlasov-like model, it is a relatively simple framework in which to study the behaviour of a two-scale numerical method on a nonlinear model. In a second part, we develop a two-scale semi-Lagrangian method for the two-scale model developed by E. Frenod, F. Salvarani and E. Sonnendrucker in order to simulate axisymmetric charged particle beams. Even if the studied physical phenomena are quite different from magnetic fusion experiments, the mathematical context of the one-dimensional paraxial Vlasov-Poisson model is very simple for establishing the basis of a two-scale semi-Lagrangian method. In a third part, we use the two-scale convergence theory in order to improve M. Bostan's weak-* convergence results about the finite
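
    For reference, the central definition being used is easy to state. A bounded sequence u_ε in L²(Ω) two-scale converges to u_0 ∈ L²(Ω × Y), with Y the periodicity cell, if for every smooth test function ψ(x, y) that is Y-periodic in y

```latex
\lim_{\varepsilon \to 0} \int_\Omega u_\varepsilon(x)\,
\psi\!\left(x, \tfrac{x}{\varepsilon}\right) dx
= \frac{1}{|Y|} \int_\Omega \int_Y u_0(x, y)\, \psi(x, y)\, dy\, dx .
```

    This is the Nguetseng-Allaire notion referred to above; the limit u_0 retains the fast variable y, which is what the two-scale numerical methods discretize.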

  9. Bioclim Deliverable D6b: application of statistical down-scaling within the BIOCLIM hierarchical strategy: methods, data requirements and underlying assumptions

    International Nuclear Information System (INIS)

    2004-01-01

    The overall aim of BIOCLIM is to assess the possible long-term impacts due to climate change on the safety of radioactive waste repositories in deep formations. The coarse spatial scale of the Earth-system Models of Intermediate Complexity (EMICs) used in BIOCLIM compared with the BIOCLIM study regions and the needs of performance assessment creates a need for down-scaling. Most of the developmental work on down-scaling methodologies undertaken by the international research community has focused on down-scaling from the general circulation model (GCM) scale (with a typical spatial resolution of 400 km by 400 km over Europe in the current generation of models) using dynamical down-scaling (i.e., regional climate models (RCMs), which typically have a spatial resolution of 50 km by 50 km for models whose domain covers the European region) or statistical methods (which can provide information at the point or station scale) in order to construct scenarios of anthropogenic climate change up to 2100. Dynamical down-scaling (with the MAR RCM) is used in BIOCLIM WP2 to down-scale from the GCM (i.e., IPSL_CM4_D) scale. In the original BIOCLIM description of work, it was proposed that UEA would apply statistical down-scaling to IPSL_CM4_D output in WP2 as part of the hierarchical strategy. Statistical down-scaling requires the identification of statistical relationships between the observed large-scale and regional/local climate, which are then applied to large-scale GCM output, on the assumption that these relationships remain valid in the future (the assumption of stationarity). Thus it was proposed that UEA would investigate the extent to which it is possible to apply relationships between the present-day large-scale and regional/local climate to the relatively extreme conditions of the BIOCLIM WP2 snapshot simulations. Potential statistical down-scaling methodologies were identified from previous work performed at UEA. Appropriate station data from the case

  10. Organizational Influences on Interdisciplinary Interactions during Research and Design of Large-Scale Complex Engineered Systems

    Science.gov (United States)

    McGowan, Anna-Maria R.; Seifert, Colleen M.; Papalambros, Panos Y.

    2012-01-01

    The design of large-scale complex engineered systems (LaCES) such as an aircraft is inherently interdisciplinary. Multiple engineering disciplines, drawing from a team of hundreds to thousands of engineers and scientists, are woven together throughout the research, development, and systems engineering processes to realize one system. Though research and development (R&D) is typically focused in single disciplines, the interdependencies involved in LaCES require interdisciplinary R&D efforts. This study investigates the interdisciplinary interactions that take place during the R&D and early conceptual design phases of LaCES. Our theoretical framework is informed by both engineering practices and social science research on complex organizations. This paper provides a preliminary perspective on some of the organizational influences on interdisciplinary interactions based on organization theory (specifically sensemaking), data from a survey of LaCES experts, and the authors' experience in research and design. The analysis reveals couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their significant benefit to the engineered system, such as innovation and problem mitigation. Substantial obstacles to interdisciplinarity beyond engineering are uncovered, including communication and organizational challenges. Addressing these challenges may ultimately foster greater efficiencies in the design and development of LaCES and improve system performance by assisting with the collective integration of interdependent knowledge bases early in the R&D effort. This research suggests that organizational and human dynamics heavily influence and even constrain the engineering effort for large-scale complex systems.

  11. Complexities and uncertainties in transitioning small-scale coral reef fisheries

    Directory of Open Access Journals (Sweden)

    Pierre eLeenhardt

    2016-05-01

    Full Text Available Coral reef fisheries support the development of local and national economies and are the basis of important cultural practices and worldviews. Transitioning economies, human development, and environmental stress can harm this livelihood. Here we focus on a transitioning social-ecological system as a case study (Moorea, French Polynesia). We review fishing practices and three decades of effort and landing estimates with the broader goal of informing management. Fishery activities in Moorea are quite challenging to quantify because of the diversity of gears used, the lack of centralized access points or markets, the high participation rates of the population in the fishery, and the overlapping cultural and economic motivations to catch fish. Compounding this challenging diversity, we lack a basic understanding of the complex interplay between the cultural, subsistence, and commercial use of Moorea's reefs. In Moorea, we found an order of magnitude gap between estimates of fishery yield produced by catch monitoring methods (~2 t km-2 year-1) and estimates produced using consumption or participatory socioeconomic consumer surveys (~24 t km-2 year-1). Several lines of evidence suggest reef resources may be overexploited, and stakeholders have a diversity of opinions as to whether trends in the stocks are a cause for concern. The reefs, however, remain ecologically resilient. The relative health of the reef is striking given the socio-economic context. Moorea has a relatively high population density, a modern economic system linked into global flows of trade and travel, and a fishery with little remaining traditional or customary management. Other islands in the Pacific that continue to develop economically may have small-scale fisheries that increasingly resemble Moorea's. Therefore, understanding Moorea's reef fisheries may provide insight into their future.

  12. Application of Lattice Boltzmann Methods in Complex Mass Transfer Systems

    Science.gov (United States)

    Sun, Ning

    Lattice Boltzmann Method (LBM) is a novel computational fluid dynamics method that can easily handle complex and dynamic boundaries, couple local or interfacial interactions/reactions, and be easily parallelized, allowing for simulation of large systems. While most current LBM studies focus on fluid dynamics, the inherent power of the method makes it an ideal candidate for studying mass transfer systems involving complex/dynamic microstructures and local reactions. In this thesis, LBM is introduced as an alternative computational method for the mesoscopic-scale study of electrochemical energy storage systems (Li-ion batteries (LIBs) and electric double layer capacitors (EDLCs)) and transdermal drug design. Based on traditional LBM, the following in-depth studies have been carried out: (1) For EDLCs, diffuse charge dynamics are simulated for both the charge and discharge processes on 2D systems of complex random electrode geometries (purely random, random spheres, and random fibers). The steric effect of concentrated solutions is considered by using modified Poisson-Nernst-Planck (MPNP) equations and compared with regular Poisson-Nernst-Planck (PNP) systems. The effects of electrode microstructure (electrode density, electrode filler morphology, filler size, etc.) on the net charge distribution and charge/discharge time are studied in detail. The influence of applied potential during the discharge process is also discussed. (2) For the study of dendrite formation on the anode of LIBs, it is shown that the lattice Boltzmann model can capture all the experimentally observed features of microstructure evolution at the anode, from smooth to mossy to dendritic. The mechanism of the dendrite formation process at the mesoscopic scale is discussed in detail and compared with traditional Sand's-time theories. It is shown that dendrite formation is closely related to inhomogeneous reactivity at the electrode-electrolyte interface
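
    To make the method concrete, here is a minimal lattice Boltzmann kernel: a D1Q3 BGK scheme for pure diffusion, in lattice units. It is a toy sketch of the collide-and-stream building blocks, far simpler than the coupled electrochemical models in the thesis.

```python
import numpy as np

nx, tau, steps = 200, 0.8, 2000
w = np.array([4/6, 1/6, 1/6])            # weights for velocities {0, +1, -1}
rho = np.zeros(nx)
rho[nx // 2] = 1.0                       # initial concentration spike
f = w[:, None] * rho[None, :]            # start at local equilibrium

for _ in range(steps):
    rho = f.sum(axis=0)                  # macroscopic density
    feq = w[:, None] * rho[None, :]      # zero-velocity equilibrium
    f += (feq - f) / tau                 # BGK collision
    f[1] = np.roll(f[1], 1)              # stream velocity +1 (periodic)
    f[2] = np.roll(f[2], -1)             # stream velocity -1 (periodic)

# Effective diffusivity in lattice units: D = c_s^2 (tau - 1/2), c_s^2 = 1/3.
```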

  13. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  14. Surface temperature and evapotranspiration: application of local scale methods to regional scales using satellite data

    International Nuclear Information System (INIS)

    Seguin, B.; Courault, D.; Guerif, M.

    1994-01-01

    Remotely sensed surface temperatures have proven useful for monitoring evapotranspiration (ET) rates and crop water use because of their direct relationship with sensible and latent energy exchange processes. Procedures for using the thermal infrared (IR) obtained with hand-held radiometers deployed at ground level are now well established and even routine for many agricultural research and management purposes. The availability of IR from meteorological satellites at scales from 1 km (NOAA-AVHRR) to 5 km (METEOSAT) permits extension of local, ground-based approaches to larger scale crop monitoring programs. Regional observations of surface minus air temperature (i.e., the stress degree day) and remote estimates of daily ET were derived from satellite data over sites in France, the Sahel, and North Africa and summarized here. Results confirm that similar approaches can be applied at local and regional scales despite differences in pixel size and heterogeneity. This article analyzes methods for obtaining these data and outlines the potential utility of satellite data for operational use at the regional scale. (author)

  15. Outlier-resilient complexity analysis of heartbeat dynamics

    Science.gov (United States)

    Lo, Men-Tzung; Chang, Yi-Chung; Lin, Chen; Young, Hsu-Wen Vincent; Lin, Yen-Hung; Ho, Yi-Lwun; Peng, Chung-Kang; Hu, Kun

    2015-03-01

    Complexity in physiological outputs is believed to be a hallmark of healthy physiological control. How to accurately quantify the degree of complexity in physiological signals with outliers remains a major barrier to translating this novel concept of nonlinear dynamic theory to clinical practice. Here we propose a new approach that estimates the complexity in a signal by analyzing the irregularity of the sign time series of its coarse-grained time series at different time scales. Using surrogate data, we show that the method can reliably assess the complexity in noisy data while being highly resilient to outliers. We further apply this method to the analysis of human heartbeat recordings. Without removing any outliers due to ectopic beats, the method is able to detect a degradation of cardiac control in patients with congestive heart failure and an even greater degradation in critically ill patients whose survival relies on an extracorporeal membrane oxygenator (ECMO). Moreover, the derived complexity measures can predict the mortality of ECMO patients. These results indicate that the proposed method may serve as a promising tool for monitoring the cardiac function of patients in clinical settings.
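
    A minimal sketch of the recipe as summarized above: coarse-grain the series at a given scale, keep only the signs of the increments (which is what buys the outlier resilience, since an outlier's magnitude is discarded), and score the irregularity of the resulting binary sequence. Shannon entropy of short binary words is used here as the irregularity measure; the authors' exact statistic may differ.

```python
import numpy as np

def sign_series_entropy(x, scale, word_len=4):
    """Complexity of signal x at one time scale.

    Coarse-grains x by non-overlapping means of length `scale`, maps the
    increments to a binary sign series, and returns the Shannon entropy
    (bits) of its length-`word_len` words.
    """
    n = len(x) // scale
    cg = np.asarray(x)[: n * scale].reshape(n, scale).mean(axis=1)
    signs = (np.diff(cg) > 0).astype(int)
    words = np.array([signs[i:i + word_len]
                      for i in range(len(signs) - word_len + 1)])
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```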

  16. Detailed Simulation of Complex Hydraulic Problems with Macroscopic and Mesoscopic Mathematical Methods

    Directory of Open Access Journals (Sweden)

    Chiara Biscarini

    2013-01-01

    Full Text Available The numerical simulation of fast-moving fronts originating from dam or levee breaches is a challenging task for small-scale engineering projects. In this work, the use of the fully three-dimensional Navier-Stokes (NS) equations and the lattice Boltzmann method (LBM) is proposed for testing the validity of, respectively, macroscopic and mesoscopic mathematical models. Macroscopic simulations are performed employing an open-source computational fluid dynamics (CFD) code that solves the NS equations combined with the volume of fluid (VOF) multiphase method to represent free-surface flows. The mesoscopic model is a front-tracking experimental variant of the LBM. In the proposed LBM, the liquid-gas interface is represented as a surface with zero thickness that handles the passage of the density field from the light to the dense phase and vice versa. A single set of LBM equations represents the liquid phase, while the free surface is characterized by an additional variable, the liquid volume fraction. Case studies show the advantages and disadvantages of the proposed LBM and NS approaches with specific regard to computational efficiency and accuracy in dealing with the simulation of flows through complex geometries. In particular, the model application is validated by simulating the flow propagating through a synthetic urban setting and comparing results with analytical solutions and experimental laboratory measurements.

  17. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
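
    The Volterra functional power series underlying the IO model, truncated at the third order used in the paper, expresses the output y(t) in terms of the input x(t) through kernels k_1, k_2, k_3:

```latex
y(t) = k_0
 + \int_0^\infty k_1(\tau)\, x(t-\tau)\, d\tau
 + \iint_0^\infty k_2(\tau_1,\tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2
 + \iiint_0^\infty k_3(\tau_1,\tau_2,\tau_3)\, x(t-\tau_1)\, x(t-\tau_2)\, x(t-\tau_3)\, d\tau_1\, d\tau_2\, d\tau_3 .
```

    Fitting the kernels to input-output data from the detailed mechanistic model is what lets the IO synapse reproduce its nonlinear dynamics at a fraction of the cost.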

  18. Mitigation of Power frequency Magnetic Fields. Using Scale Invariant and Shape Optimization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Salinas, Ener; Yueqiang Liu; Daalder, Jaap; Cruz, Pedro; Antunez de Souza, Paulo Roberto Jr; Atalaya, Juan Carlos; Paula Marciano, Fabianna de; Eskinasy, Alexandre

    2006-10-15

    The present report describes the development and application of two novel methods for implementing mitigation techniques for magnetic fields at power frequencies. The first method makes use of scaling rules for electromagnetic quantities, while the second applies a 2D shape optimization algorithm based on gradient methods. Before this project, the first method had already been successfully applied (by some of the authors of this report) to electromagnetic designs involving purely conductive materials (e.g. copper, aluminium), which implied a linear formulation. Here we went beyond this approach and developed a formulation involving ferromagnetic (i.e. non-linear) materials. Surprisingly, we obtained good equivalent replacements for test transformers by varying the input current. Although this equivalence is constrained to regions not too close to the source, the results can still be considered useful, as most field mitigation techniques are developed precisely to reduce the magnetic field in regions relatively far from the sources. The shape optimization method was applied in this project to calculate the optimal geometry of a purely conductive plate to mitigate the magnetic field originating from underground cables. The objective function was a weighted combination of the magnetic energy in the region of interest and the dissipated heat in the shielding material. To our surprise, shapes of complex structure, difficult to interpret (and probably even harder to anticipate), were the result of the applied process. However, the practical implementation (using approximations of these shapes) gave excellent experimental mitigation factors.

  19. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
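
    For flavor, a one-dimensional dam break solved with a Rusanov (local Lax-Friedrichs) flux, one of the simplest Godunov-type finite-volume schemes, is sketched below. The paper's 2D unstructured scheme with wet/dry front handling is considerably more elaborate, and all parameters here are illustrative.

```python
import numpy as np

g, nx, dx, dt, steps = 9.81, 400, 0.025, 0.001, 2000
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)   # dam-break initial depth
hu = np.zeros(nx)                                 # initial discharge

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

for _ in range(steps):
    U = np.array([h, hu])
    F = flux(h, hu)
    a = np.abs(hu / h) + np.sqrt(g * h)           # local wave-speed bound
    amax = np.maximum(a[:-1], a[1:])
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * amax * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
    h, hu = U[0], U[1]                            # end cells held fixed (crude boundary)
```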

  20. Multi-scale method for the resolution of the neutronic kinetics equations

    International Nuclear Information System (INIS)

    Chauvet, St.

    2008-10-01

    In this PhD thesis, in order to improve the time/precision ratio of numerical simulation calculations, we investigate multi-scale techniques for the resolution of the reactor kinetics equations. We choose to focus on the mixed dual diffusion approximation and the quasi-static methods. We introduce a space dependency for the amplitude function, which depends only on the time variable in the standard quasi-static context. With this new factorization, we develop two mixed dual problems which can be solved with CEA's solver MINOS. An algorithm is implemented, performing the resolution of these problems defined on different scales (for time and space). We name this approach the Local Quasi-Static method. We present this new multi-scale approach and its implementation. The inherent details of amplitude and shape treatments are discussed and justified. Results and performance, compared to MINOS, are studied. They illustrate the improvement in the time/precision ratio for kinetics calculations. Furthermore, we open new possibilities for parallelizing computations with MINOS. For the future, we also outline some avenues for improvement using adaptive scales. (author)
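
    In the standard quasi-static scheme the flux is factorized into a purely time-dependent amplitude and a shape function; the thesis, as described, promotes the amplitude to a coarse space-time dependence:

```latex
\phi(\vec r, t) = A(t)\,\psi(\vec r, t)
\quad\longrightarrow\quad
\phi(\vec r, t) = A(\vec r, t)\,\psi(\vec r, t),
```

    with A resolved on a coarse mesh and large time steps while the shape ψ carries the fine spatial detail, which is where the time/precision gain comes from.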

  1. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations

    International Nuclear Information System (INIS)

    Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George

    2016-01-01

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions; consequently, the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because, in stochastic chemical kinetics, the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
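
    For reference, the chemical Langevin equation on which the reduction is built has the standard (Gillespie) form: for state X(t), stoichiometric vectors ν_j and propensities a_j,

```latex
dX(t) = \sum_{j=1}^{M} \nu_j\, a_j\!\big(X(t)\big)\, dt
      + \sum_{j=1}^{M} \nu_j \sqrt{a_j\!\big(X(t)\big)}\; dW_j(t),
```

    and the averaging principle replaces the fast propensities by their averages against the invariant law of the fast process, leaving a lower-dimensional equation for the slow species.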

  2. A comparison of multidimensional scaling methods for perceptual mapping

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare

  3. A novel fruit shape classification method based on multi-scale analysis

    Science.gov (United States)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns and remains a difficult problem in the automated inspection and sorting of fruits. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and its boundary energy distribution at multiple scales is explored for shape extraction. MSED offers not only the main energy, which represents primary shape information at the lower scales, but also subordinate energy, which represents local shape information at the higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction; 2) shape resampling and shape feature normalization; 3) energy decomposition by wavelet and classification by a BP neural network. Here, shape resampling draws 256 boundary pixels from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function is defined and an effective method to select a start point is given through maximal expectation, which overcomes the inconvenience of traditional methods and yields rotation invariance. The method discriminates well between normal citrus and serious abnormality, with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.

  4. Framing scales and scaling frames

    NARCIS (Netherlands)

    van Lieshout, M.; Dewulf, A.; Aarts, N.; Termeer, K.

    2009-01-01

    Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment

  5. EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST

    Directory of Open Access Journals (Sweden)

    C. H. Tu

    2012-07-01

    Full Text Available The prediction of species distribution has become a focus in ecology. For predicting a result more effectively and accurately, some novel methods have been proposed recently, like the support vector machine (SVM) and maximum entropy (MAXENT). However, high complexity in forests, like those in Taiwan, makes modelling even harder. In this study, we aim to explore which method is more applicable to species distribution modelling in complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), growing widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLC. We evaluated these models using two sets of independent samples from different sites and examined the effect of forest complexity by changing the background sample size (BSZ). In forest of low complexity (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limited the predicted habitat to areas close to the samples. Therefore, the MAXENT model was more applicable for predicting a species' potential habitat in complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLC.

  6. Transition Manifolds of Complex Metastable Systems

    Science.gov (United States)

    Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof

    2018-04-01

    We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.

  7. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    International Nuclear Information System (INIS)

    Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David

    2006-01-01

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)

  8. Methods for forming complex oxidation reaction products including superconducting articles

    International Nuclear Information System (INIS)

    Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.

    1992-01-01

    This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product

  9. Correlates of the Rosenberg Self-Esteem Scale Method Effects

    Science.gov (United States)

    Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan

    2006-01-01

    Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…

  10. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have a negative impact on the accuracy … errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...

  11. Network reliability analysis of complex systems using a non-simulation-based method

    International Nuclear Information System (INIS)

    Kim, Youngsuk; Kang, Won-Hee

    2013-01-01

    Civil infrastructure systems such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often form highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to provide proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
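
    The decomposition idea can be illustrated with its classic relative, the edge-factoring recursion (condition on one component working or failing, then recurse on the simplified network). This is not the paper's RDA, which organizes the decomposition differently, but it shows the non-simulation-based flavor; it is exponential in the worst case, so suitable for small graphs only.

```python
def two_terminal_reliability(edges, s, t):
    """Exact probability that nodes s and t remain connected.

    edges: list of (u, v, p) with independent component failures.
    Recursion: R = p * R(contract edge) + (1 - p) * R(delete edge).
    """
    if s == t:
        return 1.0
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    r_del = two_terminal_reliability(rest, s, t)      # edge failed: delete it
    contracted = []                                   # edge works: merge v -> u
    for a, b, q in rest:
        a2, b2 = (u if a == v else a), (u if b == v else b)
        if a2 != b2:                                  # drop self-loops from merging
            contracted.append((a2, b2, q))
    r_con = two_terminal_reliability(
        contracted, u if s == v else s, u if t == v else t)
    return p * r_con + (1.0 - p) * r_del

# Bridge network, every link available with probability 0.9:
net = [("s", "a", .9), ("s", "b", .9), ("a", "t", .9),
       ("b", "t", .9), ("a", "b", .9)]
print(two_terminal_reliability(net, "s", "t"))        # about 0.978
```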

  12. Complex Hand Dexterity: A Review of Biomechanical Methods for Measuring Musical Performance

    Directory of Open Access Journals (Sweden)

    Cheryl Diane Metcalf

    2014-05-01

    Full Text Available Complex hand dexterity is fundamental to our interactions with the physical, social and cultural environment. Dexterity can be an expression of creativity and precision in a range of activities, including musical performance. Little is understood about complex hand dexterity or how virtuoso expertise is acquired, due to the versatility of movement combinations available to complete any given task. This has historically limited progress in the field because of difficulties in measuring movements of the hand. Recent developments in methods of motion capture and analysis mean it is now possible to explore the intricate movements of the hand and fingers. These methods allow us insights into the neurophysiological mechanisms underpinning complex hand dexterity and motor learning. They also allow investigation into the key factors that contribute to injury, recovery and functional compensation. The application of such analytical techniques within musical performance provides a multidisciplinary framework for purposeful investigation into the process of learning and skill acquisition in instrumental performance. These highly skilled manual and cognitive tasks present the ultimate achievement in complex hand dexterity. This paper will review methods of assessing instrumental performance in music, focusing specifically on biomechanical measurement and the associated technical challenges faced when measuring highly dexterous activities.

  13. Iterative methods for the solution of very large complex symmetric linear systems of equations in electrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Clemens, M.; Weiland, T. [Technische Hochschule Darmstadt (Germany)

    1996-12-31

    In the field of computational electrodynamics the discretization of Maxwell's equations using the Finite Integration Theory (FIT) yields very large, sparse, complex symmetric linear systems of equations. For this class of complex non-Hermitian systems a number of conjugate gradient-type algorithms are considered. The complex version of the biconjugate gradient (BiCG) method by Jacobs can be extended to a whole class SCBiCG(T, n) of methods for complex-symmetric systems, which only require one matrix-vector multiplication per iteration step. In this class the well-known conjugate orthogonal conjugate gradient (COCG) method for complex-symmetric systems corresponds to the case n = 0. The case n = 1 yields the BiCGCR method, which corresponds to the conjugate residual algorithm for the real-valued case. These methods, in combination with a minimal residual smoothing process, are applied separately to practical 3D electro-quasistatic and eddy-current problems in electrodynamics. The practical performance of the SCBiCG methods is compared with other methods such as QMR and TFQMR.
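    For readers unfamiliar with the COCG case n = 0: the algorithm is ordinary CG with the unconjugated bilinear form x^T y in place of the Hermitian inner product. A minimal numpy sketch (the test matrix is invented):

```python
# Minimal COCG (conjugate orthogonal conjugate gradient) sketch for a
# complex symmetric system A x = b, with A = A^T but not Hermitian.
import numpy as np

def cocg(A, b, tol=1e-10, maxiter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r                      # unconjugated bilinear form r^T r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# small complex symmetric (non-Hermitian) test system
A = np.array([[4 + 1j, 1 + 0.5j],
              [1 + 0.5j, 3 - 0.2j]])
b = np.array([1.0 + 0j, 2.0 + 0j])
x = cocg(A, b)
print(np.linalg.norm(A @ x - b))    # ~0 if converged
```

    Note that `r @ r` in numpy does not conjugate, which is exactly the bilinear form COCG requires; a Hermitian inner product would reduce the scheme to plain (and here inapplicable) CG.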

  14. Scattering methods in complex fluids

    CERN Document Server

    Chen, Sow-Hsin

    2015-01-01

    Summarising recent research on the physics of complex liquids, this in-depth analysis examines the topic of complex liquids from a modern perspective, addressing experimental, computational and theoretical aspects of the field. Selecting only the most interesting contemporary developments in this rich field of research, the authors present multiple examples including aggregation, gel formation and glass transition, in systems undergoing percolation, at criticality, or in supercooled states. Connecting experiments and simulation with key theoretical principles, and covering numerous systems including micelles, micro-emulsions, biological systems, and cement pastes, this unique text is an invaluable resource for graduate students and researchers looking to explore and understand the expanding field of complex fluids.

  15. Multi-scale approach to radiation damage induced by ion beams: complex DNA damage and effects of thermal spikes

    International Nuclear Information System (INIS)

    Surdutovich, E.; Yakubovich, A.V.; Solov'yov, A.V.; Surdutovich, E.; Yakubovich, A.V.; Solov'yov, A.V.

    2010-01-01

    We present the latest advances of the multi-scale approach to radiation damage caused by irradiation of a tissue with energetic ions, and report calculations of complex DNA damage and the effects of thermal spikes on biomolecules. The multi-scale approach aims to quantify the most important physical, chemical, and biological phenomena taking place during and following irradiation with ions, and to provide a better means for clinically necessary calculations with adequate accuracy. We suggest a way of quantifying complex clustered damage, one of the most important features of the radiation damage caused by ions. This quantification allows us to study how the clustering of DNA lesions affects the lethality of the damage. We discuss the first results of molecular dynamics simulations of ubiquitin in the environment of thermal spikes, predicted to occur in tissue for a short time after an ion's passage in the vicinity of the ion's track. (authors)

  16. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
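    The decompression direction of the patent is easy to sketch: each entry is a point in the complex plane, Newton's iteration maps it to one of the polynomial's roots, and the value attached to that root becomes the entry's value. The roots, grid and value map below are invented, and the search-based EDC inverse step is not shown:

```python
# Sketch of the root-generated data file (RGDF) direction: map each
# complex-plane point to a polynomial root via Newton's method, then
# read off the value assigned to that root. All inputs are invented.
import numpy as np

roots = np.array([1.0, -0.5 + 0.8j, -0.5 - 0.8j])    # roots of a cubic
values = {0: 10, 1: 20, 2: 30}                       # value assigned per root
coeffs = np.poly(roots)                              # polynomial coefficients
dcoeffs = np.polyder(coeffs)

def entry_value(z, iters=50):
    for _ in range(iters):                           # Newton's iteration
        z = z - np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
    return values[int(np.argmin(np.abs(roots - z)))] # nearest root wins

grid = [complex(re, im) for re in (-1, 0, 1) for im in (-1, 0, 1)]
print([entry_value(z) for z in grid])                # the generated data file
```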

  17. Comparison of association mapping methods in a complex pedigreed population

    DEFF Research Database (Denmark)

    Sahana, Goutam; Guldbrandtsen, Bernt; Janss, Luc

    2010-01-01

    …to collect SNP signals in intervals, to avoid the scattering of a QTL signal over multiple neighboring SNPs. Methods not accounting for genetic background (full pedigree information) performed worse, and methods using haplotypes were considerably worse with a high false-positive rate, probably due… to the presence of low-frequency haplotypes. It was necessary to account for full relationships among individuals to avoid excess false discovery. Although the methods were tested on a cattle pedigree, the results are applicable to any population with a complex pedigree structure…

  18. Research of the effectiveness of parallel multithreaded realizations of interpolation methods for scaling raster images

    Science.gov (United States)

    Vnukov, A. A.; Shershnev, M. B.

    2018-01-01

    The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
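    The abstract does not name the three interpolation methods (a common trio is nearest-neighbour, bilinear and bicubic), so the sketch below shows just one of them, bilinear scaling, parallelised over row blocks with a thread pool, which is the same coarse-grained decomposition whose degree of parallelism such studies vary:

```python
# Hedged sketch: bilinear image scaling with a row-block thread pool.
# The interpolation formula is standard; the parallel decomposition is
# illustrative, not the paper's implementation.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def scale_rows(src, out_shape, rows):
    h, w = src.shape
    H, W = out_shape
    out = np.empty((len(rows), W), dtype=float)
    for k, i in enumerate(rows):
        y = i * (h - 1) / (H - 1)
        y0 = int(y); y1 = min(y0 + 1, h - 1); fy = y - y0
        for j in range(W):
            x = j * (w - 1) / (W - 1)
            x0 = int(x); x1 = min(x0 + 1, w - 1); fx = x - x0
            top = src[y0, x0] * (1 - fx) + src[y0, x1] * fx
            bot = src[y1, x0] * (1 - fx) + src[y1, x1] * fx
            out[k, j] = top * (1 - fy) + bot * fy    # bilinear blend
    return out

def scale_parallel(src, H, W, workers=4):
    blocks = np.array_split(np.arange(H), workers)   # row-block decomposition
    with ThreadPoolExecutor(workers) as ex:
        parts = ex.map(lambda r: scale_rows(src, (H, W), r), blocks)
    return np.vstack(list(parts))

img = np.arange(16, dtype=float).reshape(4, 4)
print(scale_parallel(img, 8, 8).shape)               # (8, 8)
```

    In CPython, pure-Python inner loops do not run concurrently under the GIL; a production version would vectorise or release the GIL, but the row-block split is the point being illustrated.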

  19. Macro-scale complexity of nano- to micro-scale architecture of ...

    Indian Academy of Sciences (India)

    …mobile, due to the lack of correlation between the silicon oxide layer and the final olivine particles, leading … (olivine) systems. … A branched forsterite crystal system (scale bar = …) … therefore, that no template mechanism is operating between …

  20. Comparison of topotactic fluorination methods for complex oxide films

    Science.gov (United States)

    Moon, E. J.; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; Barbash, D.; May, S. J.

    2015-06-01

    We have investigated the synthesis of SrFeO3-αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  1. Characterizing scaling properties of complex signals with missed data segments using the multifractal analysis

    Science.gov (United States)

    Pavlov, A. N.; Pavlova, O. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Semyachkina-Glushkovskaya, O. V.; Kurths, J.

    2018-01-01

    The scaling properties of complex processes may be highly influenced by the presence of various artifacts in experimental recordings. Their removal produces changes in the singularity spectra and the Hölder exponents as compared with the original artifact-free data, and these changes are significantly different for positively correlated and anti-correlated signals. While signals with power-law correlations are nearly insensitive to the loss of significant parts of the data, the removal of fragments of anti-correlated signals is more crucial for further data analysis. In this work, we study the ability to characterize scaling features of chaotic and stochastic processes with distinct correlation properties using a wavelet-based multifractal analysis, and discuss differences between the effect of missed data for synchronous and asynchronous oscillatory regimes. We show that even an extreme data loss still allows physiological processes such as the cerebral blood flow dynamics to be characterized.
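    The paper's wavelet-based multifractal machinery is beyond a short sketch, but the essence of estimating scaling exponents in the presence of missed data can be illustrated with generalised structure functions, S_q(τ) = ⟨|x(t+τ) − x(t)|^q⟩ ~ τ^ζ(q), where gap-affected increments are simply skipped when averaging. Signal and gap below are synthetic:

```python
# Structure-function scaling exponents ζ(q) for a signal with a missing
# segment (marked NaN); a simpler stand-in for the wavelet-based analysis.
import numpy as np

def zeta_spectrum(x, qs=(1, 2, 3), taus=(1, 2, 4, 8, 16, 32)):
    zetas = []
    for q in qs:
        s = [np.nanmean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
        # slope of log S_q(tau) against log tau
        zetas.append(np.polyfit(np.log(taus), np.log(s), 1)[0])
    return dict(zip(qs, zetas))

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(100000))   # Brownian-like test signal
x[5000:7000] = np.nan                        # a "missed" data segment
print(zeta_spectrum(x))                      # ζ(q) ≈ q/2 for Brownian motion
```

    A linear ζ(q) indicates mono-fractal scaling; curvature in q is the multifractal signature the singularity spectrum encodes.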

  2. Critical initial-slip scaling for the noisy complex Ginzburg–Landau equation

    International Nuclear Information System (INIS)

    Liu, Weigang; Täuber, Uwe C

    2016-01-01

    We employ the perturbative field-theoretic renormalization group method to investigate the universal critical behavior near the continuous non-equilibrium phase transition in the complex Ginzburg–Landau equation with additive white noise. This stochastic partial differential equation describes a remarkably wide range of physical systems: coupled nonlinear oscillators subject to external noise near a Hopf bifurcation instability; spontaneous structure formation in non-equilibrium systems, e.g., in cyclically competing populations; and driven-dissipative Bose–Einstein condensation, realized in open systems on the interface of quantum optics and many-body physics, such as cold atomic gases and exciton-polaritons in pumped semiconductor quantum wells in optical cavities. Our starting point is a noisy, dissipative Gross–Pitaevskii or nonlinear Schrödinger equation, or equivalently purely relaxational kinetics originating from a complex-valued Landau–Ginzburg functional, which generalizes the standard model A equilibrium critical dynamics of a non-conserved complex order parameter field. We study the universal critical behavior of this system in the early stages of its relaxation from a Gaussian-weighted fully randomized initial state. In this critical aging regime, time translation invariance is broken, and the dynamics is characterized by the stationary static and dynamic critical exponents, as well as an independent ‘initial-slip’ exponent. We show that to first order in the dimensional expansion about the upper critical dimension, this initial-slip exponent in the complex Ginzburg–Landau equation is identical to its equilibrium model A counterpart. We furthermore employ the renormalization group flow equations as well as construct a suitable complex spherical model extension to argue that this conclusion likely remains true to all orders in the perturbation expansion. (paper)

  3. Complexity Quantification for Overhead Transmission Line Emergency Repair Scheme via a Graph Entropy Method Improved with Petri Net and AHP Weighting Method

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2014-01-01

    Full Text Available According to the characteristics of emergency repair in overhead transmission line accidents, a complexity quantification method for emergency repair schemes is proposed based on the entropy method in software engineering, improved by using the group AHP (analytic hierarchy process) method and Petri nets. Firstly, an information structure chart model and a process control flowchart model are built with Petri nets. Then the factors affecting the complexity of an emergency repair scheme are quantified into corresponding entropy values. Finally, using the group AHP method, a weight coefficient is assigned to each entropy value before calculating the overall entropy value for the whole emergency repair scheme. Comparing group AHP weighting with average weighting, experimental results for the former showed a stronger correlation between the quantified complexity entropy values and the actual time consumed in repair, which indicates that this new method is more valid.
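    The weighting step can be sketched compactly: AHP weights are the normalised principal eigenvector of a pairwise-comparison matrix, and the overall score is the weighted sum of the per-factor entropy values. The comparison matrix and entropies below are invented; the paper's Petri-net-derived entropy values are not reproduced:

```python
# AHP weights from the principal eigenvector, then a weighted entropy sum.
import numpy as np

# pairwise comparisons among three complexity factors (Saaty 1-9 scale)
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(C)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                          # normalised AHP weights

entropies = np.array([2.4, 1.1, 0.7])    # per-factor entropies (invented)
print("weights:", w, "overall complexity:", w @ entropies)
```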

  4. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost, in particular, limits its use. As computer models have grown in size (e.g., in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  5. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun

    2016-01-08

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored mainly by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices is performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate the stock market dynamics. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  6. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    International Nuclear Information System (INIS)

    Li, Rui; Wang, Jun

    2016-01-01

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored mainly by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices is performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate the stock market dynamics. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
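    The Lempel–Ziv complexity used in the two records above is straightforward to sketch: binarise the series and count the new phrases in the exhaustive LZ (1976) parsing; irregular series yield far more phrases than regular ones. The "returns" below are synthetic:

```python
# Lempel-Ziv (1976) complexity of a binarised return series.
import numpy as np

def lz76(s):
    """Number of distinct phrases in the exhaustive LZ parsing of string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while it is still a substring of what came before
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1          # s[i:i+l] is a new phrase
        i += l
    return c

rng = np.random.default_rng(1)
returns = rng.standard_normal(2000)            # stand-in for daily returns
bits = "".join("1" if r > np.median(returns) else "0" for r in returns)
print(lz76(bits), lz76("01" * 1000))           # random series >> periodic series
```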

  7. Digital reef rugosity estimates coral reef habitat complexity.

    Science.gov (United States)

    Dustan, Phillip; Doherty, Orla; Pardede, Shinta

    2013-01-01

    Ecological habitats with greater structural complexity contain more species due to increased niche diversity. This is especially apparent on coral reefs, where individual coral colonies aggregate to give a reef its morphology, species zonation, and three-dimensionality. Structural complexity is classically measured with a reef rugosity index, which is the ratio of a straight-line transect to the distance a flexible chain of equal length travels when draped over the reef substrate; yet other techniques, from visual categories to remote sensing, have been used to characterize structural complexity at scales from microhabitats to reefscapes. Reef-scale methods either lack quantitative precision or are too time-consuming to be routinely practical, while remotely sensed indices are mismatched to the finer-scale morphology of coral colonies and reef habitats. In this communication a new digital technique, Digital Reef Rugosity (DRR), is described which utilizes a self-contained water level gauge enabling a diver to quickly and accurately characterize rugosity with non-invasive millimeter-scale measurements of coral reef surface height at decimeter intervals along meter-scale transects. The precise measurements require very little post-processing and are easily imported into a spreadsheet for statistical analyses and modeling. To assess its applicability we investigated the relationship between DRR and fish community structure at four coral reef sites on Menjangan Island off the northwest corner of Bali, Indonesia and one on mainland Bali to the west of Menjangan Island; our findings show a positive relationship between DRR and fish diversity. Since structural complexity drives key ecological processes on coral reefs, we consider that DRR may become a useful quantitative community-level descriptor to characterize reef complexity.
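    A chain-type rugosity index is easy to compute from a DRR-style height profile: rugosity is the contour distance divided by the straight-line distance along the transect. Sample spacing and heights below are invented:

```python
# Rugosity from surface heights sampled at fixed horizontal intervals.
import numpy as np

def rugosity(heights, dx=0.1):
    """heights in metres at dx-metre intervals along a transect."""
    contour = np.sum(np.hypot(dx, np.diff(heights)))  # draped-chain length
    straight = dx * (len(heights) - 1)                # straight-line length
    return contour / straight     # 1.0 = perfectly flat, larger = rougher

flat = np.zeros(101)
reef = 0.3 * np.abs(np.sin(np.linspace(0, 12 * np.pi, 101)))
print(rugosity(flat), rugosity(reef))                 # 1.0 vs > 1
```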

  8. A simple method for determining polymeric IgA-containing immune complexes.

    Science.gov (United States)

    Sancho, J; Egido, J; González, E

    1983-06-10

    A simplified assay to measure polymeric IgA-immune complexes in biological fluids is described. The assay is based upon the specific binding of a secretory component for polymeric IgA. In the first step, multimeric IgA (monomeric and polymeric) immune complexes are determined by the standard Raji cell assay. Secondly, labeled secretory component added to the assay is bound to polymeric IgA-immune complexes previously fixed to Raji cells, but not to monomeric IgA immune complexes. To avoid false positives due to possible complement-fixing IgM immune complexes, prior IgM immunoadsorption is performed. Using anti-IgM antiserum coupled to CNBr-activated Sepharose 4B this step is not time-consuming. Polymeric IgA has a low affinity constant and binds weakly to Raji cells, as Scatchard analysis of the data shows. Thus, polymeric IgA immune complexes do not bind to Raji cells directly through Fc receptors, but through complement breakdown products, as with IgG-immune complexes. Using this method, we have been successful in detecting specific polymeric-IgA immune complexes in patients with IgA nephropathy (Berger's disease) and alcoholic liver disease, as well as in normal subjects after meals of high protein content. This new, simple, rapid and reproducible assay might help to study the physiopathological role of polymeric IgA immune complexes in humans and animals.

  9. Estimating Catchment-Scale Snowpack Variability in Complex Forested Terrain, Valles Caldera National Preserve, NM

    Science.gov (United States)

    Harpold, A. A.; Brooks, P. D.; Biederman, J. A.; Swetnam, T.

    2011-12-01

    Difficulty estimating snowpack variability across complex forested terrain currently hinders the prediction of water resources in the semi-arid Southwestern U.S. Catchment-scale estimates of snowpack variability are necessary for addressing ecological, hydrological, and water resources issues, but are often interpolated from a small number of point-scale observations. In this study, we used LiDAR-derived distributed datasets to investigate how elevation, aspect, topography, and vegetation interact to control catchment-scale snowpack variability. The study area is the Redondo massif in the Valles Caldera National Preserve, NM, a resurgent dome that varies from 2500 to 3430 m and drains from all aspects. Mean LiDAR-derived snow depths from four catchments (2.2 to 3.4 km^2) draining different aspects of the Redondo massif varied by 30%, despite similar mean elevations and mixed conifer forest cover. To better quantify this variability in snow depths we performed a multiple linear regression (MLR) at a 7.3 by 7.3 km study area (5 × 10^6 snow depth measurements) comprising the four catchments. The MLR showed that elevation explained 45% of the variability in snow depths across the study area, aspect explained 18% (dominated by N-S aspect), and vegetation 2% (canopy density and height). This linear relationship was not transferable to the catchment-scale however, where additional MLR analyses showed the influence of aspect and elevation differed between the catchments. The strong influence of North-South aspect in most catchments indicated that the solar radiation is an important control on snow depth variability. To explore the role of solar radiation, a model was used to generate winter solar forcing index (SFI) values based on the local and remote topography. The SFI was able to explain a large amount of snow depth variability in areas with similar elevation and aspect. Finally, the SFI was modified to include the effects of shading from vegetation (in and out of
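    The regression step is standard: snow depth against elevation, a north-south aspect index and canopy metrics, with the variance explained read from R². A minimal sketch on synthetic data (the coefficients are invented, not the study's):

```python
# Multiple linear regression of snow depth on terrain/vegetation predictors.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
elev = rng.uniform(2500, 3430, n)              # elevation (m)
northness = rng.uniform(-1, 1, n)              # cos(aspect), N-S index
canopy = rng.uniform(0, 1, n)                  # canopy density
depth = 0.002 * (elev - 2500) + 0.4 * northness + 0.1 * canopy \
        + rng.normal(0, 0.3, n)                # "observed" snow depth (m)

X = np.column_stack([np.ones(n), elev, northness, canopy])
beta, *_ = np.linalg.lstsq(X, depth, rcond=None)
resid = depth - X @ beta
r2 = 1 - resid.var() / depth.var()
print("coefficients:", beta.round(4), "R^2:", round(r2, 3))
```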

  10. A Method to Predict the Structure and Stability of RNA/RNA Complexes.

    Science.gov (United States)

    Xu, Xiaojun; Chen, Shi-Jie

    2016-01-01

    RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculation due to the conformational constraint and the intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy for the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.

  11. Control protocol: large scale implementation at the CERN PS complex - a first assessment

    International Nuclear Information System (INIS)

    Abie, H.; Benincasa, G.; Coudert, G.; Davydenko, Y.; Dehavay, C.; Gavaggio, R.; Gelato, G.; Heinze, W.; Legras, M.; Lustig, H.; Merard, L.; Pearson, T.; Strubin, P.; Tedesco, J.

    1994-01-01

    The Control Protocol is a model-based, uniform access procedure from a control system to accelerator equipment. It was proposed at CERN about 5 years ago and prototypes were developed in the following years. More recently, this procedure has been finalized and implemented at a large scale in the PS Complex. More than 300 pieces of equipment are now using this protocol in normal operation and another 300 are under implementation. These include power converters, vacuum systems, beam instrumentation devices, RF equipment, etc. This paper describes how the single general procedure is applied to the different kinds of equipment. The advantages obtained are also discussed. ((orig.))

  12. Evaluating a complex system-wide intervention using the difference in differences method: the Delivering Choice Programme.

    Science.gov (United States)

    Round, Jeff; Drake, Robyn; Kendall, Edward; Addicott, Rachael; Agelopoulos, Nicky; Jones, Louise

    2015-03-01

    We report the use of difference in differences (DiD) methodology to evaluate a complex, system-wide healthcare intervention. We use the worked example of evaluating the Marie Curie Delivering Choice Programme (DCP) for advanced illness in a large urban healthcare economy. DiD was selected because a randomised controlled trial was not feasible. The method allows for before-and-after comparison of changes that occur in an intervention site with a matched control site. This enables analysts to control for the effect of the intervention in the absence of a local control. Any policy, seasonal or other confounding effects over the test period are assumed to have occurred in a balanced way at both sites. Data were obtained from primary care trusts. Outcomes were place of death, inpatient admissions, length of stay and costs. Small changes were identified between pre- and post-DCP outputs in the intervention site. The proportion of home deaths and median cost increased slightly, while the number of admissions per patient and the average length of stay per admission decreased slightly. None of these changes was statistically significant. Effect estimates were limited by small numbers accessing new services and by selection bias in the sample population and comparator site. In evaluating the effect of a complex healthcare intervention, the choice of analysis method and output measures is crucial. Alternatives to randomised controlled trials may be required for evaluating large-scale complex interventions, and the DiD approach is suitable, subject to careful selection of measured outputs and control population.
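    The DiD estimator itself is compact: the effect is the (post − pre) change at the intervention site minus the (post − pre) change at the control site, which is also the interaction coefficient in a group × period regression. The outcome figures below are invented:

```python
# Difference-in-differences in its two equivalent forms.
import numpy as np

# mean outcome (e.g., proportion of home deaths) per site and period
pre_int, post_int = 0.18, 0.22       # intervention site
pre_ctl, post_ctl = 0.17, 0.19       # matched control site
did = (post_int - pre_int) - (post_ctl - pre_ctl)
print("DiD estimate:", round(did, 3))                   # 0.02

# equivalent regression form: y ~ group + period + group:period
g = np.array([0, 0, 1, 1])           # control / intervention
p = np.array([0, 1, 0, 1])           # pre / post
y = np.array([pre_ctl, post_ctl, pre_int, post_int])
X = np.column_stack([np.ones(4), g, p, g * p])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("interaction coefficient:", round(beta[3], 3))    # same 0.02
```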

  13. BRAND program complex for neutron-physical experiment simulation by the Monte-Carlo method

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.

    1984-01-01

    The capabilities of the BRAND program complex for simulating neutron and γ-radiation transport by the Monte-Carlo method are briefly described. The complex includes the following modules: a geometry module, a source module, a detector module, and modules for sampling the particle's direction of motion after an interaction and its free path. The complex is written in the FORTRAN language and implemented on the BESM-6 computer.
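    Two of the modules listed, sampling a free path and a post-interaction direction, are generic Monte-Carlo transport machinery and can be sketched independently of BRAND itself. The slab geometry and cross-sections below are invented:

```python
# Hedged sketch of Monte-Carlo particle transport through a slab:
# exponential free paths with total cross-section sigma_t and isotropic
# scattering directions. This is textbook machinery, not the BRAND code.
import numpy as np

rng = np.random.default_rng(3)
sigma_t, absorb_frac, slab = 1.0, 0.3, 5.0    # cm^-1, fraction, cm

def isotropic_direction():
    mu = rng.uniform(-1, 1)                   # cosine of the polar angle
    phi = rng.uniform(0, 2 * np.pi)
    s = np.sqrt(1 - mu * mu)
    return np.array([s * np.cos(phi), s * np.sin(phi), mu])

def transmitted():
    pos, d = np.zeros(3), np.array([0.0, 0.0, 1.0])   # normal incidence
    while True:
        pos = pos + d * (-np.log(rng.random()) / sigma_t)   # free path
        if pos[2] >= slab:
            return True                       # leaked through the slab
        if pos[2] < 0:
            return False                      # reflected back out
        if rng.random() < absorb_frac:
            return False                      # absorbed
        d = isotropic_direction()             # scattered

print(np.mean([transmitted() for _ in range(20000)]))   # transmission prob.
```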

  14. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.

    Science.gov (United States)

    Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin

    2018-03-02

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the alternatives. The effectiveness and practicability of the decision-making method are further verified by an example evaluating the sustainable development ability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and with the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method, after determining the index weights based on grey correlation.

  15. Assessment of the methods for determining net radiation at different time-scales of meteorological variables

    Directory of Open Access Journals (Sweden)

    Ni An

    2017-04-01

    Full Text Available When modeling the soil/atmosphere interaction, it is of paramount importance to determine the net radiation flux. There are two common calculation methods for this purpose. Method 1 relies on use of the air temperature, while Method 2 relies on use of both air and soil temperatures. To date, there has been no consensus on the application of these two methods. In this study, the half-hourly data of solar radiation recorded at an experimental embankment are used to calculate the net radiation and long-wave radiation at different time-scales (half-hourly, hourly, and daily) using the two methods. The results show that, compared with Method 2, which has been widely adopted in agronomical, geotechnical and geo-environmental applications, Method 1 is more feasible for its simplicity and accuracy at shorter time-scales. Moreover, at longer time-scales, daily for instance, smaller variations of net radiation and long-wave radiation are obtained, suggesting that detailed soil temperature variations cannot be resolved. In other words, shorter time-scales are preferred in determining the net radiation flux.
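    A hedged sketch of the two options, written with standard radiation-budget terms rather than the paper's exact formulas: both compute Rn = (1 − albedo)·Rs + L_in − L_out with Stefan-Boltzmann long-wave fluxes; Method 1 approximates the emitting surface by the air temperature, while Method 2 uses the measured soil-surface temperature for L_out. Emissivities and inputs are illustrative assumptions:

```python
# Net radiation with air-only (Method 1) vs air+soil (Method 2) inputs.
# Formulas and parameter values are standard textbook assumptions, not
# necessarily those used in the paper.
SIGMA = 5.67e-8                        # Stefan-Boltzmann, W m^-2 K^-4

def net_radiation(Rs, T_air, T_soil=None, albedo=0.23, eps_a=0.85, eps_s=0.95):
    T_surf = T_air if T_soil is None else T_soil   # Method 1 vs Method 2
    L_in = eps_a * SIGMA * T_air ** 4              # incoming long-wave
    L_out = eps_s * SIGMA * T_surf ** 4            # outgoing long-wave
    return (1 - albedo) * Rs + L_in - L_out

Rs, Ta, Ts = 600.0, 298.15, 305.15     # W/m^2, K, K (a half-hourly sample)
print(net_radiation(Rs, Ta))           # Method 1: air temperature only
print(net_radiation(Rs, Ta, Ts))       # Method 2: air + soil temperatures
```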

  16. One-fiftieth scale model studies of 40-by 80-foot and 80-by 120-foot wind tunnel complex at NASA Ames Research Center

    Science.gov (United States)

    Schmidt, Gene I.; Rossow, Vernon J.; Vanaken, Johannes M.; Parrish, Cynthia L.

    1987-01-01

    The features of a 1/50-scale model of the National Full-Scale Aerodynamics Complex are first described. An overview is then given of some results from the various tests conducted with the model to aid in the design of the full-scale facility. It was found that the model tunnel simulated accurately many of the operational characteristics of the full-scale circuits. Some characteristics predicted by the model were, however, noted to differ from previous full-scale results by about 10%.

  17. Structure of an automated educational-methodical complex for technical disciplines

    Directory of Open Access Journals (Sweden)

    Vyacheslav Mikhailovich Dmitriev

    2010-12-01

    Full Text Available The article poses and solves the problem of automating and informatizing the process of training students on the basis of the introduced system-organizational forms, which have collectively received the name of educational-methodical complexes for a discipline.

  18. Complexity in the scaling of velocity fluctuations in the high-latitude F-region ionosphere

    Directory of Open Access Journals (Sweden)

    M. L. Parkinson

    2008-09-01

    Full Text Available The temporal scaling properties of F-region velocity fluctuations, δv_los, were characterised over 17 octaves of temporal scale from τ=1 s to <1 day using a new database of 1-s time resolution SuperDARN radar measurements. After quality control, 2.9 (1.9) million fluctuations were recorded during 31.5 (40.4) days of discretionary mode soundings using the Tasmanian (New Zealand) radars. If the fluctuations were statistically self-similar, the probability density functions (PDFs) of δv_los would collapse onto the same PDF under the scaling P_s(δv_s, τ) = τ^α P(δv_los, τ) with δv_s = δv_los τ^(−α), where α is the scaling exponent. The variations in scaling exponents α and multi-fractal behaviour were estimated using peak scaling and generalised structure function (GSF) analyses, and a new method based upon minimising the differences between re-scaled probability density functions (PDFs). The efficiency of this method enabled calculation of "α spectra", the temporal spectra of scaling exponents from τ=1 s to ~2048 s. The large number of samples enabled calculation of α spectra for data separated into 2-h bins of MLT as well as two main physical regimes: Population A echoes with Doppler spectral width <75 m s−1 concentrated on closed field lines, and Population B echoes with spectral width >150 m s−1 concentrated on open field lines. For all data there was a scaling break at τ~10 s, and the similarity of the fluctuations beneath this scale may be related to the large spatial averaging (~100 km × 45 km) employed by SuperDARN radars. For Tasmania Population B, the velocity fluctuations exhibited approximately mono-fractal power-law scaling between τ~8 s and 2048 s (34 min), and probably up to several hours. The scaling exponents were generally less than that expected for basic MHD
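    The re-scaled-PDF method lends itself to a short sketch: choose the exponent α that makes the PDFs of τ^(−α)-rescaled increments most alike across scales. The test signal below is Brownian-like, so the recovered exponent should be near 0.5; the histogram-difference score is a simple stand-in for the paper's minimisation:

```python
# Estimate a scaling exponent by collapsing rescaled-increment PDFs.
import numpy as np

def collapse_error(x, alpha, taus=(4, 16, 64), bins=np.linspace(-3, 3, 41)):
    pdfs = []
    for tau in taus:
        dv = (x[tau:] - x[:-tau]) * tau ** (-alpha)   # δv_s = δv τ^(-α)
        pdfs.append(np.histogram(dv, bins=bins, density=True)[0])
    return np.mean([np.abs(p - pdfs[0]).mean() for p in pdfs[1:]])

rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(200000))            # Brownian-like signal
alphas = np.linspace(0.1, 0.9, 17)
best = alphas[np.argmin([collapse_error(x, a) for a in alphas])]
print("estimated scaling exponent:", best)            # ≈ 0.5
```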

  19. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2016-02-01

    Full Text Available The power series solution is a cheap and effective method to solve nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and the nonlinear boundary value problems. A novel power series method by considering the multiple scales $R_k$ in the power term $(t/R_k)^k$ is developed, which are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In the method a huge value times a tiny value is avoided, such that we can decrease the numerical instability, which is the main reason for the failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provide very accurate numerical solutions of the problems considered in this paper.
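    The conditioning argument is easy to demonstrate: raw powers t^k mix huge and tiny columns in the interpolation (Vandermonde) matrix, while scaled powers (t/R)^k keep every column O(1). The sketch below uses a single scale R; the paper derives k-dependent scales R_k from an integral:

```python
# Condition number of the interpolation matrix: raw vs scaled power basis.
import numpy as np

t = np.linspace(0, 10, 20)                 # collocation points
k = np.arange(12)                          # polynomial orders

V_raw = t[:, None] ** k                    # columns t^k (huge vs tiny values)
R = t.max()
V_scaled = (t[:, None] / R) ** k           # columns (t/R)^k, all O(1)

print(np.linalg.cond(V_raw), np.linalg.cond(V_scaled))
```

    The scaled basis is still a Vandermonde matrix and hence not perfectly conditioned, but the contrast in condition numbers illustrates why avoiding "a huge value times a tiny value" stabilises the expansion.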

  20. Comparison of topotactic fluorination methods for complex oxide films

    Energy Technology Data Exchange (ETDEWEB)

    Moon, E. J., E-mail: em582@drexel.edu; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; May, S. J., E-mail: smay@coe.drexel.edu [Department of Materials Science and Engineering, Drexel University, Philadelphia, Pennsylvania 19104 (United States); Barbash, D. [Centralized Research Facilities, Drexel University, Philadelphia, Pennsylvania 19104 (United States)

    2015-06-01

    We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  1. Comparison of topotactic fluorination methods for complex oxide films

    Directory of Open Access Journals (Sweden)

    E. J. Moon

    2015-06-01

    Full Text Available We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  2. Complexity in built environment, health, and destination walking: a neighborhood-scale analysis.

    Science.gov (United States)

    Carlson, Cynthia; Aytur, Semra; Gardner, Kevin; Rogers, Shannon

    2012-04-01

    This study investigates the relationships between the built environment, the physical attributes of the neighborhood, and the residents' perceptions of those attributes. It focuses on destination walking and self-reported health, and does so at the neighborhood scale. The built environment, in particular sidewalks, road connectivity, and proximity of local destinations, correlates with destination walking, and similarly destination walking correlates with physical health. It was found, however, that the built environment and health metrics may not be simply, directly correlated but rather may be correlated through a series of feedback loops that may regulate risk in different ways in different contexts. In particular, evidence for a feedback loop between physical health and destination walking is observed, as well as separate feedback loops between destination walking and objective metrics of the built environment, and destination walking and perception of the built environment. These feedback loops affect the ability to observe how the built environment correlates with residents' physical health. Previous studies have investigated pieces of these associations, but are potentially missing the more complex relationships present. This study proposes a conceptual model describing complex feedback relationships between destination walking and public health, with the built environment expected to increase or decrease the strength of the feedback loop. Evidence supporting these feedback relationships is presented.

  3. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.
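    The multi-objective core of such an EA is the Pareto filter that retains only non-dominated designs when energy, economic and ecological objectives are minimised simultaneously. A minimal sketch with random stand-in designs (the paper's decomposition and constraint handling are not reproduced):

```python
# Pareto (non-dominated) filter over candidate designs, all objectives
# minimised; the selection kernel of a multi-objective EA.
import numpy as np

def pareto_front(F):
    """F: (n_designs, n_objectives); returns indices of non-dominated rows."""
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(5)
F = rng.random((50, 3))          # columns: energy, economy, ecology scores
front = pareto_front(F)
print(len(front), "non-dominated designs out of", len(F))
```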

  4. Micro-and/or nano-scale patterned porous membranes, methods of making membranes, and methods of using membranes

    KAUST Repository

    Wang, Xianbin; Chen, Wei; Wang, Zhihong; Zhang, Xixiang; Yue, Weisheng; Lai, Zhiping

    2015-01-01

    Embodiments of the present disclosure provide for materials that include a pre-designed patterned, porous membrane (e.g., micro- and/or nano-scale patterned), structures or devices that include a pre-designed patterned, porous membrane, methods of making pre-designed patterned, porous membranes, methods of separation, and the like.

  5. Micro-and/or nano-scale patterned porous membranes, methods of making membranes, and methods of using membranes

    KAUST Repository

    Wang, Xianbin

    2015-01-22

    Embodiments of the present disclosure provide for materials that include a pre-designed patterned, porous membrane (e.g., micro- and/or nano-scale patterned), structures or devices that include a pre-designed patterned, porous membrane, methods of making pre-designed patterned, porous membranes, methods of separation, and the like.

  6. Measurement of complex permittivity of composite materials using waveguide method

    NARCIS (Netherlands)

    Tereshchenko, O.V.; Buesink, Frederik Johannes Karel; Leferink, Frank Bernardus Johannes

    2011-01-01

    Complex dielectric permittivity of 4 different composite materials has been measured using the transmission-line method. A waveguide fixture in L, S, C and X band was used for the measurements. Measurement accuracy is influenced by air gaps between test fixtures and the materials tested. One of the

  7. Rapid high temperature field test method for evaluation of geothermal calcite scale inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    Asperger, R.G.

    1982-08-01

    A test method is described which allows the rapid field testing of calcite scale inhibitors in high-temperature geothermal brines. Five commercial formulations, chosen on the basis of laboratory screening tests, were tested in brines with low total dissolved solids at ca. 500 °F. Four were found to be effective; of these, two were found to be capable of removing recently deposited scale. One chemical was tested in the full-flow brine line for 6 weeks. It was shown to stop a severe surface scaling problem at the well's control valve, thus proving the viability of the rapid test method. (12 refs.)

  8. Generation of new solutions of the stationary axisymmetric Einstein equations by a double complex function method

    International Nuclear Information System (INIS)

    Zhong, Z.

    1985-01-01

    A new approach to the solution of certain differential equations, the double complex function method, is developed, combining ordinary complex numbers and hyperbolic complex numbers. This method is applied to the theory of stationary axisymmetric Einstein equations in general relativity. A family of exact double solutions, double transformation groups, and n-soliton double solutions are obtained
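    The "double" ingredient is the hyperbolic (split-complex) unit j with j² = +1, used alongside the ordinary imaginary unit i with i² = −1. A minimal arithmetic sketch (the application to the stationary axisymmetric Einstein equations is far beyond this illustration):

```python
# Split-complex (hyperbolic) numbers a + b*j with j^2 = +1.
from dataclasses import dataclass

@dataclass
class Hyperbolic:
    a: float   # real part
    b: float   # hyperbolic part, j^2 = +1

    def __add__(self, o):
        return Hyperbolic(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        # (a + bj)(c + dj) = (ac + bd) + (ad + bc) j   since j^2 = +1
        return Hyperbolic(self.a * o.a + self.b * o.b,
                          self.a * o.b + self.b * o.a)

    def modulus2(self):
        return self.a ** 2 - self.b ** 2   # indefinite, unlike |z|^2

j = Hyperbolic(0.0, 1.0)
print(j * j)          # Hyperbolic(a=1.0, b=0.0): j^2 = +1
print((1j) * (1j))    # (-1+0j): the ordinary complex unit, i^2 = -1
```

    A "double complex" number in the paper's sense combines both kinds of unit, which is what lets one transformation generate families of solutions at once.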

  9. Methods for large-scale international studies on ICT in education

    NARCIS (Netherlands)

    Pelgrum, W.J.; Plomp, T.; Voogt, Joke; Knezek, G.A.

    2008-01-01

    International comparative assessment is a research method applied for describing and analyzing educational processes and outcomes. They are used to ‘describe the status quo’ in educational systems from an international comparative perspective. This chapter reviews different large scale international

  10. Complex networks with scale-free nature and hierarchical modularity

    Science.gov (United States)

    Shekatkar, Snehal M.; Ambika, G.

    2015-09-01

    Generative mechanisms which lead to the empirically observed structure of networked systems from diverse fields like biology, technology and social sciences form a very important part of the study of complex networks. The structure of many networked systems like the biological cell, human society and the World Wide Web markedly deviates from that of completely random networks, indicating the presence of underlying processes. Often the main process involved in their evolution is the addition of links between existing nodes having a common neighbor. In this context we introduce an important property of the nodes, which we call mediating capacity, that is generic to many networks. This capacity decreases rapidly with increase in degree, making hubs weak mediators of the process. We show that this property of nodes provides an explanation for the simultaneous occurrence of the observed scale-free structure and hierarchical modularity in many networked systems. This also explains the high clustering and small path lengths seen in real networks as well as non-zero degree correlations. Our study also provides insight into the local process which ultimately leads to the emergence of preferential attachment and hence is also important in understanding robustness and control of real networks as well as processes happening on real networks.
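    The growth rule described above can be sketched directly: repeatedly close a triangle through a mediator chosen with probability proportional to its mediating capacity, here taken as an invented, rapidly decaying form 1/k² in the mediator's degree k:

```python
# Triadic-closure growth with a degree-decaying "mediating capacity".
# The 1/k^2 capacity and all sizes are invented for illustration.
import random

random.seed(6)
n = 60
adj = {i: set() for i in range(n)}
for i in range(1, n):                     # seed with a random connected tree
    k = random.randrange(i)
    adj[i].add(k); adj[k].add(i)

for _ in range(150):                      # growth: link two 2nd neighbours
    nodes = [v for v in adj if len(adj[v]) >= 2]
    capacity = [len(adj[v]) ** -2 for v in nodes]    # weak hubs mediate less
    m = random.choices(nodes, weights=capacity)[0]   # pick the mediator
    u, w = random.sample(sorted(adj[m]), 2)          # two of its neighbours
    adj[u].add(w); adj[w].add(u)                     # close the triangle

degrees = sorted((len(adj[v]) for v in adj), reverse=True)
print("five largest degrees:", degrees[:5], "mean:", sum(degrees) / n)
```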

  11. Rahman Prize Lecture: Lattice Boltzmann simulation of complex states of flowing matter

    Science.gov (United States)

    Succi, Sauro

    Over the last three decades, the Lattice Boltzmann (LB) method has gained a prominent role in the numerical simulation of complex flows across an impressively broad range of scales, from fully-developed turbulence in real-life geometries, to multiphase flows in micro-fluidic devices, all the way down to biopolymer translocation in nanopores and lately, even quark-gluon plasmas. After a brief introduction to the main ideas behind the LB method and its historical developments, we shall present a few selected applications to complex flow problems at various scales of motion. Finally, we shall discuss prospects for extreme-scale LB simulations of outstanding problems in the physics of fluids and its interfaces with material sciences and biology, such as the modelling of fluid turbulence, the optimal design of nanoporous gold catalysts and protein folding/aggregation in crowded environments.
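    The stream-and-collide structure behind the LB method fits in a few lines; the sketch below is a minimal two-velocity (D1Q2) diffusion model with BGK relaxation, for which the diffusivity is D = τ − 1/2 in lattice units. Real LB solvers of the kind discussed in the lecture use richer lattices (D2Q9, D3Q19) and collision operators:

```python
# Minimal D1Q2 lattice Boltzmann sketch for pure diffusion (BGK collision).
import numpy as np

nx, tau, steps = 200, 0.8, 500
w = np.array([0.5, 0.5])                  # weights for velocities +1 and -1
rho = np.zeros(nx); rho[nx // 2] = 1.0    # initial density pulse
f = w[:, None] * rho                      # start from equilibrium

for _ in range(steps):
    rho = f.sum(axis=0)                   # moment: local density
    f += (w[:, None] * rho - f) / tau     # BGK relaxation toward equilibrium
    f[0] = np.roll(f[0], 1)               # stream the +1 population right
    f[1] = np.roll(f[1], -1)              # stream the -1 population left

rho = f.sum(axis=0)
print("mass conserved:", np.isclose(rho.sum(), 1.0), "peak:", rho.max().round(4))
```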

  12. A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer

    Directory of Open Access Journals (Sweden)

    Xiangqing Huang

    2017-10-01

    Full Text Available A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI. Adjusting and matching the acceleration-to-current transfer function of the four accelerometers automatically is one of the basic and necessary technologies for rejecting the common mode accelerations in the development of GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current could be varied proportionally according to the external adjustment needs, and the change in the acceleration-to-current transfer function then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factors range from 33% smaller to 100% larger are verified experimentally by adjusting the different external coefficients. The static noise of the used accelerometer is compared under conditions with and without the injecting current, and the experimental results find no change at the current noise level, which further confirms the validity of the presented method.

  13. A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer.

    Science.gov (United States)

    Huang, Xiangqing; Deng, Zhongguang; Xie, Yafei; Li, Zhu; Fan, Ji; Tu, Liangcheng

    2017-10-27

    A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI). Adjusting and matching the acceleration-to-current transfer function of the four accelerometers automatically is one of the basic and necessary technologies for rejecting the common mode accelerations in the development of GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current could be varied proportionally according to the external adjustment needs, and the change in the acceleration-to-current transfer function then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factors range from 33% smaller to 100% larger are verified experimentally by adjusting the different external coefficients. The static noise of the used accelerometer is compared under conditions with and without the injecting current, and the experimental results find no change at the current noise level, which further confirms the validity of the presented method.

  14. Polarized atomic orbitals for linear scaling methods

    Science.gov (United States)

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

    We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
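    One of the linear-scaling ingredients named above, canonical purification, is compact enough to sketch: McWeeny's iteration P ← 3P² − 2P³ drives an approximate density matrix to idempotency (eigenvalues to 0 or 1). Dense numpy and a toy Hamiltonian are used below, and the initial guess is seeded from a diagonalization purely for convenience; the linear scaling in practice comes from sparsity in a localised (e.g., PAO) basis, which is not shown:

```python
# McWeeny canonical purification of an approximate density matrix.
import numpy as np

rng = np.random.default_rng(7)
H = rng.standard_normal((8, 8)); H = (H + H.T) / 2   # toy Hamiltonian
n_occ = 3

evals = np.linalg.eigvalsh(H)
emin, emax = evals[0], evals[-1]
mu = evals[n_occ - 1:n_occ + 1].mean()               # mid-gap chemical potential
# initial guess with spectrum in [0, 1] and the right occupation structure
P = 0.5 * (mu * np.eye(8) - H) / (emax - emin) + 0.5 * np.eye(8)

for _ in range(40):
    P = 3 * P @ P - 2 * P @ P @ P                    # purification step

print("idempotent:", np.allclose(P @ P, P), "trace:", round(P.trace(), 3))
```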

  15. A Method of Vector Map Multi-scale Representation Considering User Interest on Subdivision Gird

    Directory of Open Access Journals (Sweden)

    YU Tong

    2016-12-01

    Full Text Available Compared with traditional spatial data models and methods, global subdivision grids show a great advantage in the organization and expression of massive spatial data. In view of this, a method of vector map multi-scale representation considering user interest on a subdivision grid is proposed. First, a spatial interest field is built using a large number of POI data points to describe the spatial distribution of user interest in geographic information. Second, spatial features are classified and graded, and their representation scale ranges can be determined. Finally, different levels of subdivision surfaces are divided based on the GeoSOT subdivision theory, and the corresponding relation between subdivision level and scale is established. According to the user interest of the subdivision surfaces, spatial features can be expressed at different degrees of detail. This realizes multi-scale representation of spatial data based on user interest. The experimental results show that this method can not only satisfy general-to-detail and important-to-secondary spatial cognitive demands of users, but also achieve a better multi-scale representation effect.

  16. The Language Teaching Methods Scale: Reliability and Validity Studies

    Science.gov (United States)

    Okmen, Burcu; Kilic, Abdurrahman

    2016-01-01

    The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…

  17. A heuristic method for simulating open-data of arbitrary complexity that can be used to compare and evaluate machine learning methods.

    Science.gov (United States)

    Moore, Jason H; Shestov, Maksim; Schmitt, Peter; Olson, Randal S

    2018-01-01

    A central challenge of developing and evaluating artificial intelligence and machine learning methods for regression and classification is access to data that illuminates the strengths and weaknesses of different methods. Open data plays an important role in this process by making it easy for computational researchers to access real data for this purpose. Genomics has in some examples taken a leading role in the open data effort, starting with DNA microarrays. While real data from experimental and observational studies is necessary for developing computational methods, it is not sufficient. This is because it is not possible to know what the ground truth is in real data. It must be accompanied by simulated data where the balance between signal and noise is known and can be directly evaluated. Unfortunately, there is a lack of methods and software for simulating data with the kind of complexity found in real biological and biomedical systems. We present here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating complex biological and biomedical data. Further, we introduce new methods for developing simulation models that generate data that specifically allows discrimination between different machine learning methods.

  18. The mechanism behind internally generated centennial-to-millennial scale climate variability in an earth system model of intermediate complexity

    NARCIS (Netherlands)

    Friedrich, T.; Timmermann, A.; Menviel, L.; Elison Timm, O.; Mouchet, A.; Roche, D.M.V.A.P.

    2010-01-01

    The mechanism triggering centennial-to-millennial-scale variability of the Atlantic Meridional Overturning Circulation (AMOC) in the earth system model of intermediate complexity LOVECLIM is investigated. It is found that for several climate boundary conditions such as low obliquity values (∼22.1°)

  19. Solution of generalized shifted linear systems with complex symmetric matrices

    International Nuclear Information System (INIS)

    Sogabe, Tomohiro; Hoshi, Takeo; Zhang, Shao-Liang; Fujiwara, Takeo

    2012-01-01

    We develop the shifted COCG method [R. Takayama, T. Hoshi, T. Sogabe, S.-L. Zhang, T. Fujiwara, Linear algebraic calculation of Green’s function for large-scale electronic structure theory, Phys. Rev. B 73 (165108) (2006) 1–9] and the shifted WQMR method [T. Sogabe, T. Hoshi, S.-L. Zhang, T. Fujiwara, On a weighted quasi-residual minimization strategy of the QMR method for solving complex symmetric shifted linear systems, Electron. Trans. Numer. Anal. 31 (2008) 126–140] for solving generalized shifted linear systems with complex symmetric matrices that arise from the electronic structure theory. The complex symmetric Lanczos process with a suitable bilinear form plays an important role in the development of the methods. The numerical examples indicate that the methods are highly attractive when the inner linear systems can efficiently be solved.

  20. Complexation of biological ligands with lanthanides(III) for MRI: structure, thermodynamics and methods

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, C

    2006-07-15

    New cyclic ligands derived from sugars and amino-acids form a scaffold carrying a coordination sphere of oxygen atoms suitable for complexing Ln(III) ions. In spite of their rather low molecular weights, the complexes display surprisingly high relaxivity values, especially at high field. The ACX and BCX ligands, which are acidic derivatives of modified α- and β-cyclodextrins, form mono- and bimetallic complexes with Ln(III). The LnACX and LnBCX complexes show affinities towards Ln(III) similar to those of tri-acidic ligands. In the bimetallic Lu2ACX complex, the cations are deeply embedded in the cavity of the ligand, as shown by the X-ray structure. In aqueous solution, the number of water molecules coordinated to the cation in the LnACX complex depends on the nature and concentration of the alkali ions of the supporting electrolyte, as shown by luminescence and relaxometric measurements. There is only one water molecule coordinated in the LnBCX complex, which enables us to highlight an important second-sphere contribution to relaxivity. The NMR study of the RAFT peptidic ligand shows the complexation of Ln(III), with an affinity similar to those of natural ligands derived from calmodulin. The relaxometric study also shows an important second-sphere contribution to relaxivity. To better understand the intricate molecular factors affecting relaxivity, we developed new relaxometric methods based on probe solutes. These methods allow us to determine the charge of the complex, weak affinity constants, trans-metallation constants, and the electronic relaxation rate. (author)

  1. Kinematical simulation of robotic complex operation for implementing full-scale additive technologies of high-end materials, composites, structures, and buildings

    Science.gov (United States)

    Antsiferov, S. I.; Eltsov, M. Iu; Khakhalev, P. A.

    2018-03-01

    This paper considers a newly designed electronic digital model of a robotic complex for implementing full-scale additive technologies, funded under a Federal Target Program. The electronic and digital model was used to solve the problem of simulating the movement of a robotic complex using the NX CAD/CAM/CAE system. The virtual mechanism was built and the main assemblies, joints, and drives were identified as part of solving the problem. In addition, the maximum allowed printable area size was identified for the robotic complex, and a simulation of printing a rectangular-shaped article was carried out.

  2. A family of conjugate gradient methods for large-scale nonlinear equations.

    Science.gov (United States)

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
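
    As a rough sketch of how derivative-free CG projection schemes of this kind are typically structured (the specific β-formula of the family is not reproduced here; the line-search constants and the residual-based β below are illustrative assumptions):

      import numpy as np

      def cg_projection(F, x0, sigma=1e-4, rho=0.5, tol=1e-8, maxiter=1000):
          # Sketch of a CG projection method for monotone equations F(x) = 0.
          # Storage is low: only the current iterate, residual and direction.
          x = np.asarray(x0, dtype=float)
          Fx = F(x)
          d = -Fx
          for _ in range(maxiter):
              if np.linalg.norm(Fx) < tol:
                  break
              t = 1.0                          # backtracking line search along d
              while True:
                  z = x + t * d
                  Fz = F(z)
                  if -(Fz @ d) >= sigma * t * (d @ d) or t < 1e-12:
                      break
                  t *= rho
              # hyperplane projection step (Solodov-Svaiter type): no Lipschitz
              # constant of F is needed, matching the convergence claim above
              x = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
              Fx_new = F(x)
              beta = (Fx_new @ Fx_new) / (Fx @ Fx)   # illustrative CG parameter
              d = -Fx_new + beta * d
              Fx = Fx_new
          return x

      # usage: solve x + sin(x) = 0 componentwise (a monotone mapping)
      print(cg_projection(lambda v: v + np.sin(v), np.ones(5)))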

  3. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
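
    For orientation, the two operators contrasted here differ only in where the motility coefficient sits; the LaTeX sketch below states the ecological case and a harmonic-mean-type effective coefficient, which is the form commonly obtained in this homogenization literature (the exact large-scale equation follows the paper's derivation, not this sketch):

      % Fickian diffusion organizes motion along density gradients,
      % ecological diffusion along local habitat information:
      \begin{align}
        u_t &= \nabla \cdot \bigl[\mu(\mathbf{x})\, \nabla u\bigr]   && \text{(Fickian)} \\
        u_t &= \nabla^2 \bigl[\mu(\mathbf{x})\, u\bigr]              && \text{(ecological)}
      \end{align}
      % Homogenizing the ecological operator yields, on the large scale, a
      % constant-coefficient equation with a harmonic-type averaged motility:
      \begin{equation}
        \bar{u}_t \approx \bar{\mu}\, \nabla^2 \bar{u},
        \qquad
        \bar{\mu} = \left( \frac{1}{|\Omega|} \int_{\Omega}
                    \frac{d\mathbf{x}}{\mu(\mathbf{x})} \right)^{-1}.
      \end{equation}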

  4. Assessment of exposure to the Penicillium glabrum complex in cork industry using complementing methods.

    Science.gov (United States)

    Viegas, Carla; Sabino, Raquel; Botelho, Daniel; dos Santos, Mateus; Gomes, Anita Quintal

    2015-09-01

    Cork oak is the second most dominant forest species in Portugal and makes this country the world leader in cork export. Occupational exposure to Chrysonilia sitophila and the Penicillium glabrum complex in cork industry is common, and the latter fungus is associated with suberosis. However, as conventional methods seem to underestimate its presence in occupational environments, the aim of our study was to see whether information obtained by polymerase chain reaction (PCR), a molecular-based method, can complement conventional findings and give a better insight into occupational exposure of cork industry workers. We assessed fungal contamination with the P. glabrum complex in three cork manufacturing plants in the outskirts of Lisbon using both conventional and molecular methods. Conventional culturing failed to detect the fungus at six sampling sites in which PCR did detect it. This confirms our assumption that the use of complementing methods can provide information for a more accurate assessment of occupational exposure to the P. glabrum complex in cork industry.

  5. Managing Complexity

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.; Posse, Christian; Malard, Joel M.

    2004-08-01

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today’s most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically-based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This paper explores the state of the art in the use of physical analogs for understanding the behavior of some econophysical systems and for deriving stable and robust control strategies for them. In particular, we review and discuss applications of some analytic methods based on the thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot otherwise be understood.

  6. Measuring the black hole mass in ultraluminous X-ray sources with the X-ray scaling method

    Science.gov (United States)

    Jang, I.; Gliozzi, M.; Satyapal, S.; Titarchuk, L.

    2018-01-01

    In our recent work, we demonstrated that a novel X-ray scaling method, originally introduced for Galactic black holes (BH), could be reliably extended to estimate the mass of supermassive black holes accreting at moderate to high level. Here, we apply this X-ray scaling method to ultraluminous X-ray sources (ULXs) to constrain their MBH. Using 49 ULXs with multiple XMM-Newton observations, we infer that ULXs host both stellar mass BHs and intermediate mass BHs. The majority of the sources of our sample seem to be consistent with the hypothesis of highly accreting massive stellar BHs with MBH ∼ 100 M⊙. Our results are in general agreement with the MBH values obtained with alternative methods, including model-independent variability methods. This suggests that the X-ray scaling method is an actual scale-independent method that can be applied to all BH systems accreting at moderate-high rate.

  7. Advantages of complex scaling only the most diffuse basis functions in simultaneous description of both resonances and bound states

    Czech Academy of Sciences Publication Activity Database

    Landau, A.; Haritan, I.; Kaprálová-Žďánská, Petra Ruth; Moiseyev, N.

    2015-01-01

    Roč. 113, 19-20 (2015), s. 3141-3146 ISSN 0026-8976 R&D Projects: GA MŠk(CZ) LG13029 Institutional support: RVO:68378271 Keywords : resonance * complex scaling * non-Hermitian * ab-initio Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.837, year: 2015

  8. Measurement methods on the complexity of network

    Institute of Scientific and Technical Information of China (English)

    LIN Lin; DING Gang; CHEN Guo-song

    2010-01-01

    Based on the size of a network and the number of paths in it, we propose a model of network topology complexity to measure the topology complexity of the network. Based on analyses of the effects of the amount of equipment, the types of equipment and the processing time of the nodes on the complexity of an equipment-constrained network, a complexity model of equipment-constrained networks is constructed to measure the integrated complexity of such networks. The algorithms for the two models are also developed. An automatic generator of random single-label networks was developed to test the models. The results show that the models can correctly evaluate the topology complexity and the integrated complexity of the networks.

  9. Studies on the complexation of diclofenac sodium with β-cyclodextrin: Influence of method of preparation

    Science.gov (United States)

    Das, Subhraseema; Subuddhi, Usharani

    2015-11-01

    Inclusion complexes of diclofenac sodium (DS) with β-cyclodextrin (β-CD) were prepared in order to improve the solubility, dissolution and oral bioavailability of the poorly water soluble drug. The effect of the method of preparation of the DS/β-CD inclusion complexes (ICs) was investigated. The ICs were prepared by microwave irradiation and also by the conventional methods of kneading, co-precipitation and freeze drying. Though the freeze-drying method is usually referred to as the gold standard among the conventional methods, its long processing time limits its utility. Microwave irradiation accomplishes the process in a very short span of time and is a more environmentally benign method. Better efficacy of the microwaved inclusion product (MW) was observed in terms of dissolution, antimicrobial activity and antibiofilm properties of the drug. Thus microwave irradiation can be utilized as an improved, time-saving and cost-effective method for the generation of DS/β-CD inclusion complexes.

  10. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    Science.gov (United States)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of the estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.

  11. The relationship between the Wigner-Weyl kinetic formalism and the complex geometrical optics method

    OpenAIRE

    Maj, Omar

    2004-01-01

    The relationship between two different asymptotic techniques developed in order to describe the propagation of waves beyond the standard geometrical optics approximation, namely, the Wigner-Weyl kinetic formalism and the complex geometrical optics method, is addressed. More specifically, a solution of the wave kinetic equation, relevant to the Wigner-Weyl formalism, is obtained which yields the same wavefield intensity as the complex geometrical optics method. Such a relationship is also disc...

  12. Local-scaling density-functional method: Intraorbit and interorbit density optimizations

    International Nuclear Information System (INIS)

    Koga, T.; Yamamoto, Y.; Ludena, E.V.

    1991-01-01

    The recently proposed local-scaling density-functional theory provides us with a practical method for the direct variational determination of the electron density function ρ(r). The structure of ''orbits,'' which ensures the one-to-one correspondence between the electron density ρ(r) and the N-electron wave function Ψ({r k }), is studied in detail. For the realization of the local-scaling density-functional calculations, procedures for intraorbit and interorbit optimizations of the electron density function are proposed. These procedures are numerically illustrated for the helium atom in its ground state at the beyond-Hartree-Fock level
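
    The one-to-one correspondence mentioned above rests on the local-scaling transformation; schematically, in LaTeX (a sketch of the standard relation, with f the local scaling map and ρ₀ the density of a generating wave function):

      % A local coordinate transformation r -> f(r) carries a generating
      % density rho_0 into the target density rho via its Jacobian:
      \begin{equation}
        \rho(\mathbf{r}) \;=\;
        \left| \det \frac{\partial \mathbf{f}(\mathbf{r})}{\partial \mathbf{r}} \right|
        \, \rho_{0}\bigl(\mathbf{f}(\mathbf{r})\bigr).
      \end{equation}
      % Intraorbit optimization solves this equation for f within a fixed
      % orbit (fixed generating wave function); interorbit optimization then
      % compares the resulting energies across orbits.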

  13. A family of conjugate gradient methods for large-scale nonlinear equations

    Directory of Open Access Journals (Sweden)

    Dexiang Feng

    2017-09-01

    Full Text Available In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.

  14. Simultaneous analysis of qualitative parameters of solid fuel using complex neutron gamma method

    International Nuclear Information System (INIS)

    Dombrovskij, V.P.; Ajtsev, N.I.; Ryashchikov, V.I.; Frolov, V.K.

    1983-01-01

    A study was made of a complex neutron gamma method for the simultaneous analysis of the carbon content, ash content and humidity of solid fuel from the gamma radiation of inelastic fast-neutron scattering and the radiative capture of thermal neutrons. The metrological characteristics of pulse and stationary neutron gamma methods for determining qualitative solid fuel parameters were analyzed, taking coke breeze as an example. Optimal energy ranges of gamma radiation detection (2-8 MeV) were determined. The advantages of using a pulse neutron generator for the complex analysis of the qualitative parameters of solid fuel in large masses were shown.

  15. Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers

    KAUST Repository

    Sifaou, Houssem

    2016-12-28

    This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS) endowed with M antennas communicates with K single-antenna user equipments (UEs). Particularly, we aim at reducing the complexity of the linear precoder and receiver that maximize the minimum signal-to-interference-plus-noise ratio subject to a given power constraint. To this end, we consider the asymptotic regime in which M and K grow large with a given ratio. Tools from random matrix theory (RMT) are then used to compute, in closed form, accurate approximations for the parameters of the optimal precoder and receiver, when imperfect channel state information (modeled by the generic Gauss-Markov formulation) is available at the BS. The asymptotic analysis allows us to derive the asymptotically optimal linear precoder and receiver that are characterized by a lower complexity (due to the dependence on the large scale components of the channel) and, possibly, by a better resilience to imperfect channel state information. However, the implementation of both is still challenging as it requires fast inversions of large matrices in every coherence period. To overcome this issue, we apply the truncated polynomial expansion (TPE) technique to the precoding and receiving vector of each UE and make use of RMT to determine the optimal weighting coefficients on a per-UE basis that asymptotically solve the max-min SINR problem. Numerical results are used to validate the asymptotic analysis in the finite system regime and to show that the proposed TPE transceivers efficiently mimic the optimal ones, while requiring much lower computational complexity.
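
    The complexity reduction comes from replacing the matrix inverse in the optimal (RZF-like) precoder with a truncated matrix polynomial. A toy NumPy sketch of that structural idea follows; the Neumann-series weights used here are illustrative stand-ins for the RMT-optimized per-UE coefficients derived in the paper, and all sizes are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      M, K, J = 64, 8, 6            # BS antennas, users, TPE truncation order (assumed)
      H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
      lam = 0.1

      # Regularized ZF precoder: needs a K x K inverse every coherence period
      A = H.conj().T @ H + lam * np.eye(K)
      P_rzf = H @ np.linalg.inv(A)

      # TPE precoder: approximate inv(A) by a matrix polynomial, so only
      # matrix products are needed (Neumann-series weights for illustration)
      alpha = 1.0 / np.linalg.norm(A, 2)
      approx, T = np.zeros_like(A), np.eye(K, dtype=complex)
      for _ in range(J):
          approx += T
          T = T @ (np.eye(K) - alpha * A)
      P_tpe = H @ (alpha * approx)

      print(np.linalg.norm(P_tpe - P_rzf) / np.linalg.norm(P_rzf))  # shrinks as J grows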

  16. A mixed-methods study of system-level sustainability of evidence-based practices in 12 large-scale implementation initiatives.

    Science.gov (United States)

    Scudder, Ashley T; Taber-Thomas, Sarah M; Schaffner, Kristen; Pemberton, Joy R; Hunter, Leah; Herschell, Amy D

    2017-12-07

    In recent decades, evidence-based practices (EBPs) have been broadly promoted in community behavioural health systems in the United States of America, yet reported EBP penetration rates remain low. Determining how to systematically sustain EBPs in complex, multi-level service systems has important implications for public health. This study examined factors impacting the sustainability of parent-child interaction therapy (PCIT) in large-scale initiatives in order to identify potential predictors of sustainment. A mixed-methods approach to data collection was used. Qualitative interviews and quantitative surveys examining sustainability processes and outcomes were completed by participants from 12 large-scale initiatives. Sustainment strategies fell into nine categories, including infrastructure, training, marketing, integration and building partnerships. Strategies involving integration of PCIT into existing practices and quality monitoring predicted sustainment, while financing also emerged as a key factor. The reported factors and strategies impacting sustainability varied across initiatives; however, integration into existing practices, monitoring quality and financing appear central to high levels of sustainability of PCIT in community-based systems. More detailed examination of the progression of specific activities related to these strategies may aid in identifying priorities to include in strategic planning of future large-scale initiatives. ClinicalTrials.gov ID NCT02543359; Protocol number PRO12060529.

  17. Multi-scale image segmentation method with visual saliency constraints and its application

    Science.gov (United States)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, so they are a current research hotspot. Obtaining image objects by multi-scale image segmentation is essential for object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, in information extraction, target recognition and other applications, image targets are not equally important: some specific targets or target groups with particular features deserve more attention than the others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, in which each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but differ locally are more likely to be assigned to the same object. In addition, owing to the constraint of the visual saliency model, the control over local-macroscopic characteristics can be adjusted per object during the segmentation process. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works

  18. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
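
    As a concrete illustration of the estimation task being evaluated (not the paper's exact protocol: the surrogate series, gap model and frequency grid below are assumptions), one can generate an irregularly sampled Brown-noise series (β = 2) and read off β as the negative log-log slope of a Lomb-Scargle periodogram:

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(1)
      t = np.cumsum(rng.exponential(1.0, 2000))      # irregular sampling times
      y = np.cumsum(rng.standard_normal(t.size))     # integrated white noise: beta ~ 2
      y -= y.mean()

      f = np.logspace(np.log10(2.0 / t[-1]),         # lowest resolvable frequency
                      np.log10(0.25 / np.median(np.diff(t))), 50)
      p = lombscargle(t, y, 2.0 * np.pi * f)         # expects angular frequencies
      slope = np.polyfit(np.log10(f), np.log10(p), 1)[0]
      print(f"estimated beta ~ {-slope:.2f}")        # ideally near 2.0

    The paper's finding is that this naive Lomb-Scargle slope is systematically biased low under realistic gap distributions, which is what motivates the wavelet-plus-aliasing-filter alternative.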

  19. The Effect of Pressure and Temperature on Separation of Free Gadolinium(III) From Gd-DTPA Complex by Nanofiltration-Complexation Method

    Science.gov (United States)

    Rahayu, Iman; Anggraeni, Anni; Ukun, MSS; Bahti, Husein H.

    2017-05-01

    Nowadays, rare earth elements are used widely in industry and medicine; one example is gadolinium, whose Gd-DTPA complex is used as a contrast agent in magnetic resonance imaging (MRI) diagnostics to increase the visual contrast between normal and diseased tissue. Although the stability of a given complex may be high, the complexation step may not have gone to completion, so free gadolinium(III) may be present alongside the complex compound. Such preparations could therefore be dangerous because of the toxicity of gadolinium(III) in the human body, and it is necessary to separate free gadolinium(III) from the Gd-DTPA complex by nanofiltration-complexation. The method of this study is the complexation of Gd2O3 with the DTPA ligand under reflux, followed by separation of the Gd-DTPA complex from gadolinium(III) with a nanofiltration membrane under varied pressures (2, 3, 4, 5, 6 bar) and temperatures (25, 30, 35, 40 °C), determining the flux and rejection. The results show that the higher the pressure and temperature, the higher the permeate flux and the lower the ion rejection; the rejection of free gadolinium(III) reached 86.26%.
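
    For reference, the two reported quantities follow their standard membrane-science definitions (assumed here, since the abstract does not spell them out):

      % Permeate flux J and observed rejection R of a solute:
      \begin{equation}
        J = \frac{V}{A\,\Delta t},
        \qquad
        R = \left(1 - \frac{C_p}{C_f}\right) \times 100\%,
      \end{equation}
      % with V the permeate volume collected in time \Delta t through membrane
      % area A, and C_p, C_f the permeate and feed concentrations; the quoted
      % 86.26% is R for free Gd(III).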

  20. Pathological mechanisms underlying single large‐scale mitochondrial DNA deletions

    Science.gov (United States)

    Rocha, Mariana C.; Rosa, Hannah S.; Grady, John P.; Blakely, Emma L.; He, Langping; Romain, Nadine; Haller, Ronald G.; Newman, Jane; McFarland, Robert; Ng, Yi Shiau; Gorman, Grainne S.; Schaefer, Andrew M.; Tuppen, Helen A.; Taylor, Robert W.

    2018-01-01

    Objective Single, large‐scale deletions in mitochondrial DNA (mtDNA) are a common cause of mitochondrial disease. This study aimed to investigate the relationship between the genetic defect and molecular phenotype to improve understanding of pathogenic mechanisms associated with single, large‐scale mtDNA deletions in skeletal muscle. Methods We investigated 23 muscle biopsies taken from adult patients (6 males/17 females with a mean age of 43 years) with characterized single, large‐scale mtDNA deletions. Mitochondrial respiratory chain deficiency in skeletal muscle biopsies was quantified by immunoreactivity levels for complex I and complex IV proteins. Single muscle fibers with varying degrees of deficiency were selected from 6 patient biopsies for determination of mtDNA deletion level and copy number by quantitative polymerase chain reaction. Results We have defined 3 “classes” of single, large‐scale deletion with distinct patterns of mitochondrial deficiency, determined by the size and location of the deletion. Single fiber analyses showed that fibers with greater respiratory chain deficiency harbored higher levels of mtDNA deletion with an increase in total mtDNA copy number. For the first time, we have demonstrated that threshold levels for complex I and complex IV deficiency differ based on deletion class. Interpretation Combining genetic and immunofluorescent assays, we conclude that thresholds for complex I and complex IV deficiency are modulated by the deletion of complex‐specific protein‐encoding genes. Furthermore, removal of mt‐tRNA genes impacts specific complexes only at high deletion levels, when complex‐specific protein‐encoding genes remain. These novel findings provide valuable insight into the pathogenic mechanisms associated with these mutations. Ann Neurol 2018;83:115–130 PMID:29283441

  1. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

    Full Text Available A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from their randomly set initial positions toward the original positions, i.e., the true node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the forces exerted by the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been proven to remain almost constant, since the number of calculation steps is almost unrelated to the network scale.
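
    A minimal sketch of one spring-relaxation step in this spirit (NumPy; the spring constant, data layout and stopping rule are assumptions, not the paper's implementation):

      import numpy as np

      def lasm_step(pos, anchors, dists, k=0.2):
          # pos     : (N, 2) current position estimates
          # anchors : dict {index: (x, y)} of nodes with known positions
          # dists   : dict {(i, j): measured distance} for neighbor pairs
          force = np.zeros_like(pos)
          for (i, j), d in dists.items():
              delta = pos[j] - pos[i]
              r = np.linalg.norm(delta) + 1e-12
              # Hooke-like force pulls each pair toward its measured distance
              f = k * (r - d) * delta / r
              force[i] += f
              force[j] -= f
          pos = pos + force
          for i, p in anchors.items():       # anchor nodes stay fixed
              pos[i] = p
          return pos

    Iterating until displacements become small localizes the blind nodes; the per-node cost depends only on the neighbor count, which is why the complexity stays near O(1) per node as the network grows.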

  2. Scaling as an Organizational Method

    DEFF Research Database (Denmark)

    Papazu, Irina Maria Clara Hansen; Nelund, Mette

    2018-01-01

    Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role...... turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice....

  3. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  4. An image overall complexity evaluation method based on LSD line detection

    Science.gov (United States)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

    In the artificial world, both urban traffic roads and engineered buildings contain many linear features. Therefore, research on image complexity based on linear information has become an important direction in the digital image processing field. This paper detects the straight-line information in an image and uses the straight lines as parameter indices to establish a quantitative and accurate mathematical relationship. We use the LSD line detection algorithm, which has a good straight-line detection effect, to detect the straight lines, and divide the detected lines according to an expert consultation strategy. A neural network is then used for weight training to obtain the weight coefficients of the indices. The image complexity is calculated with the complexity calculation model. The experimental results show that the proposed method is effective. The number of straight lines in the image and their degree of dispersion and uniformity affect the complexity of the image.
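
    A rough sketch of the detection stage with OpenCV's LSD implementation (assuming a build that ships it, as some 3.x/4.x releases omitted it for license reasons; "scene.png" is a placeholder, and the composite index at the end is a toy proxy for the paper's neural-network-weighted complexity model):

      import cv2
      import numpy as np

      img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
      lsd = cv2.createLineSegmentDetector()
      lines = lsd.detect(img)[0]             # (N, 1, 4): x1, y1, x2, y2 per segment

      seg = lines.reshape(-1, 4)
      angles = np.arctan2(seg[:, 3] - seg[:, 1], seg[:, 2] - seg[:, 0])

      # Toy complexity index combining line count and angular dispersion;
      # the paper instead trains a neural network to weight such indices.
      complexity = 0.5 * np.log1p(len(seg)) + 0.5 * np.std(angles)
      print(len(seg), complexity)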

  5. Second-order wave diffraction by a circular cylinder using scaled boundary finite element method

    International Nuclear Information System (INIS)

    Song, H; Tao, L

    2010-01-01

    The scaled boundary finite element method (SBFEM) has achieved remarkable success in structural mechanics and fluid mechanics, combining the advantages of both FEM and BEM. Most previous works focus on linear problems, in which the superposition principle is applicable. However, many physical problems in the real world are nonlinear and are described by nonlinear equations, challenging the application of the existing SBFEM model. A popular idea for solving a nonlinear problem is to decompose the nonlinear equation into a number of linear equations and then solve them individually. In this paper, second-order wave diffraction by a circular cylinder is solved by SBFEM. By splitting the forcing term into two parts, the physical problem is described as two second-order boundary-value problems with different asymptotic behaviour at infinity. Expressing the velocity potentials as a series of depth eigenfunctions, both of the 3D boundary-value problems are decomposed into a number of 2D boundary-value sub-problems, which are solved semi-analytically by SBFEM. Only the cylinder boundary is discretised with 1D curved finite elements on the circumference of the cylinder, while the radial differential equation is solved completely analytically. The method can be extended to solve more complex wave-structure interaction problems, resulting in direct engineering applications.

  6. Rapid, high-temperature, field test method for evaluation of geothermal calcium carbonate scale inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    Asperger, R.G.

    1986-09-01

    A new test method is described that allows the rapid field testing of calcium carbonate scale inhibitors at 500°F (260°C). The method evolved from use of a full-flow test loop on a well with a mass flow rate of about 1×10⁶ lbm/hr (126 kg/s). It is a simple, effective way to evaluate the effectiveness of inhibitors under field conditions. Five commercial formulations were chosen for field evaluation on the basis of nonflowing, laboratory screening tests at 500°F (260°C). Four of these formulations from different suppliers controlled calcium carbonate scale deposition as measured by the test method. Two of these could dislodge recently deposited scale that had not age-hardened. Performance-profile diagrams, which were measured for these four effective inhibitors, show the concentration interrelationship between brine calcium and inhibitor concentrations at which the formulations will and will not stop scale formation in the test apparatus. With these diagrams, one formulation was chosen for testing on the full-flow brine line. The composition was tested for 6 weeks and showed a dramatic decrease in the scaling occurring at the flow-control valve. This scaling was about to force a shutdown of a major, long-term flow test being done for reservoir economic evaluations. The inhibitor stopped the scaling, and the test was performed without interruption.

  7. Biocoordination chemistry. pH-metry titration method during study of biometal complexing with bioligands

    International Nuclear Information System (INIS)

    Dobrynina, N.A.

    1992-01-01

    The position of bioinorganic chemistry in the system of natural science, as well as the relations between bioinorganic and biocoordination chemistry, were considered. The content of chemical elements in the geosphere and biosphere was analyzed. Characteristic features of biometal complexing with bioligands were pointed out. By way of example, complex equilibria in solution were studied by the method of pH-metric titration using mathematical simulation. The advantages of using these methods in combination when studying biosystems were emphasized.

  8. A direct algebraic method applied to obtain complex solutions of some nonlinear partial differential equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using some exact solutions of an auxiliary ordinary differential equation, a direct algebraic method is described to construct the exact complex solutions for nonlinear partial differential equations. The method is implemented for the NLS equation, a new Hamiltonian amplitude equation, the coupled Schrodinger-KdV equations and the Hirota-Maccari equations. New exact complex solutions are obtained.

  9. A new sub-equation method applied to obtain exact travelling wave solutions of some complex nonlinear equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using a new coupled Riccati equations, a direct algebraic method, which was applied to obtain exact travelling wave solutions of some complex nonlinear equations, is improved. And the exact travelling wave solutions of the complex KdV equation, Boussinesq equation and Klein-Gordon equation are investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.

  10. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    Science.gov (United States)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems and involves latent variables. One method to analyze a model representing such a system is path analysis. Data on latent variables measured using questionnaires with an attitude-scale model yield scores, which should be transformed into scale data before analysis. The path coefficient, a parameter estimator, is calculated from scale data obtained by the method of successive intervals (MSI) or the summated rating scale (SRS). This research identifies which data transformation method is better: path coefficients with smaller variances are said to be more efficient, so the transformation method that produces scale data yielding path coefficients (parameter estimators) with smaller variances is said to be better. The analysis using real data shows that for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using MSI- and SRS-transformed data are equally efficient. On the other hand, for simulated data with high correlation between items (0.7-0.9), the MSI method is 1.3 times more efficient than the SRS method.

  11. Modeling complex biological flows in multi-scale systems using the APDEC framework

    Science.gov (United States)

    Trebotich, David

    2006-09-01

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  12. Platinum clusters with precise numbers of atoms for preparative-scale catalysis.

    Science.gov (United States)

    Imaoka, Takane; Akanuma, Yuki; Haruta, Naoki; Tsuchiya, Shogo; Ishihara, Kentaro; Okayasu, Takeshi; Chun, Wang-Jae; Takahashi, Masaki; Yamamoto, Kimihisa

    2017-09-25

    Subnanometer noble metal clusters have enormous potential, mainly for catalytic applications. Because a difference of only one atom may cause significant changes in their reactivity, a preparation method with atomic-level precision is essential. Although such precision with enough scalability has been achieved by gas-phase synthesis, large-scale preparation is still at the frontier, hampering practical applications. We now show the atom-precise and fully scalable synthesis of platinum clusters on a milligram scale from tiara-like platinum complexes with various ring numbers (n = 5-13). Low-temperature calcination of the complexes on a carbon support under hydrogen stream affords monodispersed platinum clusters, whose atomicity is equivalent to that of the precursor complex. One of the clusters (Pt10) exhibits high catalytic activity in the hydrogenation of styrene compared to that of the other clusters. This method opens an avenue for the application of these clusters to preparative-scale catalysis. The catalytic activity of a noble metal nanocluster is tied to its atomicity. Here, the authors report an atom-precise, fully scalable synthesis of platinum clusters from molecular ring precursors, and show that a variation of only one atom can dramatically change a cluster's reactivity.

  13. A meta-analysis of crop pest and natural enemy response to landscape complexity.

    Science.gov (United States)

    Chaplin-Kramer, Rebecca; O'Rourke, Megan E; Blitzer, Eleanor J; Kremen, Claire

    2011-09-01

    Many studies in recent years have investigated the relationship between landscape complexity and pests, natural enemies and/or pest control. However, no quantitative synthesis of this literature beyond simple vote-count methods yet exists. We conducted a meta-analysis of 46 landscape-level studies, and found that natural enemies have a strong positive response to landscape complexity. Generalist enemies show consistent positive responses to landscape complexity across all scales measured, while specialist enemies respond more strongly to landscape complexity at smaller scales. Generalist enemy response to natural habitat also tends to occur at larger spatial scales than for specialist enemies, suggesting that land management strategies to enhance natural pest control should differ depending on whether the dominant enemies are generalists or specialists. The positive response of natural enemies does not necessarily translate into pest control, since pest abundances show no significant response to landscape complexity. Very few landscape-scale studies have estimated enemy impact on pest populations, however, limiting our understanding of the effects of landscape on pest control. We suggest focusing future research efforts on measuring population dynamics rather than static counts to better characterise the relationship between landscape complexity and pest control services from natural enemies. © 2011 Blackwell Publishing Ltd/CNRS.

  14. H.264 SVC Complexity Reduction Based on Likelihood Mode Decision.

    Science.gov (United States)

    Balaji, L; Thyagharajan, K K

    2015-01-01

    H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on different electronic gadgets such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, where users demand various scalings of the same content: resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to the previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% loss in PSNR and a 0.17% increase in bit rate compared with the full search method.

  15. A recursive method for calculating the total number of spanning trees and its applications in self-similar small-world scale-free network models

    Science.gov (United States)

    Ma, Fei; Su, Jing; Yao, Bing

    2018-05-01

    The problem of determining and calculating the number of spanning trees of any finite graph (model) is a great challenge and has been studied in various fields, such as discrete applied mathematics, theoretical computer science, physics and chemistry. In this paper, motivated by the fact that many real-life systems and artificial networks are built from simpler and smaller elements (components) through various functions and combinations, we first discuss some helpful network operations, including link operations and merge operations, for designing more realistic and complicated network models. Secondly, we present a method for computing the total number of spanning trees. As an accessible example, we apply this method to spaces of trees and cycles respectively, and our results suggest that it is indeed well suited to such models. To reflect wider practical applications and potential theoretical significance, we study the enumeration method on some existing scale-free network models. On the other hand, we set up a class of new models displaying the scale-free feature, that is to say, following P(k) ∼ k^(−γ), where γ is the degree exponent. Based on detailed calculation, the degree exponent γ of our deterministic scale-free models satisfies γ > 3. In the rest of our discussion, we not only calculate analytically the average path length, which indicates that our models have the small-world property prevalent in many complex systems, but also derive the number of spanning trees by means of the recursive method described in this paper, which shows that our method is convenient for studying these models.
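
    For small graphs, the total count that the paper manipulates analytically can be checked numerically with Kirchhoff's matrix-tree theorem; a minimal NumPy sketch (general-purpose, not the paper's recursive closed forms):

      import numpy as np

      def spanning_tree_count(adj):
          # Kirchhoff's matrix-tree theorem: the number of spanning trees equals
          # any cofactor of the graph Laplacian L = D - A. Floating-point
          # determinants are fine for small graphs.
          A = np.asarray(adj, dtype=float)
          L = np.diag(A.sum(axis=1)) - A
          return round(np.linalg.det(L[1:, 1:]))   # delete one row/column

      # 4-cycle: a cycle C_n has exactly n spanning trees
      C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
      print(spanning_tree_count(C4))   # -> 4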

  16. Vulnerability of complex networks under intentional attack with incomplete information

    International Nuclear Information System (INIS)

    Wu, J; Deng, H Z; Tan, Y J; Zhu, D Z

    2007-01-01

    We study the vulnerability of complex networks under intentional attack with incomplete information, which means that one can only preferentially attack the most important nodes among a local region of a network. The known random failure and the intentional attack are two extreme cases of our study. Using the generating function method, we derive the exact value of the critical removal fraction f c of nodes for the disintegration of networks and the size of the giant component. To validate our model and method, we perform simulations of intentional attack with incomplete information in scale-free networks. We show that the attack information has an important effect on the vulnerability of scale-free networks. We also demonstrate that hiding a fraction of the nodes information is a cost-efficient strategy for enhancing the robustness of complex networks
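
    The two extremes mentioned above (random failure and fully informed attack) can be interpolated in simulation by attacking the highest-degree node within a randomly known subset; a sketch with networkx, where all parameter values are illustrative:

      import random
      import networkx as nx

      def giant_after_attack(n=2000, m=2, frac=0.05, known=0.2, seed=0):
          # Remove frac*n nodes; each removal targets the highest-degree node
          # among a randomly *known* subset (incomplete information).
          random.seed(seed)
          G = nx.barabasi_albert_graph(n, m, seed=seed)
          for _ in range(int(frac * n)):
              k = max(1, int(known * G.number_of_nodes()))
              sample = random.sample(list(G.nodes), k)
              G.remove_node(max(sample, key=G.degree))
          return len(max(nx.connected_components(G), key=len)) / n

      print(giant_after_attack(known=1.00))   # informed attack: giant component shrinks most
      print(giant_after_attack(known=0.01))   # near-random failure: network stays robust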

  17. Functional inference of complex anatomical tendinous networks at a macroscopic scale via sparse experimentation.

    Science.gov (United States)

    Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J

    2012-01-01

    In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error […] functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or assuming model topology and only inferring parameters values. These findings also hold clues to both our evolutionary history and the development of versatile machines.

  18. Studying the properties of photonic quasi-crystals by the scaling convergence method

    International Nuclear Information System (INIS)

    Ho, I-Lin; Ng, Ming-Yaw; Mai, Chien Chin; Ko, Peng Yu; Chang, Yia-Chung

    2013-01-01

    This work introduces the iterative scaling (or inflation) method to systematically approach and analyse the infinite structure of quasi-crystals. The resulting structures preserve local geometric orderings in order to prevent artificial disclination across the boundaries of super-cells, with realistic quasi-crystals emerging at high iteration counts (approaching the infinite super-cell). The method provides an easy way to decorate quasi-crystalline lattices and to produce compact reliefs with a quasi-periodic arrangement for the underlying applications. Numerical examples for the in-plane and off-plane properties of square-triangle quasi-crystals show fast convergence during iterative geometric scaling, revealing characteristics that do not appear in regular crystals. (paper)

  19. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-01-01

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  20. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong

    2018-05-20

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  1. ComplexContact: a web server for inter-protein contact prediction using deep learning.

    Science.gov (United States)

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-05-22

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  2. Comparison of Single and Multi-Scale Method for Leaf and Wood Points Classification from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie

    2018-04-01

    The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterizations of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect the classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and conducted leaf and wood classification. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier for each tree is higher than 30.
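
    A common way to realize the single- versus multi-scale salient features discussed here is through local covariance eigenvalues (linearity, planarity, scattering) pooled over several neighborhood radii; a sketch with SciPy, where the radii and feature choices are assumptions and the classifier on top is omitted:

      import numpy as np
      from scipy.spatial import cKDTree

      def geometric_features(points, radii=(0.05, 0.1, 0.2)):
          # points: (N, 3) array of TLS coordinates; returns (N, 3*len(radii))
          tree = cKDTree(points)
          feats = []
          for r in radii:
              per_scale = []
              for p in points:
                  nb = points[tree.query_ball_point(p, r)]
                  if len(nb) < 3:
                      per_scale.append([0.0, 0.0, 0.0])
                      continue
                  w = np.linalg.eigvalsh(np.cov(nb.T))[::-1]   # l1 >= l2 >= l3
                  l1, l2, l3 = np.maximum(w, 1e-12)
                  per_scale.append([(l1 - l2) / l1,            # linearity (wood-like)
                                    (l2 - l3) / l1,            # planarity
                                    l3 / l1])                  # scattering (leaf-like)
              feats.append(np.asarray(per_scale))
          return np.hstack(feats)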

  3. Methods of Scientific Research: Teaching Scientific Creativity at Scale

    Science.gov (United States)

    Robbins, Dennis; Ford, K. E. Saavik

    2016-01-01

    We present a scaling-up plan for AstroComNYC's Methods of Scientific Research (MSR), a course designed to improve undergraduate students' understanding of science practices. The course format and goals, notably the open-ended, hands-on, investigative nature of the curriculum, are reviewed. We discuss how the course's interactive pedagogical techniques empower students to learn creativity within the context of experimental design and control-of-variables thinking. To date the course has been offered to a limited number of students in specific programs. The goal of broadly implementing MSR is to reach more students, early in their education, with the specific purpose of supporting and improving the retention of students pursuing STEM careers. However, we also discuss challenges in preserving the effectiveness of the teaching and learning experience at scale.

  4. Modern methods of surveyor observations in opencast mining under complex hydrogeological conditions.

    Science.gov (United States)

    Usoltseva, L. A.; Lushpei, V. P.; Mursin, VA

    2017-10-01

    The article considers the possibility of applying modern surveying observation methods to open-pit mining operations to improve industrial safety in the Primorsky Territory, as well as their use in the educational process. Industrial safety in surface mining management depends largely on the assessment methods applied and on the methods used to evaluate the stability of pit walls and dump slopes under complex mining and hydrogeological conditions.

  5. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

    Full Text Available Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  6. Atomic scale modelling of materials of the nuclear fuel cycle

    International Nuclear Information System (INIS)

    Bertolus, M.

    2011-10-01

    This document, written to obtain the French accreditation to supervise research, presents the research I have conducted at CEA Cadarache since 1999 on the atomic scale modelling of non-metallic materials involved in the nuclear fuel cycle: host materials for radionuclides from nuclear waste (apatites), fuel (in particular uranium dioxide) and ceramic cladding materials (silicon carbide). These are complex materials at the frontier of modelling capabilities, since they contain heavy elements (rare earths or actinides), exhibit complex structures or chemical compositions and/or are subjected to irradiation effects: creation of point defects and fission products, amorphization. The objective of my studies is to bring further insight into the physics and chemistry of the elementary processes involved using atomic scale modelling and its coupling with higher scale models and experimental studies. This work is organised in two parts: on the one hand, the development, adaptation and implementation of atomic scale modelling methods and the validation of the approximations used; on the other hand, the application of these methods to the investigation of nuclear materials under irradiation. This document contains a synthesis of the studies performed, orientations for future research, a detailed resume and a list of publications and communications. (author)

  7. What Is a Complex Innovation System?

    Science.gov (United States)

    Katz, J. Sylvan

    2016-01-01

    Innovation systems are sometimes referred to as complex systems, something that is intuitively understood but poorly defined. A complex system dynamically evolves in non-linear ways, giving it unique properties that distinguish it from other systems. In particular, a common signature of complex systems is scale-invariant emergent properties. A scale-invariant property can be identified because it is solely described by a power law function, f(x) = kx^α, where the exponent, α, is a measure of scale-invariance. The focus of this paper is to describe and illustrate that innovation systems have the properties of a complex adaptive system, in particular scale-invariant emergent properties that are indicative of their complex nature and that can be quantified and used to inform public policy. The global research system is an example of an innovation system. Peer-reviewed publications containing knowledge are a characteristic output. Citations or references to these articles are an indirect measure of the impact the knowledge has on the research community. Peer-reviewed papers indexed in Scopus and in the Web of Science were used as data sources to produce measures of size and impact. These measures are used to illustrate how scale-invariant properties can be identified and quantified. It is demonstrated that the distribution of impact has a reasonable likelihood of being scale-invariant, with scaling exponents that tended toward a value of less than 3.0 with the passage of time and decreasing group sizes. Scale-invariant correlations are shown between the evolution of impact and size with time, and between field impact and size at points in time. The recursive or self-similar nature of scale-invariance suggests that any smaller innovation system within the global research system is likely to be complex, with scale-invariant properties too. PMID:27258040

  8. What Is a Complex Innovation System?

    Directory of Open Access Journals (Sweden)

    J Sylvan Katz

    Innovation systems are sometimes referred to as complex systems, something that is intuitively understood but poorly defined. A complex system dynamically evolves in non-linear ways, giving it unique properties that distinguish it from other systems. In particular, a common signature of complex systems is scale-invariant emergent properties. A scale-invariant property can be identified because it is solely described by a power law function, f(x) = kx^α, where the exponent, α, is a measure of scale-invariance. The focus of this paper is to describe and illustrate that innovation systems have the properties of a complex adaptive system, in particular scale-invariant emergent properties that are indicative of their complex nature and that can be quantified and used to inform public policy. The global research system is an example of an innovation system. Peer-reviewed publications containing knowledge are a characteristic output. Citations or references to these articles are an indirect measure of the impact the knowledge has on the research community. Peer-reviewed papers indexed in Scopus and in the Web of Science were used as data sources to produce measures of size and impact. These measures are used to illustrate how scale-invariant properties can be identified and quantified. It is demonstrated that the distribution of impact has a reasonable likelihood of being scale-invariant, with scaling exponents that tended toward a value of less than 3.0 with the passage of time and decreasing group sizes. Scale-invariant correlations are shown between the evolution of impact and size with time, and between field impact and size at points in time. The recursive or self-similar nature of scale-invariance suggests that any smaller innovation system within the global research system is likely to be complex, with scale-invariant properties too.
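
    As a rough illustration of how a scaling exponent such as α can be estimated in practice, the sketch below fits f(x) = kx^α to synthetic data by least squares in log-log space. This is the simplest textbook estimator, not the one used in the paper; maximum-likelihood estimators are usually preferred for heavy-tailed distributions.

    ```python
    import numpy as np

    # Synthetic data following f(x) = k * x**alpha with multiplicative noise.
    rng = np.random.default_rng(1)
    k_true, alpha_true = 3.0, -2.5
    x = np.linspace(1.0, 100.0, 200)
    y = k_true * x**alpha_true * np.exp(rng.normal(0.0, 0.05, x.size))

    # A power law is linear in log-log space: log y = log k + alpha * log x,
    # so a least-squares line fit recovers the scaling exponent.
    alpha_hat, log_k_hat = np.polyfit(np.log(x), np.log(y), 1)
    print(f"alpha ~ {alpha_hat:.2f}, k ~ {np.exp(log_k_hat):.2f}")
    ```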

  9. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Directory of Open Access Journals (Sweden)

    Q. Zhang

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., the Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2), and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among the methods evaluated.
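
    As a hedged illustration of the kind of estimator the paper evaluates, the sketch below computes a Lomb-Scargle periodogram of an irregularly sampled series with SciPy and fits a spectral slope β in log-log space. The paper finds this estimator biased in general; the sketch only shows the mechanics, on synthetic white noise for which β should come out near 0.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    # Irregularly sampled synthetic white noise (expected beta ~ 0).
    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0.0, 1000.0, 400))
    y = rng.normal(size=t.size)
    y -= y.mean()

    # Lomb-Scargle periodogram on a log-spaced angular-frequency grid.
    omega = np.logspace(-2, 0, 60)
    power = lombscargle(t, y, omega, normalize=True)

    # P(omega) ~ omega**(-beta), so -beta is the slope of the log-log fit.
    slope, _ = np.polyfit(np.log(omega), np.log(power), 1)
    print(f"estimated beta ~ {-slope:.2f}")
    ```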

  10. Crustal-Scale Fault Interaction at Rifted Margins and the Formation of Domain-Bounding Breakaway Complexes: Insights From Offshore Norway

    Science.gov (United States)

    Osmundsen, P. T.; Péron-Pinvidic, G.

    2018-03-01

    The large-magnitude faults that control crustal thinning and excision at rifted margins combine into laterally persistent structural boundaries that separate margin domains of contrasting morphology and structure. We term them breakaway complexes. At the Mid-Norwegian margin, we identify five principal breakaway complexes that separate the proximal, necking, distal, and outer margin domains. Downdip and lateral interactions between the faults that constitute breakaway complexes became fundamental to the evolution of the 3-D margin architecture. Different types of fault interaction are observed along and between these faults, but simple models for fault growth will not fully describe their evolution. These structures operate on the crustal scale, cut large thicknesses of heterogeneously layered lithosphere, and facilitate fundamental margin processes such as deformation coupling and exhumation. Variations in large-magnitude fault geometry, erosional footwall incision, and subsequent differential subsidence along the main breakaway complexes likely record the variable efficiency of these processes.

  11. Complexity Index as Applied to Magnetic Resonance: Study Based on a Scale of Relative Units

    International Nuclear Information System (INIS)

    Capelastegui, A.; Villanua, J.

    2003-01-01

    To analyze the merit and repercussions of measuring magnetic resonance (MR) activity in units of radiological activity, and of using the complexity index (CI) as an activity indicator, we studied the MR activity of Osatek, Inc. during an 8-year period (1994-2001). We measured this activity both in the number of MR procedures performed and in units of radiological activity, such units being based on the scale of relative units in the Radiological Services Administration Guidelines published by the Spanish Society of Medical Radiology. We calculated the annual complexity index, this being the quotient between the number of MR procedures performed and the corresponding value in units of radiological activity. We also analyzed factors that can have an impact on the CI: the type of exploration and the strength of the equipment's magnetic field. The CI stayed practically stable during the first 4 years of the study, while it increased during the second 4 years. There is a direct relationship between this increase and the percentage of explorations that we term complex (basically, body and angio MR). The increasing complexity of MR studies in recent years is evident from a consideration of the CI. MR productivity is more realistically expressed in units of radiological activity than in the number of procedures performed by any one center; this also allows for external comparisons. The CI is a useful indicator that can be utilized as an administrative tool. (Author) 13 refs

  12. Fractional Complex Transform and exp-Function Methods for Fractional Differential Equations

    Directory of Open Access Journals (Sweden)

    Ahmet Bekir

    2013-01-01

    The exp-function method is presented for finding the exact solutions of nonlinear fractional equations. The fractional complex transform is used to convert fractional differential equations into ordinary differential equations, on which new solutions are then constructed. The fractional derivatives are described in Jumarie's modified Riemann-Liouville sense. We apply the exp-function method to both nonlinear time- and space-fractional differential equations. As a result, some new exact solutions for them are successfully established.
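
    For reference, the transform itself is compact. The form below is the one commonly quoted in this literature for the fractional complex transform, with free scaling constants p and q; it is supplied as background and is not taken verbatim from the abstract.

    ```latex
    % Fractional complex transform: maps fractional-order variables to
    % integer-order ones, so that an FDE in (x, t) becomes an ODE/PDE in (X, T).
    T = \frac{p\, t^{\alpha}}{\Gamma(1+\alpha)}, \qquad
    X = \frac{q\, x^{\beta}}{\Gamma(1+\beta)}, \qquad 0 < \alpha,\ \beta \le 1 .
    ```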

  13. Identifying Hierarchical and Overlapping Protein Complexes Based on Essential Protein-Protein Interactions and “Seed-Expanding” Method

    Directory of Open Access Journals (Sweden)

    Jun Ren

    2014-01-01

    Much evidence has demonstrated that protein complexes are overlapping and hierarchically organized in PPI networks. Meanwhile, the large size of PPI networks requires complex-detection methods to have low time complexity. Up to now, few methods can quickly identify overlapping and hierarchical protein complexes in a PPI network. In this paper, a novel method, called MCSE, is proposed based on λ-modules and "seed-expanding." First, it chooses seeds as essential PPIs or edges with high edge-clustering values. Then, it identifies protein complexes by expanding each seed to a λ-module. MCSE is suitable for large PPI networks because of its low time complexity. MCSE can identify overlapping protein complexes naturally because a protein can be visited from different seeds. MCSE uses the parameter λ_th to control the range of seed expansion and can detect a hierarchical organization of protein complexes by tuning the value of λ_th. Experimental results for S. cerevisiae show that this hierarchical organization is similar to that of known complexes in the MIPS database. The experimental results also show that MCSE outperforms previous competing algorithms, such as CPM, CMC, Core-Attachment, Dpclus, HC-PIN, MCL, and NFC, in terms of functional enrichment and matching with known protein complexes.
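
    The λ-module criterion is not spelled out in the abstract, so the sketch below illustrates only the generic "seed-expanding" pattern: grow a cluster greedily from a seed edge while an internal-density score stays above a floor (a stand-in for λ_th). It is an assumption-laden toy built on networkx, not the MCSE algorithm.

    ```python
    import networkx as nx

    def expand_seed(graph, seed_edge, density_floor=0.8):
        """Greedy seed expansion: starting from a seed edge, repeatedly add
        the neighbor that keeps the cluster's internal edge density highest,
        and stop once no addition can keep the density above the floor.

        The floor plays the role of a tunable cutoff like lambda_th; plain
        edge density is a stand-in for the paper's lambda-module criterion.
        """
        cluster = set(seed_edge)
        while True:
            frontier = {w for v in cluster for w in graph.neighbors(v)} - cluster
            best, best_density = None, density_floor
            for w in frontier:
                d = nx.density(graph.subgraph(cluster | {w}))
                if d >= best_density:
                    best, best_density = w, d
            if best is None:
                return cluster
            cluster.add(best)

    # Toy usage: a 5-clique with a sparse tail; the expansion recovers the
    # clique and refuses the tail because density would fall below the floor.
    g = nx.complete_graph(5)
    g.add_edges_from([(4, 5), (5, 6)])
    print(sorted(expand_seed(g, (0, 1))))  # -> [0, 1, 2, 3, 4]
    ```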

  14. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.

  15. Seismic detection method for small-scale discontinuities based on dictionary learning and sparse representation

    Science.gov (United States)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei

    2017-02-01

    Studying small-scale geologic discontinuities, such as faults, cavities and fractures, plays a vital role in analyzing the inner conditions of reservoirs, as these geologic structures and elements can provide storage spaces and migration pathways for petroleum. However, these geologic discontinuities have weak energy and are easily contaminated by noise, so effectively extracting them from seismic data is a challenging problem. In this paper, a method for detecting small-scale discontinuities using dictionary learning and sparse representation is proposed that can dig up high-resolution information by sparse coding. A K-SVD (K-means clustering via Singular Value Decomposition) sparse representation model, containing a two-stage iterative procedure of sparse coding and dictionary updating, is suggested for mathematically expressing these seismic small-scale discontinuities. Generally, the orthogonal matching pursuit (OMP) algorithm is employed for sparse coding. However, that method can only update one dictionary atom at a time. In order to improve calculation efficiency, a regularized version of the OMP algorithm is presented for simultaneously updating a number of atoms at a time. Two numerical experiments demonstrate the validity of the developed method for clarifying and enhancing small-scale discontinuities. A field example from carbonate reservoirs further demonstrates its effectiveness in revealing masked tiny faults and small-scale cavities.
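
    As a hedged sketch of the sparse-coding stage, the snippet below recovers a sparse code with scikit-learn's orthogonal matching pursuit against a random unit-norm dictionary, which stands in for a learned K-SVD dictionary. The paper's regularized, multi-atom OMP variant is not reproduced.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    # Random unit-norm dictionary (stand-in for a learned K-SVD dictionary).
    rng = np.random.default_rng(3)
    n_features, n_atoms = 64, 256
    D = rng.normal(size=(n_features, n_atoms))
    D /= np.linalg.norm(D, axis=0)

    # Build a signal that truly is 5-sparse in D, plus small noise.
    support = rng.choice(n_atoms, size=5, replace=False)
    coef_true = np.zeros(n_atoms)
    coef_true[support] = rng.normal(size=5)
    y = D @ coef_true + 0.01 * rng.normal(size=n_features)

    # OMP selects atoms greedily, one at a time, until 5 are active.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)
    omp.fit(D, y)
    print(sorted(np.flatnonzero(omp.coef_)))  # recovered support
    print(sorted(support))                    # true support
    ```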

  16. High-throughput preparation of complex multi-scale patterns from block copolymer/homopolymer blend films

    Science.gov (United States)

    Park, Hyungmin; Kim, Jae-Up; Park, Soojin

    2012-02-01

    A simple, straightforward process for fabricating multi-scale micro- and nanostructured patterns from polystyrene-block-poly(2-vinylpyridine) (PS-b-P2VP)/poly(methyl methacrylate) (PMMA) homopolymer blends in a preferential solvent for PS and PMMA is demonstrated. When the PS-b-P2VP/PMMA blend films were spin-coated onto a silicon wafer, PS-b-P2VP micellar arrays consisting of a PS corona and a P2VP core were formed, while the PMMA macrodomains were isolated, due to the macrophase separation caused by the incompatibility between the block copolymer micelles and the PMMA homopolymer during the spin-coating process. With an increase of PMMA composition, the size of the PMMA macrodomains increased. Moreover, the P2VP blocks interact strongly with the native oxide on the surface of the silicon wafer, so the P2VP wetting layer was formed first during spin-coating, and PS nanoclusters were observed beneath the PMMA macrodomains. In contrast, when the silicon surface was modified with a PS brush layer, the PS nanoclusters underlying the PMMA domains were not formed. The multi-scale patterns prepared from the copolymer micelle/homopolymer blend films are used as templates for the fabrication of gold nanoparticle arrays by incorporating a gold precursor into the P2VP chains. The combination of nanostructures prepared from block copolymer micellar arrays and macrostructures induced by incompatibility between the copolymer and the homopolymer leads to the formation of complex, multi-scale surface patterns by a simple casting process.

  17. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples
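
    Real Ge/Si detector response functions are more elaborate than pure Gaussians, but the basic deconvolution move, fitting a sum of peak shapes to an overlapping spectral region, can be sketched as follows on synthetic data (illustrative peak model and parameters only).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
        """Sum of two Gaussian peaks over a background-free region."""
        return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
                + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

    # Synthetic overlapping peaks (a stand-in for a complex spectral region).
    rng = np.random.default_rng(4)
    x = np.linspace(0, 100, 400)
    y = two_gaussians(x, 120, 45, 4.0, 80, 55, 5.0) + rng.normal(0, 2, x.size)

    p0 = [100, 44, 3, 100, 56, 3]                  # rough initial guesses
    popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
    print(np.round(popt, 1))  # recovered amplitudes, centroids, widths
    ```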

  18. Method for Hot Real-Time Analysis of Pyrolysis Vapors at Pilot Scale

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    Pyrolysis oils contain more than 400 compounds, up to 60% of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated as additional condensation reactions occur during quenching and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors at the pilot scale, many challenges must be overcome.

  19. Purohit's spectrophotometric method for determination of stability constants of complexes using Job's curves

    International Nuclear Information System (INIS)

    Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.

    1999-01-01

    A spectrophotometric method for the determination of stability constants making use of Job's curves has been developed. Using this method, the stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For the sake of comparison, the stability constants were also determined using Harvey and Manning's method; the values obtained by the two methods compare well. The new method has been named Purohit's method. (author)

  20. Complexity and Pilot Workload Metrics for the Evaluation of Adaptive Flight Controls on a Full Scale Piloted Aircraft

    Science.gov (United States)

    Hanson, Curt; Schaefer, Jacob; Burken, John J.; Larson, David; Johnson, Marcus

    2014-01-01

    Flight research has shown the effectiveness of adaptive flight controls for improving aircraft safety and performance in the presence of uncertainties. The National Aeronautics and Space Administration's (NASA) Integrated Resilient Aircraft Control (IRAC) project designed and conducted a series of flight experiments to study the impact of variations in adaptive controller design complexity on performance and handling qualities. A novel complexity metric was devised to compare the degrees of simplicity achieved in three variations of a model reference adaptive controller (MRAC) for NASA's F-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Full-Scale Advanced Systems Testbed (Gen-2A) aircraft. The complexity measures of these controllers are also compared to that of an earlier MRAC design for NASA's Intelligent Flight Control System (IFCS) project, flown on a highly modified F-15 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois). Pilot comments during the IRAC research flights pointed to the importance of workload in handling-qualities ratings for failure and damage scenarios. Modifications to existing pilot aggressiveness and duty cycle metrics are presented and applied to the IRAC controllers. Finally, while adaptive controllers may alleviate the effects of failures or damage on an aircraft's handling qualities, they also have the potential to introduce annoying changes to the flight dynamics or to the operation of aircraft systems. A nuisance rating scale is presented for the categorization of nuisance side-effects of adaptive controllers.

  1. Equivalence of the generalized and complex Kohn variational methods

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, J N; Armour, E A G [School of Mathematical Sciences, University Park, Nottingham NG7 2RD (United Kingdom); Plummer, M, E-mail: pmxjnc@googlemail.co [STFC Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)

    2010-04-30

    For Kohn variational calculations on low-energy (e⁺ - H₂) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.

  2. Equivalence of the generalized and complex Kohn variational methods

    International Nuclear Information System (INIS)

    Cooper, J N; Armour, E A G; Plummer, M

    2010-01-01

    For Kohn variational calculations on low-energy (e⁺ - H₂) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.

  3. Learning with Generalization Capability by Kernel Methods of Bounded Complexity

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2005-01-01

    Vol. 21, No. 3 (2005), pp. 350-367, ISSN 0885-064X. R&D Projects: GA AV ČR 1ET100300419. Institutional research plan: CEZ:AV0Z10300504. Keywords: supervised learning * generalization * model complexity * kernel methods * minimization of regularized empirical errors * upper bounds on rates of approximate optimization. Subject RIV: BA - General Mathematics. Impact factor: 1.186, year: 2005

  4. Stress Intensity Factor for Interface Cracks in Bimaterials Using Complex Variable Meshless Manifold Method

    Directory of Open Access Journals (Sweden)

    Hongfen Gao

    2014-01-01

    This paper describes the application of the complex variable meshless manifold method (CVMMM) to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation within the framework of the complex variable moving least-squares (CVMLS) approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.

  5. A method of reconstructing complex stratigraphic surfaces with multitype fault constraints

    Science.gov (United States)

    Deng, Shi-Wu; Jia, Yu; Yao, Xing-Miao; Liu, Zhi-Ning

    2017-06-01

    The construction of complex stratigraphic surfaces is widely employed in many fields, such as petroleum exploration, geological modeling, and geological structure analysis. It also serves as an important foundation for data visualization and visual analysis in these fields. Existing surface construction methods have several deficiencies and face various difficulties, such as the presence of multitype faults and the roughness of the resulting surfaces. In this paper, a surface modeling method that uses geometric partial differential equations (PDEs) is introduced for the construction of stratigraphic surfaces. It effectively solves the problem of surface roughness caused by the irregularity of stratigraphic data distribution. To cope with the presence of multitype complex faults, a two-way projection algorithm between three-dimensional space and a two-dimensional plane is proposed. Using this algorithm, a unified method based on geometric PDEs is developed for dealing with multitype faults. Moreover, the corresponding geometric PDE is derived, and an algorithm based on an evolutionary solution is developed. Applying the algorithm to the construction of spatial surfaces with real data verifies its computational efficiency and its ability to handle irregular data distributions; in particular, it can reconstruct faulted surfaces, especially those with overthrust faults.

  6. Complexation of biological ligands with lanthanides(III) for MRI: Structure, thermodynamic and methods; Complexation des cations lanthanides trivalents par des ligands d'origine biologique pour l'IRM: Structure, thermodynamique et methodes

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, C

    2006-07-15

    New cyclic ligands derived from sugars and amino acids form a scaffold carrying a coordination sphere of oxygen atoms suitable for complexing Ln(III) ions. In spite of their rather low molecular weights, the complexes display surprisingly high relaxivity values, especially at high field. The ACX and BCX ligands, acidic derivatives of modified α- and β-cyclodextrins, form mono- and bimetallic complexes with Ln(III). The LnACX and LnBCX complexes show affinities towards Ln(III) similar to those of triacidic ligands. In the bimetallic Lu2ACX complex, the cations are deeply embedded in the cavity of the ligand, as shown by the X-ray structure. In aqueous solution, the number of water molecules coordinated to the cation in the LnACX complex depends on the nature and concentration of the alkali ions of the supporting electrolyte, as shown by luminescence and relaxometric measurements. There is only one water molecule coordinated in the LnBCX complex, which enabled us to highlight an important second-sphere contribution to relaxivity. The NMR study of the RAFT peptidic ligand shows the complexation of Ln(III), with an affinity similar to those of natural ligands derived from calmodulin. The relaxometric study also shows an important second-sphere contribution to relaxivity. To better understand the intricate molecular factors affecting relaxivity, we developed new relaxometric methods based on probe solutes. These methods allow us to determine the charge of the complex, weak affinity constants, transmetallation constants, and the electronic relaxation rate. (author)

  7. Systems approach to monitoring and evaluation guides scale up of the Standard Days Method of family planning in Rwanda

    Science.gov (United States)

    Igras, Susan; Sinai, Irit; Mukabatsinda, Marie; Ngabo, Fidele; Jennings, Victoria; Lundgren, Rebecka

    2014-01-01

    There is no guarantee that a successful pilot program introducing a reproductive health innovation can also be expanded successfully to the national or regional level, because the scaling-up process is complex and multilayered. This article describes how a successful pilot program to integrate the Standard Days Method (SDM) of family planning into existing Ministry of Health services was scaled up nationally in Rwanda. Much of the success of the scale-up effort was due to systematic use of monitoring and evaluation (M&E) data from several sources to make midcourse corrections. Four lessons learned illustrate this crucially important approach. First, ongoing M&E data showed that provider training protocols and client materials that worked in the pilot phase did not work at scale; therefore, we simplified these materials to support integration into the national program. Second, triangulation of ongoing monitoring data with national health facility and population-based surveys revealed serious problems in supply chain mechanisms that affected SDM (and the accompanying CycleBeads client tool) availability and use; new procedures for ordering supplies and monitoring stockouts were instituted at the facility level. Third, supervision reports and special studies revealed that providers were imposing unnecessary medical barriers to SDM use; refresher training and revised supervision protocols improved provider practices. Finally, informal environmental scans, stakeholder interviews, and key events timelines identified shifting political and health policy environments that influenced scale-up outcomes; ongoing advocacy efforts are addressing these issues. The SDM scale-up experience in Rwanda confirms the importance of monitoring and evaluating programmatic efforts continuously, using a variety of data sources, to improve program outcomes. PMID:25276581

  8. H.264 SVC Complexity Reduction Based on Likelihood Mode Decision

    Directory of Open Access Journals (Sweden)

    L. Balaji

    2015-01-01

    H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on different electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, on which users demand various scalings of the same content: resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling increases encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well when compared with previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% loss in PSNR and a 0.17% increase in bit rate compared with the full search method.

  9. Complexity of the AdS soliton

    Science.gov (United States)

    Reynolds, Alan P.; Ross, Simon F.

    2018-05-01

    We consider the holographic complexity conjectures in the context of the AdS soliton, which is the holographic dual of the ground state of a field theory on a torus with antiperiodic boundary conditions for fermions on one cycle. The complexity is a non-trivial function of the size of the circle with antiperiodic boundary conditions, which sets an IR scale in the dual geometry. We find qualitative differences between the calculations of complexity from spatial volume and action (CV and CA). In the CV calculation, the complexity for antiperiodic boundary conditions is smaller than for periodic, and decreases monotonically with increasing IR scale. In the CA calculation, the complexity for antiperiodic boundary conditions is larger than for periodic, and initially increases with increasing IR scale, eventually decreasing to zero as the IR scale becomes of order the UV cutoff. We compare these results to a simple calculation for free fermions on a lattice, where we find the complexity for antiperiodic boundary conditions is larger than for periodic.

  10. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    Science.gov (United States)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

    This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. These parametric methods are then compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedging efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of the sequence distributions. In addition, if minimum LPM is selected as the hedge target, the hedging period, degree of risk aversion, and target return can each affect the multi-scale hedge ratios and hedging efficiency.
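
    A minimal numpy sketch of the minimum-LPM idea on synthetic returns is given below; the wavelet decomposition into time scales and the parametric distribution fits used in the paper are omitted, and all numbers are assumed for illustration.

    ```python
    import numpy as np

    def lower_partial_moment(returns, target=0.0, order=2):
        """Sample lower partial moment: E[max(target - r, 0) ** order].

        Only shortfalls below the target return contribute, which is what
        distinguishes LPM from symmetric risk measures such as variance.
        """
        shortfall = np.maximum(target - returns, 0.0)
        return float(np.mean(shortfall ** order))

    # Toy usage: scan hedge ratios h and pick the one minimizing the LPM
    # of the hedged portfolio (spot - h * futures).
    rng = np.random.default_rng(5)
    futures = rng.normal(0.0, 0.012, 2000)
    spot = 0.9 * futures + rng.normal(0.0, 0.004, 2000)
    ratios = np.linspace(0.0, 1.5, 151)
    lpms = [lower_partial_moment(spot - h * futures) for h in ratios]
    print(f"min-LPM hedge ratio ~ {ratios[int(np.argmin(lpms))]:.2f}")  # ~0.9
    ```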

  11. Classroom-oriented research from a complex systems perspective

    Directory of Open Access Journals (Sweden)

    Diane Larsen-Freeman

    2016-09-01

    Bringing a complex systems perspective to bear on classroom-oriented research challenges researchers to think differently, seeing the classroom ecology as one dynamic system nested in a hierarchy of such systems at different levels of scale, all of which are spatially and temporally situated. This article begins with an introduction to complex dynamic systems theory, in which challenges to traditional ways of conducting classroom research are interwoven. It concludes with suggestions for research methods that are more consistent with the theory. Research does not become easier when approached from a complex systems perspective, but it has the virtue of reflecting the way the world works.

  12. Argument Complexity: Teaching Undergraduates to Make Better Arguments

    Science.gov (United States)

    Kelly, Matthew A.; West, Robert L.

    2017-01-01

    The task of turning undergrads into academics requires teaching them to reason about the world in a more complex way. We present the Argument Complexity Scale, a tool for analysing the complexity of argumentation, based on the Integrative Complexity and Conceptual Complexity Scales from, respectively, political psychology and personality theory.…

  13. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    Science.gov (United States)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited by heavy computational loads. This study investigates a variety of means to reduce the computational load and increase the simulation area. The first is applying an LBM that treats the two phases as having the same density, while keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the capillary number that maintains flow patterns similar to those of the precise simulation; this is attempted because the computational load is inversely proportional to the capillary number. The results show that the capillary number can be increased to 3.0 × 10⁻³, whereas actual operation corresponds to Ca = 10⁻⁵-10⁻⁸. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
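
    The scaling argument is easy to make concrete. The sketch below computes the capillary number Ca = μu/σ for two liquid velocities; the material constants are assumed textbook values for water at room temperature, not numbers taken from the paper.

    ```python
    # Capillary number Ca = mu * u / sigma, the viscous-to-interfacial force
    # ratio that controls two-phase flow patterns in the GDL.
    mu = 1.0e-3      # dynamic viscosity of water, Pa*s (assumed)
    sigma = 0.064    # water/air surface tension, N/m (assumed)

    def capillary_number(u):
        """Ca for a characteristic liquid velocity u in m/s."""
        return mu * u / sigma

    for u in (6.4e-4, 0.19):  # slow physical seepage vs. an accelerated run
        print(f"u = {u:g} m/s -> Ca = {capillary_number(u):.1e}")
    # The accelerated case lands near the paper's usable limit of 3.0e-3,
    # orders of magnitude above the physical range of 1e-5 to 1e-8.
    ```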

  14. Models, methods and software tools for building complex adaptive traffic systems

    International Nuclear Information System (INIS)

    Alyushin, S.A.

    2011-01-01

    The paper studies the modern methods and tools to simulate the behavior of complex adaptive systems (CAS), the existing systems of traffic modeling in simulators and their characteristics; proposes requirements for assessing the suitability of the system to simulate the CAS behavior in simulators. The author has developed a model of adaptive agent representation and its functioning environment to meet certain requirements set above, and has presented methods of agents' interactions and methods of conflict resolution in simulated traffic situations. A simulation system realizing computer modeling for simulating the behavior of CAS in traffic situations has been created [ru

  15. Variable scaling method and Stark effect in hydrogen atom

    International Nuclear Information System (INIS)

    Choudhury, R.K.R.; Ghosh, B.

    1983-09-01

    By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique, after Armstrong, and then the variable scaling method has been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)

  16. Investigation of electron-atom/molecule scattering resonances: Two complex multiconfigurational self-consistent field approaches

    Energy Technology Data Exchange (ETDEWEB)

    Samanta, Kousik [Department of Chemistry, Rice University, Houston, TX 77005 (United States); Yeager, Danny L. [Department of Chemistry, Texas A and M University, College Station, TX 77843 (United States)

    2015-01-22

    Resonances are temporarily bound states which lie in the continuum part of the Hamiltonian. If the electronic coordinates of the Hamiltonian are scaled ("dilated") by a complex parameter, η = αe^(iθ) (α, θ real), then its complex eigenvalues represent the scattering states (resonant and non-resonant), while the eigenvalues corresponding to the bound states and to the ionization and excitation thresholds remain real and unmodified. This makes the study of these transient species amenable to bound-state methods. We developed a quadratically convergent multiconfigurational self-consistent field (MCSCF) method, a well-established bound-state technique, combined with a dilated Hamiltonian to investigate resonances. This is made possible by the adoption of a second-quantization algebra suitable for a set of "complex conjugate biorthonormal" spin orbitals and a modified step-length constraining algorithm to control the walk on the complex energy hypersurface while searching for the stationary point using a multidimensional Newton-Raphson scheme. We present our computational results for the ²P Be⁻ shape resonances using two different computationally efficient methods that utilize complex-scaled MCSCF (i.e., CMCSCF). These two methods are to straightforwardly use CMCSCF energy differences and to obtain energy differences using an approximation to the complex multiconfigurational electron propagator. It is found that, differing from previous computational studies by others, there are actually two ²P Be⁻ shape resonances very close in energy. In addition, N₂ resonances are examined using one of these methods.
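
    Since complex scaling is the recurring theme of these records, a self-contained numerical toy may help. The sketch below dilates a one-dimensional model Hamiltonian on a finite-difference grid and diagonalizes it; a resonance appears as a complex eigenvalue that stays roughly stationary as θ varies while the rotated continuum moves. The model potential and all parameters are assumptions chosen for illustration, not the CMCSCF machinery of the paper.

    ```python
    import numpy as np

    def complex_scaled_spectrum(theta, n=600, L=12.0):
        """Eigenvalues of H(theta) = -exp(-2i*theta)/2 d2/dx2 + V(x e^{i theta})
        for a 1D model barrier potential, discretized by finite differences."""
        x = np.linspace(-L, L, n)
        h = x[1] - x[0]
        z = x * np.exp(1j * theta)
        V = (0.5 * z**2 - 0.8) * np.exp(-0.1 * z**2)   # model potential (assumed)
        D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
              + np.diag(np.ones(n - 1), -1)) / h**2
        H = -0.5 * np.exp(-2j * theta) * D2 + np.diag(V)
        return np.linalg.eigvals(H)

    for theta in (0.2, 0.3):
        ev = complex_scaled_spectrum(theta)
        # Bound states stay on the real axis and the continuum rotates by
        # -2*theta; what remains near the real axis at positive energy
        # approximates the resonance E - i*Gamma/2 for both theta values.
        res = ev[(ev.real > 0.35) & (ev.real < 1.5)
                 & (ev.imag > -0.1) & (ev.imag < 0.05)]
        print(theta, np.round(res, 4))
    ```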

  17. Glycosaminoglycan-resistant and pH-sensitive lipid-coated DNA complexes produced by detergent removal method.

    Science.gov (United States)

    Lehtinen, Julia; Hyvönen, Zanna; Subrizi, Astrid; Bunjes, Heike; Urtti, Arto

    2008-10-21

    Cationic polymers are efficient gene delivery vectors under in vitro conditions, but these carriers can fail in vivo due to interactions with extracellular polyanions, i.e., glycosaminoglycans (GAGs). The aim of this study was to develop a stable gene delivery vector that is activated at acidic endosomal pH. Cationic DNA/PEI complexes were coated with 1,2-dioleylphosphatidylethanolamine (DOPE) and cholesteryl hemisuccinate (CHEMS) (3:2 mol/mol) using two coating methods: detergent removal and mixing with liposomes prepared by ethanol injection. Only detergent removal produced lipid-coated DNA complexes that were stable against GAGs but membrane-active at low pH towards endosome-mimicking liposomes. Relative to the low cellular uptake of the coated complexes, their transfection efficacy was high. PEGylation of the coated complexes increased their cellular uptake but reduced their pH-sensitivity. Detergent removal was thus the superior method for the production of stable, but acid-activatable, lipid-coated DNA complexes.

  18. Epidemic dynamics and endemic states in complex networks

    Science.gov (United States)

    Pastor-Satorras, Romualdo; Vespignani, Alessandro

    2001-06-01

    We study, by analytical methods and large-scale simulations, a dynamical model for the spreading of epidemics in complex networks. In networks with exponentially bounded connectivity we recover the usual epidemic behavior, with a threshold defining a critical point below which the infection prevalence is null. On the contrary, on a wide range of scale-free networks we observe the absence of an epidemic threshold and of its associated critical behavior. This implies that scale-free networks are prone to the spreading and persistence of infections, whatever spreading rate the epidemic agents might possess. These results can help in understanding computer virus epidemics and other spreading phenomena on communication and social networks.

  19. Epidemic dynamics and endemic states in complex networks

    International Nuclear Information System (INIS)

    Pastor-Satorras, Romualdo; Vespignani, Alessandro

    2001-01-01

    We study, by analytical methods and large-scale simulations, a dynamical model for the spreading of epidemics in complex networks. In networks with exponentially bounded connectivity we recover the usual epidemic behavior, with a threshold defining a critical point below which the infection prevalence is null. On the contrary, on a wide range of scale-free networks we observe the absence of an epidemic threshold and of its associated critical behavior. This implies that scale-free networks are prone to the spreading and persistence of infections, whatever spreading rate the epidemic agents might possess. These results can help in understanding computer virus epidemics and other spreading phenomena on communication and social networks.
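
    A small simulation makes the threshold statement tangible. The sketch below runs a discrete-time SIS process on a Barabási-Albert scale-free graph with networkx; the rates, sizes, and discrete-time approximation are all illustrative assumptions, not the authors' analytical model.

    ```python
    import random
    import networkx as nx

    def sis_prevalence(g, lam, mu=0.5, steps=300, seed_frac=0.05):
        """Discrete-time SIS dynamics: each infected node infects each
        susceptible neighbor with prob. lam per step and recovers with
        prob. mu. Returns the surviving infected fraction (prevalence)."""
        rng = random.Random(0)
        infected = set(rng.sample(sorted(g), max(1, int(seed_frac * len(g)))))
        for _ in range(steps):
            newly = {w for v in infected for w in g.neighbors(v)
                     if w not in infected and rng.random() < lam}
            recovered = {v for v in infected if rng.random() < mu}
            infected = (infected | newly) - recovered
            if not infected:
                break
        return len(infected) / len(g)

    # On a scale-free graph the prevalence decays smoothly as lam shrinks,
    # with no sharp threshold visible at these (finite) sizes.
    g = nx.barabasi_albert_graph(3000, 3, seed=1)
    for lam in (0.02, 0.05, 0.10, 0.20):
        print(f"lam = {lam:.2f} -> prevalence ~ {sis_prevalence(g, lam):.3f}")
    ```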

  20. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    Science.gov (United States)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
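
    The key property, one extra (adjoint) solve yielding sensitivities with respect to every design parameter at once, can be shown on a toy discrete problem. The sketch below is a generic adjoint derivation for a small linear system, not NASA's CFD implementation; the system, objective, and parameters are invented for illustration.

    ```python
    import numpy as np

    # Toy problem: A(p) u = b with objective J(u) = c @ u. One adjoint solve
    # gives dJ/dp for *all* parameters, which is what makes adjoint methods
    # attractive when there are millions of design variables.
    n = 4
    rng = np.random.default_rng(6)
    b, c = rng.normal(size=n), rng.normal(size=n)

    def A(p):
        return np.eye(n) * (2.0 + p[0]) + np.diag(np.full(n - 1, p[1]), 1)

    p = np.array([0.3, -0.1])
    u = np.linalg.solve(A(p), b)

    # Adjoint equation: A(p)^T lam = dJ/du = c
    lam = np.linalg.solve(A(p).T, c)

    # dJ/dp_i = -lam^T (dA/dp_i) u   (b and c do not depend on p here)
    dA0 = np.eye(n)                   # dA/dp0
    dA1 = np.diag(np.ones(n - 1), 1)  # dA/dp1
    grad_adjoint = np.array([-lam @ dA0 @ u, -lam @ dA1 @ u])

    # Sanity check against central finite differences.
    def J(q):
        return c @ np.linalg.solve(A(q), b)
    eps = 1e-6
    grad_fd = np.array([(J(p + eps * e) - J(p - eps * e)) / (2 * eps)
                        for e in np.eye(2)])
    print(np.allclose(grad_adjoint, grad_fd, atol=1e-6))  # True
    ```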

  1. Detection of circulating immune complexes in hepatitis by means of a new method employing ¹²⁵I-antibody. Circulating immune complexes in hepatitis

    Energy Technology Data Exchange (ETDEWEB)

    Fresco, G F [Genoa Univ. (Italy). Dept. of Internal Medicine

    1978-06-01

    A new RIA method for the detection of circulating immune complexes and antibodies arising in the course of viral hepatitis is described. It involves the use of ¹²⁵I-labeled antibodies and foresees the possibility of employing immune complex-coated polypropylene tubes. This simple and sensitive procedure takes into account the possibility that the immune complexes may be adsorbed onto the surface of the polypropylene tubes during the period in which the serum remains there.

  2. Scaling of counter-current imbibition recovery curves using artificial neural networks

    Science.gov (United States)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    Scaling imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters, such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tension, have an effect on the imbibition process. Studies of scaling imbibition curves under different assumptions have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to affect the imbibition process and were considered as the inputs for training the network. Using the 'Bayesian regularization' training algorithm, the network was trained and tested. The training and testing phases showed superior results in comparison with other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.

  3. Fine-Scale Bacterial Beta Diversity within a Complex Ecosystem (Zodletone Spring, OK, USA): The Role of the Rare Biosphere

    Science.gov (United States)

    Youssef, Noha H.; Couger, M. B.; Elshahed, Mostafa S.

    2010-01-01

    Background: The adaptation of pyrosequencing technologies for use in culture-independent diversity surveys has allowed for deeper sampling of ecosystems of interest. One extremely well-suited area of interest for pyrosequencing-based diversity surveys, which has received surprisingly little attention so far, is examining fine-scale (e.g., micrometer-to-millimeter) beta diversity in complex microbial ecosystems. Methodology/Principal Findings: We examined the patterns of fine-scale beta diversity in four adjacent sediment samples (1 mm apart) from the source of an anaerobic, sulfide- and sulfur-rich spring (Zodletone Spring) in southwestern Oklahoma, USA. Using pyrosequencing, a total of 292,130 16S rRNA gene sequences were obtained. The beta diversity patterns within the four datasets were examined using various qualitative and quantitative similarity indices. Low levels of beta diversity (high similarity indices) were observed between the four samples at the phylum level. However, at a putative species (OTU0.03) level, higher levels of beta diversity (lower similarity indices) were observed. Further examination of beta diversity patterns within dominant and rare members of the community indicated that, at the putative species level, beta diversity is much higher within rare members of the community. Finally, sub-classification of the rare members of the Zodletone Spring community based on patterns of novelty and uniqueness, and further examination of the fine-scale beta diversity of each of these subgroups, indicated that members of the community that are unique but non-novel showed the highest beta diversity within these subgroups of the rare biosphere. Conclusions/Significance: The results demonstrate the occurrence of high inter-sample diversity within seemingly identical samples from a complex habitat. We reason that such unexpected diversity should be taken into consideration when exploring gamma diversity of various ecosystems, as well as planning for sequencing-intensive metagenomic

  4. Subspace Barzilai-Borwein Gradient Method for Large-Scale Bound Constrained Optimization

    International Nuclear Information System (INIS)

    Xiao Yunhai; Hu Qingjie

    2008-01-01

    An active-set subspace Barzilai-Borwein gradient algorithm for large-scale bound-constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined, while the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising and competitive with the well-known SPG method on a subset of bound-constrained problems from the CUTEr collection.
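
    For orientation, the core Barzilai-Borwein iteration with a simple projection onto the bounds can be sketched in a few lines. This omits the paper's active-set identification and nonmonotone line search, so it is a generic toy rather than the proposed algorithm.

    ```python
    import numpy as np

    def projected_bb(grad, x0, lo, hi, iters=100):
        """Projected Barzilai-Borwein gradient method for bound constraints:
        x <- P_[lo,hi](x - alpha_k * grad(x)), with the BB step
        alpha_k = (s @ s) / (s @ y), where s = x_k - x_{k-1} and
        y = g_k - g_{k-1}."""
        x = np.clip(x0, lo, hi)
        g = grad(x)
        alpha = 1.0
        for _ in range(iters):
            x_new = np.clip(x - alpha * g, lo, hi)
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            sy = s @ y
            alpha = (s @ s) / sy if sy > 1e-12 else 1.0
            x, g = x_new, g_new
        return x

    # Toy usage: min 0.5*x@Q@x - b@x subject to 0 <= x <= 1. Since Q is
    # diagonal, the solution is the box projection of Q^{-1} b.
    Q = np.diag([1.0, 4.0, 9.0])
    b = np.array([2.0, -1.0, 5.0])
    x = projected_bb(lambda x: Q @ x - b, np.zeros(3), 0.0, 1.0)
    print(np.round(x, 3))  # -> [1. 0. 0.556]
    ```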

  5. Non-Abelian Kubo formula and the multiple time-scale method

    International Nuclear Information System (INIS)

    Zhang, X.; Li, J.

    1996-01-01

    The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern endash Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc

  6. Computational study of formamide-water complexes using the SAPT and AIM methods

    International Nuclear Information System (INIS)

    Parreira, Renato L.T.; Valdes, Haydee; Galembeck, Sergio E.

    2006-01-01

    In this work, the complexes formed between formamide and water were studied by means of the SAPT and AIM methods. Complexation leads to significant alterations in the geometries and electronic structure of formamide. Intermolecular interactions in the complexes are intense, especially in the cases where the solvent interacts with the carbonyl and amide groups simultaneously. In the transition states, the interaction between the water molecule and the lone pair on the amide nitrogen is also important. In all the complexes studied herein, the electrostatic interactions between formamide and water are the main attractive force, and their contribution may be five times as large as the corresponding contribution from dispersion, and twice as large as the contribution from induction. However, an increase in the resonance of planar formamide with the successive addition of water molecules may suggest that the hydrogen bonds taking place between formamide and water have some covalent character

  7. Simulation As a Method To Support Complex Organizational Transformations in Healthcare

    NARCIS (Netherlands)

    Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos

    2010-01-01

    In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that

  8. Discrimination of Rock Fracture and Blast Events Based on Signal Complexity and Machine Learning

    Directory of Open Access Journals (Sweden)

    Zilong Zhou

    2018-01-01

    The automatic discrimination of rock fracture and blast events is complex and challenging due to their similar waveform characteristics. To solve this problem, a new method based on signal complexity analysis and machine learning is proposed in this paper. First, the permutation entropy values of signals at different scale factors are calculated to reflect the complexity of the signals and are assembled into a feature vector set. Secondly, based on the feature vector set, a back-propagation neural network (BPNN), as a means of machine learning, is applied to establish a discriminator for rock fracture and blast events. Then, to evaluate the classification performance of the new method, the classification accuracies of a support vector machine (SVM), a naive Bayes classifier, and the new method are compared, and the receiver operating characteristic (ROC) curves are also analyzed. The results show that the new method obtains the best classification performance. In addition, the influence of different scale factors q and numbers of training samples n on the discrimination results is discussed. It is found that the classification accuracy of the new method reaches its highest value when q = 8-15 or 8-20 and n = 140.
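
    A plain-Python sketch of the multiscale permutation-entropy feature is given below. The coarse-graining and normalization follow common usage in the literature; the paper's exact choices of scale factors and network inputs may differ.

    ```python
    import math
    from itertools import permutations
    import numpy as np

    def permutation_entropy(x, order=3, scale=1):
        """Normalized permutation entropy of a 1-D signal.

        The signal is coarse-grained by averaging non-overlapping windows
        of length `scale`, every run of `order` samples is mapped to its
        ordinal pattern, and the Shannon entropy of the pattern histogram
        (normalized by log(order!)) measures signal complexity.
        """
        x = np.asarray(x, dtype=float)
        n = (x.size // scale) * scale
        coarse = x[:n].reshape(-1, scale).mean(axis=1)
        patterns = {p: 0 for p in permutations(range(order))}
        for i in range(coarse.size - order + 1):
            patterns[tuple(np.argsort(coarse[i:i + order]))] += 1
        counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
        probs = counts / counts.sum()
        return float(-(probs * np.log(probs)).sum()
                     / math.log(math.factorial(order)))

    # White noise is maximally complex (~1); a ramp is fully ordered (0).
    rng = np.random.default_rng(7)
    print(round(permutation_entropy(rng.normal(size=2000), 3, 2), 2))
    print(round(permutation_entropy(np.arange(2000.0), 3, 2), 2))
    ```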

  9. A Systematic Optimization Design Method for Complex Mechatronic Products Design and Development

    Directory of Open Access Journals (Sweden)

    Jie Jiang

    2018-01-01

    Designing a complex mechatronic product involves multiple design variables, objectives, constraints, and evaluation criteria, as well as their nonlinearly coupled relationships. The design space can be very large, consisting of many functional design parameters, structural design parameters, and behavioral design (or running performance) parameters. Given a large design space and inexplicit relations among its elements, how to design a product optimally is a challenging research problem. In this paper, we propose a systematic optimization design method based on design space reduction and surrogate modelling techniques. This method first identifies key design parameters from a very large design space to reduce the design space; secondly, it uses the identified key design parameters to establish a system surrogate model based on data-driven modelling principles for optimization design; and thirdly, it utilizes multiobjective optimization techniques to achieve an optimal design of the product in the reduced design space. The method has been tested on a high-speed train design. In comparison with other approaches, the research results show that this method is practical and useful for optimally designing complex mechatronic products.

  10. HKC: An Algorithm to Predict Protein Complexes in Protein-Protein Interaction Networks

    Directory of Open Access Journals (Sweden)

    Xiaomin Wang

    2011-01-01

    With the availability of more and more genome-scale protein-protein interaction (PPI) networks, research interest has gradually shifted to the systematic analysis of these large data sets. A key topic is to predict protein complexes in PPI networks by identifying clusters that are densely connected within themselves but sparsely connected with the rest of the network. In this paper, we present a new topology-based algorithm, HKC, to detect protein complexes in genome-scale PPI networks. HKC mainly uses the concepts of highest k-core and cohesion to predict protein complexes by identifying overlapping clusters. Experiments on two data sets and two benchmarks show that our algorithm has a relatively high F-measure and exhibits better performance compared with some other methods.
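
    The "highest k-core" primitive is standard graph machinery and easy to demonstrate with networkx, as below; HKC's cohesion measure and its expansion steps are not reproduced here.

    ```python
    import networkx as nx

    # The k-core is the maximal subgraph in which every node has degree >= k;
    # the densest (highest-k) core of a PPI network is a natural seed for a
    # candidate protein complex.
    g = nx.Graph()
    g.add_edges_from(nx.complete_graph(5).edges())   # dense "complex"
    g.add_edges_from([(0, 10), (10, 11), (11, 12)])  # sparse periphery

    core_numbers = nx.core_number(g)
    k_max = max(core_numbers.values())
    highest_core = nx.k_core(g, k=k_max)
    print(k_max, sorted(highest_core.nodes()))  # -> 4 [0, 1, 2, 3, 4]
    ```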

  11. Low-Pass Filtering Approach via Empirical Mode Decomposition Improves Short-Scale Entropy-Based Complexity Estimation of QT Interval Variability in Long QT Syndrome Type 1 Patients

    Directory of Open Access Journals (Sweden)

    Vlasta Bari

    2014-09-01

    Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on noise and/or the action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on the computation of the fastest intrinsic mode function via empirical mode decomposition (EMD) and its subtraction from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs), subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family, which develops long QT syndrome type 1 (LQT1) via the KCNQ1-A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of cardiovascular control that would otherwise have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.
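
    For reference, a plain-numpy sketch of the sample entropy estimator used as the complexity index is given below. The EMD prefiltering step (computing and subtracting the fastest intrinsic mode function) would precede this and is not shown; a dedicated package is typically used for EMD in practice.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Sample entropy SampEn(m, r): -log of the conditional probability
        that sequences matching for m points (within tolerance r, Chebyshev
        distance, self-matches excluded) also match for m + 1 points."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()

        def count_matches(length):
            # Same number of templates for m and m+1 (standard convention).
            templates = np.array([x[i:i + length] for i in range(x.size - m)])
            total = 0
            for i in range(len(templates) - 1):
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                total += int(np.sum(d <= r))
            return total

        b, a = count_matches(m), count_matches(m + 1)
        return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

    # Regular signals score low; noisy signals score high.
    rng = np.random.default_rng(8)
    t = np.arange(1000)
    print(round(sample_entropy(np.sin(0.1 * t)), 2))        # low
    print(round(sample_entropy(rng.normal(size=1000)), 2))  # higher
    ```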

  12. Pore-scale study on flow and heat transfer in 3D reconstructed porous media using micro-tomography images

    International Nuclear Information System (INIS)

    Liu, Zhenyu; Wu, Huiying

    2016-01-01

Highlights: • The complex porous domain has been reconstructed from micro-CT scan images. • A pore-scale numerical model based on the LB method has been established. • Correlations for flow and heat transfer were derived from the predictions. • The numerical approach developed in this work is suitable for complex porous media. - Abstract: This paper presents a numerical study of fluid flow and heat transfer in reconstructed porous media at the pore scale with the double-population thermal lattice Boltzmann (LB) method. The porous geometry was reconstructed using micro-tomography images from a micro-CT scanner. The thermal LB model was numerically tested before the simulation, and good agreement was achieved in comparison with existing results. The detailed distributions of velocity and temperature in complex pore spaces were obtained from the pore-scale simulation, and correlations for flow and heat transfer in the specific porous media sample were derived from the numerical results. The numerical method established in this work provides a promising approach for predicting pore-scale flow and heat transfer characteristics in reconstructed porous domains with real geometrical effects, and it can be extended to continuum modeling of transport processes in porous media at the macro scale.
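
For orientation, a single-population (flow-only) D2Q9 lattice Boltzmann step with bounce-back walls looks roughly as follows. This is a toy sketch, not the paper's double-population thermal model: the square 'grain', the relaxation time, and the velocity-shift forcing are illustrative choices.

```python
# Flow-only D2Q9 BGK lattice Boltzmann with full-way bounce-back on a mask.
# A toy of the pore-scale idea; the thermal population is omitted.
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])       # opposite directions
tau = 0.8                                         # relaxation time

NX, NY = 64, 64
solid = np.zeros((NX, NY), bool)
solid[28:36, 28:36] = True                        # one square "grain"
f = np.ones((9, NX, NY)) * w[:, None, None]       # rest state, rho = 1

def equilibrium(rho, ux, uy):
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2
                                     - 1.5 * (ux**2 + uy**2))

for step in range(1000):
    rho = f.sum(0)
    ux = (f * c[:, 0, None, None]).sum(0) / rho
    uy = (f * c[:, 1, None, None]).sum(0) / rho
    feq = equilibrium(rho, ux + 1e-6, uy)         # velocity shift = crude body force
    f[:, ~solid] -= (f[:, ~solid] - feq[:, ~solid]) / tau   # BGK collision (fluid)
    f[:, solid] = f[opp][:, solid]                # bounce-back at solid nodes
    for i in range(9):                            # streaming (periodic domain)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("mean pore x-velocity:", ux[~solid].mean())
```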

  13. LARGE-SCALE CO MAPS OF THE LUPUS MOLECULAR CLOUD COMPLEX

    International Nuclear Information System (INIS)

    Tothill, N. F. H.; Loehr, A.; Stark, A. A.; Lane, A. P.; Harnett, J. I.; Bourke, T. L.; Myers, P. C.; Parshley, S. C.; Wright, G. A.; Walker, C. K.

    2009-01-01

Fully sampled degree-scale maps of the 13 CO 2-1 and CO 4-3 transitions toward three members of the Lupus Molecular Cloud Complex (Lupus I, III, and IV) trace the column density and temperature of the molecular gas. Comparison with IR extinction maps from the c2d project requires most of the gas to have a temperature of 8-10 K. Estimates of the cloud mass from 13 CO emission are roughly consistent with most previous estimates, while the line widths are higher, around 2 km s^-1. CO 4-3 emission is found throughout Lupus I, indicating widespread dense gas, and toward Lupus III and IV. Enhanced line widths at the NW end and along the edge of the B 228 ridge in Lupus I, and a coherent velocity gradient across the ridge, are consistent with interaction between the molecular cloud and an expanding H I shell from the Upper-Scorpius subgroup of the Sco-Cen OB Association. Lupus III is dominated by the effects of two HAe/Be stars and shows no sign of external influence. Slightly warmer gas around the core of Lupus IV and a low line width suggest heating by the Upper-Centaurus-Lupus subgroup of Sco-Cen, without the effects of an H I shell.

  14. Large-scale medical image analytics: recent methodologies, applications and future directions.

    Science.gov (United States)

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that image retrieval systems should be scaled up significantly, to the point at which interactive systems become effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  15. A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.

    Science.gov (United States)

    Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G

    2017-08-01

Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and the angles between the beam and the surface normal, and between the detector and the surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. The development of quantitative determination method of organic acids in complex poly herbal extraction

    Directory of Open Access Journals (Sweden)

    I. L. Dyachok

    2016-08-01

Full Text Available Aim. The development of a sensitive, economical, and rapid method for the quantitative determination of organic acids in a complex polyherbal extract, calculated as isovaleric acid, with the use of digital technologies. Materials and methods. A model complex polyherbal extract of sedative action was chosen as the research object. The extract is composed of the following medicinal plants: Valeriana officinalis L., Crataegus, Melissa officinalis L., Hypericum, Mentha piperita L., Humulus lupulus, Viburnum. Based on the chemical composition of the plant components, the main pharmacologically active compounds in the extract are: polyphenolic substances (flavonoids), contained in Crataegus, Viburnum, Hypericum, Mentha piperita L. and Humulus lupulus; organic acids, including isovaleric acid, contained in Valeriana officinalis L., Mentha piperita L., Melissa officinalis L. and Viburnum; and amino acids, contained in Valeriana officinalis L. For the determination of organic acids at low concentration we applied an instrumental method of analysis, namely conductometric titration, based on the dependence of the conductivity of an aqueous solution of the extract on its organic acid content. Results. The analytical dependences obtained, which describe the tangent lines to the conductometric curve before and after the equivalence point, allow the volume of titrant consumed to be determined and the quantitative determination of organic acids to be carried out in digital mode. Conclusion. The proposed method enables the equivalence point to be determined and organic acids to be quantified, calculated as isovaleric acid, with the use of digital technologies, which allows the method as a whole to be computerized.
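
Numerically, the tangent-line evaluation reduces to two straight-line fits and an intersection. The sketch below uses invented conductivity readings and assumes a 0.1 M titrant and a 10 mL sample; it only illustrates the geometry of the determination, not the authors' procedure.

```python
# Tangent-intersection evaluation of a conductometric titration curve.
# Data, titrant concentration and sample volume are invented.
import numpy as np

v = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # titrant, mL
k = np.array([410, 395, 381, 366, 352, 362, 377, 391, 406.])  # conductivity

b1, a1 = np.polyfit(v[:5], k[:5], 1)    # descending branch: k = b1*v + a1
b2, a2 = np.polyfit(v[4:], k[4:], 1)    # ascending branch:  k = b2*v + a2

v_eq = (a2 - a1) / (b1 - b2)            # intersection = equivalence volume
c_titrant, v_sample = 0.1, 10.0         # mol/L and mL (assumed)
c_acid = c_titrant * v_eq / v_sample    # expressed as a single acid
print(f"equivalence at {v_eq:.2f} mL -> c(acid) = {c_acid:.4f} mol/L")
```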

  17. Complex energy system management using optimization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bridgeman, Stuart; Hurdowar-Castro, Diana; Allen, Rick; Olason, Tryggvi; Welt, Francois

    2010-09-15

Modern energy systems are often very complex with respect to the mix of generation sources, energy storage, transmission, and avenues to market. Historically, power was provided by government organizations to load centers, and pricing was set in a regulatory manner. In recent years, this arrangement has been displaced by the independent system operator (ISO). The complexity makes the operation of these systems very difficult, since the components of the system are interdependent. Consequently, computer-based large-scale simulation and optimization methods such as decision support systems (DSS) are now being used. This paper discusses the application of a DSS to operations and planning systems.

  18. Numerical study of SNCR application to a full-scale stoker incinerator at Daejon 4th industrial complex

    International Nuclear Information System (INIS)

    Hey-Suk Kim; Mi-Soo Shin; Dong-Soon Jang; Tae-In Ohm

    2004-01-01

Considering the rapid variation of waste composition and the increasingly severe regulation of pollutant emissions in Korea, the importance of developing a reliable computer program for a full-scale, stoker-type incinerator cannot be overemphasized, especially in view of the proper design and optimal determination of operating conditions of existing and future facilities. To this end, a comprehensive numerical model of the gaseous combustion of waste off-gas, for a 200 tons/day facility, was successfully constructed. This includes the development of several phenomenological models, such as municipal waste off-gas reaction and NO pollutant generation and destruction in a turbulence-dominated environment. In this study a number of sound assumptions were made for the NO reaction model, the 3-D geometry of the incinerator, and the waste-bed model, in order to achieve efficient incorporation of the empirical models and to enhance the stability of the calculation. First of all, the turbulence-related complex combustion chemistry involved in the NO reaction is modeled by the harmonic mean method, which blends the rates of chemistry and turbulent mixing according to their relative strengths. Further, the 3-D rectangular shape of the incinerator is approximated by a 3-D axisymmetric geometry of equivalent area, and the complex waste-burning process on the moving grate is described as a purely gaseous combustion process of waste off-gas. The program developed in this study was validated by comparison with experimental data, such as temperature and NO concentration profiles, from the incinerator located at the 4th industrial complex of Daejon, S. Korea. Using the program, a series of parametric investigations was carried out to evaluate the SNCR process and various important design and operating variables. The major parameters considered in this parametric study are the heating value of
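
The harmonic-mean closure named above has a one-line reading: the effective rate is the series combination of the chemical and turbulent-mixing rates, so the slower process dominates. The rate values below are placeholders, not the incinerator's parameters.

```python
# Series (harmonic) blending of chemical and mixing rates: the slower one wins.
# The numbers are placeholders, not the incinerator's actual parameters.
def effective_rate(r_chem, r_mix):
    return 1.0 / (1.0 / r_chem + 1.0 / r_mix)

print(effective_rate(r_chem=50.0, r_mix=5.0))   # ~4.5: mixing-limited regime
```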

  19. Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks

    Science.gov (United States)

    Renkewitz, Frank; Jahn, Georg

    2012-01-01

We validate an eye-tracking method applicable to studying memory processes in complex cognitive tasks. The method is tested with a task on probabilistic inferences from memory. It provides valuable data on the time course of processing, thus clarifying previous results on heuristic probabilistic inference. Participants learned cue values of…

  20. A Three-Dimensional, Immersed Boundary, Finite Volume Method for the Simulation of Incompressible Heat Transfer Flows around Complex Geometries

    Directory of Open Access Journals (Sweden)

    Hassan Badreddine

    2017-01-01

Full Text Available The current work focuses on the development and application of a new finite volume immersed boundary method (IBM) to simulate three-dimensional fluid flows and heat transfer around complex geometries. First, the discretization of the governing equations based on the second-order finite volume method on a Cartesian, structured, staggered grid is outlined, followed by a description of the modifications which have to be applied to the discretized system once a body is immersed into the grid. To validate the new approach, the heat conduction equation with a source term is solved inside a cavity with an immersed body. The approach is then tested for natural convection flow in a square cavity with and without a circular cylinder for different Rayleigh numbers. The results computed with the present approach compare very well with the benchmark solutions. As a next step in the validation procedure, the method is tested for Direct Numerical Simulation (DNS) of a turbulent flow around a surface-mounted matrix of cubes. The results computed with the present method compare very well with Laser Doppler Anemometry (LDA) measurements of the same case, showing that the method can be used for scale-resolving simulations of turbulence as well.

  1. Processor farming method for multi-scale analysis of masonry structures

    Science.gov (United States)

    Krejčí, Tomáš; Koudelka, Tomáš

    2017-07-01

This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of the stone blocks is much larger than the size of the mortar layers, so a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently, with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level (the structure), and the slave processors deal with the homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.
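
The master-slave division maps naturally onto a task pool. Below is a minimal farming sketch using Python's multiprocessing; the per-element RVE homogenization is a placeholder returning a fabricated effective conductivity, not the coupled heat and moisture solve.

```python
# Processor-farming sketch: a master macro loop farms per-element meso-scale
# homogenizations to a worker pool. The RVE "solve" is a fabricated placeholder.
import multiprocessing as mp

def homogenize_rve(args):
    elem_id, temperature, moisture = args
    # Placeholder meso-scale solve on the representative volume element.
    return elem_id, 1.5 + 0.01 * temperature - 0.2 * moisture

if __name__ == "__main__":
    macro_state = [(e, 20.0 + e, 0.1) for e in range(16)]   # toy macro fields
    with mp.Pool(processes=4) as pool:                      # the slave farm
        for time_step in range(3):                          # macro time loop
            effective = dict(pool.map(homogenize_rve, macro_state))
            # ...the master would update the macro fields from `effective` here...
        print(effective)
```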

  2. Scale relativity: from quantum mechanics to chaotic dynamics.

    Science.gov (United States)

    Nottale, L.

Scale relativity is a new approach to the problem of the origin of fundamental scales and of scaling laws in physics, which consists in generalizing Einstein's principle of relativity to the case of scale transformations of resolutions. We recall here how it leads to the concept of fractal space-time and to the introduction of a new complex time derivative operator, which allows one to recover the Schrödinger equation and then to generalize it. In high energy quantum physics, it leads to the introduction of a Lorentzian renormalization group, in which the Planck length is reinterpreted as a lowest, unpassable scale, invariant under dilatations. These methods are successively applied to two problems: in quantum mechanics, that of the mass spectrum of elementary particles; in chaotic dynamics, that of the distribution of planets in the Solar System.

  3. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching images with significant scale changes.
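
For background, the plain normalized cross-correlation that such similarity measures build on fits in a few lines; the random patches are placeholders, and MOCC's rotation- and scale-invariant machinery is not reproduced here.

```python
# Plain normalized cross-correlation between two equal-size patches.
# MOCC's oriented, multiscale descriptor is beyond this sketch.
import numpy as np

def ncc(p, q):
    p = (p - p.mean()) / p.std()
    q = (q - q.mean()) / q.std()
    return float(np.mean(p * q))     # 1.0 = identical up to gain and offset

rng = np.random.default_rng(1)
patch = rng.random((15, 15))
print(ncc(patch, patch))                  # ~1.0
print(ncc(patch, rng.random((15, 15))))   # ~0.0
```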

  4. Analysis of subgrid scale mixing using a hybrid LES-Monte-Carlo PDF method

    International Nuclear Information System (INIS)

    Olbricht, C.; Hahn, F.; Sadiki, A.; Janicka, J.

    2007-01-01

This contribution introduces a hybrid LES-Monte-Carlo method for a coupled solution of the flow field and the multi-dimensional scalar joint PDF in two complex mixing devices. For this purpose an Eulerian Monte-Carlo method is used. First, a complex mixing device (jet in crossflow, JIC) is presented, for which the stochastic convergence and the coherence between the scalar field obtained via the finite-volume method and that from the stochastic solution of the PDF are evaluated; results are compared to experimental data. Secondly, an extensive investigation of micromixing on the basis of assumed-shape and transported SGS-PDFs is carried out in a configuration of practical relevance, consisting of a mixing chamber with two opposite rows of jets penetrating a crossflow (multi-jet in crossflow, MJIC). Some numerical results are compared to available experimental data and to RANS-based results. It turns out that the hybrid LES-Monte-Carlo method can achieve a detailed analysis of the mixing at the subgrid level.

  5. Scaled MP3 non-covalent interaction energies agree closely with accurate CCSD(T) benchmark data.

    Science.gov (United States)

    Pitonák, Michal; Neogrády, Pavel; Cerný, Jirí; Grimme, Stefan; Hobza, Pavel

    2009-01-12

Scaled MP3 interaction energies, calculated as the sum of MP2/CBS (complete basis set limit) interaction energies and scaled third-order energy contributions obtained in small or medium basis sets, agree very closely with the estimated CCSD(T)/CBS interaction energies for the 22 H-bonded, dispersion-controlled and mixed non-covalent complexes from the S22 data set. The performance of this so-called MP2.5 method (third-order scaling factor of 0.5) has also been tested for 33 nucleic acid base pairs and two stacked conformers of porphine dimer. In all test cases, the performance of MP2.5 was superior to the scaled spin-component MP2 based methods, e.g. SCS-MP2, SCSN-MP2 and SCS(MI)-MP2. In particular, a very balanced treatment of hydrogen-bonded compared to stacked complexes is achieved with MP2.5. The main advantage of the approach is that it employs only a single empirical parameter and is otherwise anchored by two rigorously defined, asymptotically correct ab initio methods, MP2 and MP3. The method is proposed as an accurate but computationally feasible alternative to CCSD(T) for computing the properties of various kinds of non-covalently bound systems.
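
The MP2.5 recipe itself is a one-line combination. The energies in the example below are invented numbers for illustration, not values from the S22 set.

```python
# The MP2.5 combination: MP2/CBS plus half of the small-basis third-order term.
# Example energies (kcal/mol) are invented for illustration.
def mp2_5(e_mp2_cbs, e3_small_basis, scale=0.5):
    return e_mp2_cbs + scale * e3_small_basis

print(mp2_5(-10.2, 3.0))   # -8.7 kcal/mol interaction energy
```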

  6. Detection of circulating immune complexes in breast cancer and melanoma by three different methods

    Energy Technology Data Exchange (ETDEWEB)

    Krapf, F; Renger, D; Fricke, M; Kemper, A; Schedel, I; Deicher, H

    1982-08-01

By the simultaneous application of three methods, the C1q-binding test (C1q-BA), a two-antibody conglutinin-binding ELISA, and polyethylene-glycol 6000 precipitation with subsequent quantitative determination of immunoglobulins and complement factors in the redissolved precipitates (PPLaNT), circulating immune complexes could be demonstrated in the sera of 94% of patients with malignant melanoma and of 75% of breast cancer patients. The specific detection rates of the individual methods varied between 23% (C1q-BA) and 46% (PPLaNT), presumably due to the presence of qualitatively different immune complexes in the investigated sera. Accordingly, the simultaneous use of the aforementioned assays resulted in increased diagnostic sensitivity and a doubling of the predictive value. Nevertheless, because of the relatively low incidence of malignant diseases in the total population, and because circulating immune complexes occur with considerable frequency in other, non-malignant diseases, tests for circulating immune complexes must be regarded as less useful parameters in the early diagnosis of cancer.

  7. Correlation analysis of the Taurus molecular cloud complex

    International Nuclear Information System (INIS)

    Kleiner, S.C.

    1985-01-01

Autocorrelation and power spectrum methods were applied to the analysis of the density and velocity structure of the Taurus Complex and Heiles Cloud 2, as traced by 13 CO J = 1 → 0 molecular line observations obtained with the 14 m antenna of the Five College Radio Astronomy Observatory. Statistically significant correlations in the spacing of density fluctuations within the Taurus Complex and Heiles 2 were uncovered. The length scales of the observed correlations correspond in magnitude to the Jeans wavelengths characterizing gravitational instabilities in (i) interstellar atomic hydrogen gas, for the Taurus complex, and (ii) molecular hydrogen, for Heiles 2. The observed correlations may be the signatures of past and current gravitational instabilities frozen into the structure of the molecular gas. The appendices provide a comprehensive description of the analytical and numerical methods developed for the correlation analysis of molecular clouds.

  8. Comparison of Ho and Y complexation data obtained by electromigration methods, potentiometry and spectrophotometry

    International Nuclear Information System (INIS)

    Vinsova, H.; Koudelkova, M.; Ernestova, M.; Jedinakova-Krizova, V.

    2003-01-01

Many holmium and yttrium complex compounds of both organic and inorganic origin have recently been studied from the point of view of their radiopharmaceutical behavior. Complexes with Ho-166 and Y-90 can either be used directly as pharmaceutical preparations or be applied in conjugate form with a selected monoclonal antibody. In the latter case, appropriate bifunctional chelating agents are necessary for the indirect binding of the monoclonal antibody to the selected radionuclide. Our present study has focused on the characterization of the radionuclide (metal)-ligand interaction using various analytical methods. Electromigration methods (capillary electrophoresis, capillary isotachophoresis), potentiometric titration and spectrophotometry have been tested for their potential to determine conditional stability constants of holmium and yttrium complexes. The isotachophoretic determination of stability constants is based on the linear relation between the logarithm of the stability constant and the reduction of the zone of the complex. For the calculation of thermodynamic constants by potentiometry it was first necessary to determine the protonation constants of the acid. These were calculated using the computer program LETAGROP Etitr from data obtained by potentiometric acid-base titration. Subsequently, the titration curves of holmium and yttrium with the studied ligands and the protonation constants of the corresponding acid were used to calculate the metal-ligand stability constants. Spectrophotometric determination of stability constants of selected systems was based on titration of holmium and yttrium nitrate solutions with Arsenazo III, followed by titration of the metal-Arsenazo III complex with the selected ligand. The data obtained were evaluated using the computation program OPIUM. Results obtained by all analytical methods tested in this study have been compared. It was found that the direct potentiometric titration technique could not be

  9. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    International Nuclear Information System (INIS)

    Downar, Thomas; Seker, Volkan

    2013-01-01

Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local 'hot' spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length, describing the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor or tens to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale, the level of a fuel kernel of a pebble in a PBR or a fuel compact in a PMR, which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  10. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    Energy Technology Data Exchange (ETDEWEB)

    Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Seker, Volkan [Univ. of Michigan, Ann Arbor, MI (United States)

    2013-04-30

Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length, describing the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor or tens to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale, the level of a fuel kernel of a pebble in a PBR or a fuel compact in a PMR, which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  11. Decomposition of overlapping protein complexes: A graph theoretical method for analyzing static and dynamic protein associations

    Directory of Open Access Journals (Sweden)

    Guimarães Katia S

    2006-04-01

Full Text Available Abstract Background Most cellular processes are carried out by multi-protein complexes, groups of proteins that bind together to perform a specific task. Some proteins form stable complexes, while other proteins form transient associations and are part of several complexes at different stages of a cellular process. A better understanding of this higher-order organization of proteins into overlapping complexes is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Results We propose a new method for identifying and representing overlapping protein complexes (or larger units called functional groups) within a protein interaction network. We develop a graph-theoretical framework that enables the automatic construction of such a representation. We illustrate the effectiveness of our method by applying it to the TNFα/NF-κB and pheromone signaling pathways. Conclusion The proposed representation helps in understanding the transitions between functional groups and allows for tracking a protein's path through a cascade of functional groups. Therefore, depending on the nature of the network, our representation is capable of elucidating temporal relations between functional groups. Our results show that the proposed method opens a new avenue for the analysis of protein interaction networks.

  12. A new method for large-scale assessment of change in ecosystem functioning in relation to land degradation

    Science.gov (United States)

    Horion, Stephanie; Ivits, Eva; Verzandvoort, Simone; Fensholt, Rasmus

    2017-04-01

Ongoing pressures on European land are manifold, with extreme climate events and non-sustainable use of land resources being among the most important drivers altering the functioning of ecosystems. The protection and conservation of European natural capital is one of the key objectives of the 7th Environmental Action Plan (EAP). The EAP stipulates that European land must be managed in a sustainable way by 2020, and the UN Sustainable Development Goals define a land-degradation-neutral world as one of their targets. This implies that land degradation (LD) assessment of European ecosystems must be performed repeatedly, allowing for the assessment of the current state of LD as well as changes relative to a baseline adopted by the UNCCD for the objective of land degradation neutrality. However, scientifically robust methods are still lacking for large-scale assessment of LD and for repeated, consistent mapping of the state of terrestrial ecosystems. Historical land degradation assessments based on various methods exist, but those methods are generally non-replicable or difficult to apply at continental scale (Allan et al. 2007). The current lack of research methods applicable at large spatial scales is notably caused by the non-robust definition of LD, the scarcity of field data on LD, and the complex interplay of the processes driving LD (Vogt et al., 2011). Moreover, the link between LD and changes in land use (how land-use change relates to changes in vegetation productivity and ecosystem functioning) is not straightforward. In this study we used the segmented trend method developed by Horion et al. (2016) for large-scale systematic assessment of hotspots of change in ecosystem functioning in relation to LD. This method alleviates shortcomings of the widely used linear trend model, which does not account for abrupt change, nor adequately captures the actual changes in ecosystem functioning (de Jong et al. 2013; Horion et al. 2016). Here we present a new methodology for

  13. Simple spatial scaling rules behind complex cities.

    Science.gov (United States)

    Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2017-11-28

Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of the three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit in a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model provides a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predicts kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.

  14. PAFit: A Statistical Method for Measuring Preferential Attachment in Temporal Complex Networks.

    Directory of Open Access Journals (Sweden)

    Thong Pham

Full Text Available Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum likelihood based estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because it is based on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show that this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. Moreover, we found that the application of PAFit to a publicly available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman's original method, which had evidently gone unnoticed since its publication over a decade ago.
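
For orientation, the simple empirical estimator that PAFit is compared against (count the attachments received by degree-k nodes, normalized by how often degree k was available) can be written directly. This is a Jeong-style baseline on a toy edge list, not PAFit's maximum-likelihood procedure.

```python
# Empirical attachment-kernel baseline (Jeong-style), not PAFit's MLE.
# A(k) = attachments received by degree-k nodes / exposure of degree k.
from collections import defaultdict

def degree_histogram(deg):
    h = defaultdict(int)
    for k in deg.values():
        h[k] += 1
    return h

def attachment_kernel(edges_in_time_order):
    deg = defaultdict(int)
    attach = defaultdict(float)
    expose = defaultdict(float)
    for src, dst in edges_in_time_order:
        for k, n_k in degree_histogram(deg).items():
            expose[k] += n_k              # degree k was available n_k times
        attach[deg[dst]] += 1             # target had this degree when chosen
        deg[src] += 1
        deg[dst] += 1
    return {k: attach[k] / expose[k] for k in attach if expose[k]}

edges = [(0, 1), (2, 1), (3, 1), (4, 2), (5, 1), (6, 2), (7, 4)]
print(attachment_kernel(edges))   # A(k) growing with k hints at pref. attachment
```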

  15. Logarithmic corrections to scaling in the two-dimensional XY-model

    International Nuclear Information System (INIS)

    Kenna, R.; Irving, A.C.

    1995-01-01

We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form of the density of these zeroes. Assuming that finite-size scaling holds, we show that there must exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae, and we identify them numerically. The method presented here can be used to check the compatibility of the scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))

  16. An improved method to characterise the modulation of small-scale turbulence by large-scale structures

    Science.gov (United States)

    Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta

    2015-11-01

A key aspect of turbulent boundary layer dynamics is "modulation," which refers to the degree to which the intensity of coherent large-scale structures (LS) causes an amplification or attenuation of the intensity of the small-scale structures (SS) through scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define this envelope by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by means of a low-pass filtering step leads to a significant loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
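
The envelope construction under discussion is straightforward to reproduce. The sketch below follows the Mathis et al. recipe (Hilbert transform, modulus of the analytic signal, low-pass filter) on a synthetic modulated signal; the cutoff and signal parameters are placeholders for the DNS data.

```python
# Envelope of the small-scale signal: Hilbert transform, modulus, low-pass.
# The synthetic signal and the 5 Hz cutoff are placeholders for the DNS data.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
large = np.sin(2 * np.pi * 2 * t)                       # large-scale motion
small = (1 + 0.5 * large) * np.sin(2 * np.pi * 80 * t)  # modulated small scales

envelope = np.abs(hilbert(small))                       # modulus of analytic signal
b, a = butter(4, 5.0 / (fs / 2), btype="low")           # low-pass at ~5 Hz
envelope_lp = filtfilt(b, a, envelope)

print("modulation coefficient:", np.corrcoef(large, envelope_lp)[0, 1])
```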

  17. SIZE SCALING RELATIONSHIPS IN FRACTURE NETWORKS

    International Nuclear Information System (INIS)

    Wilson, Thomas H.

    2000-01-01

The research conducted under DOE grant DE-FG26-98FT40385 provides a detailed assessment of size scaling issues in natural fracture and active fault networks that extend over scales from several tens of kilometers to less than a tenth of a meter. This study incorporates analysis of data obtained from several sources, including natural fracture patterns photographed in the Appalachian field area, natural fracture patterns presented by other workers in the published literature, patterns of active faulting in Japan mapped at a scale of 1:100,000, and lineament patterns interpreted from satellite-based radar imagery obtained over the Appalachian field area. The complexity of these patterns is always found to vary with scale. In general, but not always, patterns become less complex with scale. This tendency may reverse, as can be inferred from high-resolution radar images (8 meter pixel size), which are characterized by patterns less complex than those observed over smaller areas on the ground surface. Model studies reveal that changes in the complexity of a fracture pattern can be associated with dominant spacings between the fractures comprising the pattern, or roughly with the rock areas bounded by fractures of a certain scale. While the results do not offer a magic number (the fractal dimension) to characterize fracture networks at all scales, the modeling and analysis provide results that can be interpreted directly in terms of the physical properties of the natural fracture or active fault complex. The scale breaks roughly define the size of fracture-bounded regions at different scales. The larger, more extensive sets of fractures will intersect and enclose regions of a certain size, whereas smaller, less extensive sets will do the same, i.e. subdivide the rock into even smaller regions. The interpretation varies depending on the number of sets that are present, but the scale breaks in the logN/logr plots serve as a guide to interpreting the
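
A fractal dimension of the kind discussed here is typically estimated by box counting: cover the trace map with boxes of side s and regress log N(s) against log(1/s). The sketch below runs on a random binary image standing in for a mapped fracture pattern; it illustrates the estimator, not the grant's analysis.

```python
# Box-counting estimate of a fractal dimension for a binary trace map.
# The random pattern is a placeholder for real fracture data.
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.sum(blocks.any(axis=(1, 3))))   # occupied boxes at scale s
    # slope of log N(s) vs log(1/s) is the box-counting dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

rng = np.random.default_rng(0)
pattern = rng.random((256, 256)) < 0.05
print("D ~", box_count_dimension(pattern))
```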

  18. Multi-Sensor As-Built Models of Complex Industrial Architectures

    Directory of Open Access Journals (Sweden)

    Jean-François Hullo

    2015-12-01

Full Text Available In the context of increased maintenance operations and generational renewal work, a nuclear owner and operator like Electricité de France (EDF) is invested in the scaling-up of tools and methods of "as-built virtual reality" for whole buildings and large audiences. In this paper, we first present the state of the art of scanning tools and methods used to represent a very complex architecture. Then, we propose a methodology and assess it in a large experiment carried out on the most complex building of a 1300-megawatt power plant, an 11-floor reactor building. We also present several developments that made possible the acquisition, processing and georeferencing of multiple data sources (1000+ 3D laser scans and RGB panoramas, total-station surveying, 2D floor plans) and the 3D reconstruction of as-built CAD models. In addition, we introduce new concepts for user interaction with complex architecture, elaborated during the development of an application that allows painless exploration of the whole dataset by professionals unfamiliar with such data types. Finally, we discuss the main feedback items from this large experiment, the remaining issues for the generalization of such large-scale surveys, and the future technical and scientific challenges in the field of industrial "virtual reality".

  19. A Comparison of Multidimensional Item Selection Methods in Simple and Complex Test Designs

    Directory of Open Access Journals (Sweden)

    Eren Halil ÖZBERK

    2017-03-01

Full Text Available In contrast with previous studies, this study employed various test designs (simple and complex) which allow the evaluation of overall ability score estimation across multiple realistic test conditions. Four factors were manipulated: the test design, the number of items per dimension, the correlation between dimensions, and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated using the M3PL compensatory multidimensional IRT model with specified correlations. MCAT composite ability score accuracy was evaluated using the absolute bias (ABSBIAS), the correlation, and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension, and the correlation between dimensions had significant effects on the item selection methods for overall score estimation. For the simple-structure test design, the V1 item selection method showed the lowest absolute bias for both long and short tests when estimating overall scores. As the model becomes more complex, the KL item selection method performs better than the other two item selection methods.

  20. Accurate and simple measurement method of complex decay schemes radionuclide activity

    International Nuclear Information System (INIS)

    Legrand, J.; Clement, C.; Bac, C.

    1975-01-01

A simple method for the measurement of activity is described. It consists of using a well-type sodium iodide crystal whose efficiency with monoenergetic photon rays has been computed or measured. For each radionuclide with a complex decay scheme a total efficiency is computed; it is shown that this efficiency is very high, near 100%. The associated uncertainty is low, in spite of the significant uncertainties on the different parameters used in the computation. The method has been applied to the measurement of the 152 Eu primary reference [fr]

  1. Design Analysis Method for Multidisciplinary Complex Product using SysML

    Directory of Open Access Journals (Sweden)

    Liu Jihong

    2017-01-01

Full Text Available In the design of multidisciplinary complex products, model-based systems engineering methods are widely used. However, these methodologies contain only a modeling order and simple analysis steps, and lack integrated design analysis methods supporting the whole process. In order to solve this problem, a conceptual design analysis method integrating modern design methods has been proposed. First, based on requirement analysis with a quantization matrix, the user's needs are quantitatively evaluated and translated into system requirements. Then, through function decomposition against the function knowledge base, the total function is semi-automatically decomposed into predefined atomic functions. The functions are matched to predefined structures through the behaviour layer, using function-structure mapping based on interface matching. Finally, based on the design structure matrix (DSM), the structure reorganization is completed. The analysis process is implemented with SysML and illustrated through an aircraft air conditioning system for validation.

  2. A multi-scale network method for two-phase flow in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick

    2017-08-01

Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore networks, it is crucial that the networks be large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes flow-rate effects into account and can simulate larger domains than existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each sub-network consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the sub-networks are computed. Lastly, using these fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.

  3. A multi-scale network method for two-phase flow in porous media

    International Nuclear Information System (INIS)

    Khayrat, Karim; Jenny, Patrick

    2017-01-01

Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore networks, it is crucial that the networks be large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes flow-rate effects into account and can simulate larger domains than existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each sub-network consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the sub-networks are computed. Lastly, using these fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.

  4. Maxwell iteration for the lattice Boltzmann method with diffusive scaling

    Science.gov (United States)

    Zhao, Weifeng; Yong, Wen-An

    2017-03-01

In this work, we present an alternative derivation of the Navier-Stokes equations from Bhatnagar-Gross-Krook models of the lattice Boltzmann method with diffusive scaling. This derivation is based on the Maxwell iteration and can expose certain important features of the lattice Boltzmann solutions. Moreover, it will be seen to be much more straightforward and logically clearer than existing approaches, including the Chapman-Enskog expansion.

  5. Optimal Output of Distributed Generation Based On Complex Power Increment

    Science.gov (United States)

    Wu, D.; Bao, H.

    2017-12-01

In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind and photovoltaic power, has been widely adopted. New energy sources access the distribution network in the form of distributed generation and are consumed by local load. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different power generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment, whose essence is the analysis of the power grid under steady power flow. From this analysis we obtain the complex scaling function equation between the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can describe the power increment relationship accurately, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  6. Magnetic storm generation by large-scale complex structure Sheath/ICME

    Science.gov (United States)

    Grigorenko, E. E.; Yermolaev, Y. I.; Lodkina, I. G.; Yermolaev, M. Y.; Riazantseva, M.; Borodkova, N. L.

    2017-12-01

We study temporal profiles of interplanetary plasma and magnetic field parameters as well as magnetospheric indices. We use our catalog of large-scale solar wind phenomena for the 1976-2000 interval (see the catalog for 1976-2016 at ftp://ftp.iki.rssi.ru/pub/omni/, prepared on the basis of the OMNI database (Yermolaev et al., 2009)) and the double superposed epoch analysis method (Yermolaev et al., 2010). Our analysis showed (Yermolaev et al., 2015) that the average profiles of the Dst and Dst* indices decrease in the Sheath interval (magnetic storm activity increases) and increase in the ICME interval. This profile coincides with the inverted distribution of storm numbers in the two intervals (Yermolaev et al., 2017). This behavior is explained by the following reasons. (1) The IMF magnitude in the Sheath is higher than in the Ejecta and close to the value in the MC. (2) The Sheath has 1.5 times higher efficiency of storm generation than the ICME (Nikolaeva et al., 2015). Most so-called CME-induced storms are really Sheath-induced storms, and this fact should be taken into account in Space Weather prediction. The work was supported in part by the Russian Science Foundation, grant 16-12-10062. References. 1. Nikolaeva N. S., Y. I. Yermolaev and I. G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Res., 53(2), 119-127. 2. Yermolaev Yu. I., N. S. Nikolaeva, I. G. Lodkina and M. Yu. Yermolaev (2009), Catalog of Large-Scale Solar Wind Phenomena during 1976-2000, Cosmic Res., 47(2), 81-94. 3. Yermolaev, Y. I., N. S. Nikolaeva, I. G. Lodkina, and M. Y. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, 2177-2186. 4. Yermolaev Yu. I., I. G. Lodkina, N. S. Nikolaeva and M. Yu. Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch

  7. Impact of a Modified Jigsaw Method for Learning an Unfamiliar, Complex Topic

    Directory of Open Access Journals (Sweden)

    Denise Kolanczyk

    2017-09-01

Full Text Available Objective: The aim of this study was to use the jigsaw method with an unfamiliar, complex topic and to evaluate the effectiveness of the jigsaw teaching method on student learning of assigned material ("jigsaw expert") versus non-assigned material ("jigsaw learner"). Innovation: The innovation was implemented in an advanced cardiology elective. Forty students were assigned a pre-reading and one of four valvular heart disorders, a topic not previously taught in the curriculum. A pre-test and post-test evaluated overall student learning. Student performance on the pre/post tests as "jigsaw expert" and "jigsaw learner" was also compared. Critical Analysis: Overall, the post-test mean score of 85.75% was significantly higher than the pre-test score of 56.75% (p<0.05). There was significant improvement in scores regardless of whether the material was assigned ("jigsaw experts": pre = 58.8%, post = 82.5%; p<0.05) or not assigned ("jigsaw learners": pre = 56.25%, post = 86.56%; p<0.05). Next Steps: The use of the jigsaw method to teach unfamiliar, complex content helps students to become both teachers and active listeners, which are essential to the skills and professionalism of a health care provider. Further studies are needed to evaluate the use of the jigsaw method to teach unfamiliar, complex content on long-term retention and to further examine the effects of expert vs. non-expert roles. Conflict of Interest: We declare no conflicts of interest or financial interests that the authors or members of their immediate families have in any product or service discussed in the manuscript, including grants (pending or received), employment, gifts, stock holdings or options, honoraria, consultancies, expert testimony, patents and royalties. Type: Note

  8. Landscape Aesthetics and the Scenic Drivers of Amenity Migration in the New West: Naturalness, Visual Scale, and Complexity

    Directory of Open Access Journals (Sweden)

    Jelena Vukomanovic

    2014-04-01

Full Text Available Values associated with scenic beauty are common "pull factors" for amenity migrants, but the specific landscape features that attract amenity migration are poorly understood. In this study we focused on three visual quality metrics of the intermountain West (USA), with the objective of exploring the relationship between the location of exurban homes and aesthetic landscape preference, as exemplified through greenness, viewshed size, and terrain ruggedness. Using viewshed analysis, we compared the viewsheds of actual exurban houses to the viewsheds of randomly distributed simulated (validation) houses. We found that the actual exurban households can see significantly more vegetation and a more rugged (complex) terrain than the simulated houses. Actual exurban homes see more rugged terrain but do not necessarily see the highest peaks, suggesting that visual complexity throughout the viewshed may be more important. The viewsheds visible from the actual exurban houses were significantly larger than those visible from the simulated houses, indicating that visual scale is important to the general aesthetic experiences of exurbanites. The differences in visual quality metric values between actual exurban and simulated viewsheds call into question the use of county-level scales of analysis for the study of landscape preferences, which may miss key landscape aesthetic drivers of preference.

  9. Computational Experiment Study on Selection Mechanism of Project Delivery Method Based on Complex Factors

    Directory of Open Access Journals (Sweden)

    Xiang Ding

    2014-01-01

Full Text Available Project delivery planning is a key stage used by the project owner (or project investor) for organizing design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on the PDM. Experiment results show that the project owner usually chooses the Design-Build method when the project is very complex, within a certain range. Besides, this paper points out that the Design-Build method will be the preferred choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how the factors affect PDM selection, and it may improve project performance.

  10. The mechanism behind internally generated centennial-to-millennial scale climate variability in an earth system model of intermediate complexity

    Directory of Open Access Journals (Sweden)

    T. Friedrich

    2010-08-01

Full Text Available The mechanism triggering centennial-to-millennial-scale variability of the Atlantic Meridional Overturning Circulation (AMOC) in the earth system model of intermediate complexity LOVECLIM is investigated. It is found that for several climate boundary conditions, such as low obliquity values (~22.1°) or LGM albedo, internally generated centennial-to-millennial-scale variability occurs in the North Atlantic region. Stochastic excitations of the density-driven overturning circulation in the Nordic Seas can create regional sea-ice anomalies and a subsequent reorganization of the atmospheric circulation. The resulting remote atmospheric anomalies over Hudson Bay can release freshwater pulses into the Labrador Sea and significantly increase snowfall in this region, leading to a subsequent reduction of convective activity. The millennial-scale AMOC oscillations disappear if LGM bathymetry (with a closed Hudson Bay) is prescribed or if the freshwater pulses are suppressed artificially. Furthermore, our study documents the process of the AMOC recovery as well as the global marine and terrestrial carbon cycle response to centennial-to-millennial-scale AMOC variability.

  11. Protein complex detection in PPI networks based on data integration and supervised learning method.

    Science.gov (United States)

    Yu, Feng; Yang, Zhi; Hu, Xiao; Sun, Yuan; Lin, Hong; Wang, Jian

    2015-01-01

    Revealing protein complexes is important for understanding principles of cellular organization and function. High-throughput experimental techniques have produced a large number of protein interactions, which makes it possible to predict protein complexes from protein-protein interaction (PPI) networks. However, the small amount of known physical interactions may limit protein complex detection. Here, new PPI networks are constructed by integrating PPI datasets with the large and readily available PPI data from biomedical literature, and less reliable PPIs between two proteins are then filtered out based on the semantic similarity and topological similarity of the two proteins. Finally, supervised learning protein complex detection (SLPC), which can make full use of the information in available known complexes, is applied to detect protein complexes on the new PPI networks. The experimental results of SLPC on two categories of yeast PPI networks demonstrate the effectiveness of the approach: compared with the original PPI networks, best average improvements of 4.76, 6.81 and 15.75 percentage units in the F-score, accuracy and maximum matching ratio (MMR) are achieved, respectively; compared with the denoised PPI networks, best average improvements of 3.91, 4.61 and 12.10 percentage units in the F-score, accuracy and MMR are achieved, respectively; and compared with ClusterONE, the state-of-the-art complex detection method, on the denoised extended PPI networks, average improvements of 26.02 and 22.40 percentage units in the F-score and MMR are achieved, respectively. The experimental results show that the performance of SLPC improves substantially when new PPI data retrieved from the biomedical literature are integrated into the original and denoised PPI networks. In addition, our protein complex detection method achieves better performance than ClusterONE.
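
    The edge-filtering step can be sketched as follows: a literature-derived interaction is kept only if the protein pair is sufficiently similar either semantically or topologically. The thresholds, the Jaccard choice for topological similarity, the keep-if-either rule, and the sem_sim input are assumptions for illustration, not the paper's exact measures.

        import networkx as nx

        def filter_ppi(g: nx.Graph, sem_sim, min_sem=0.4, min_topo=0.2) -> nx.Graph:
            """Return a copy of g without low-confidence edges.

            sem_sim: dict mapping frozenset({u, v}) -> semantic similarity in [0, 1].
            Topological similarity here is the Jaccard index of neighbourhoods.
            """
            filtered = g.copy()
            for u, v in list(g.edges()):
                nu, nv = set(g[u]) - {v}, set(g[v]) - {u}
                union = nu | nv
                topo = len(nu & nv) / len(union) if union else 0.0
                # Drop the edge only when both similarity signals are weak.
                if sem_sim.get(frozenset((u, v)), 0.0) < min_sem and topo < min_topo:
                    filtered.remove_edge(u, v)
            return filtered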

  12. A digital processing method for the analysis of complex nuclear spectra

    International Nuclear Information System (INIS)

    Madan, V.K.; Abani, M.C.; Bairi, B.R.

    1994-01-01

    This paper describes a digital processing method using frequency power spectra for the analysis of complex nuclear spectra. The power spectra were estimated by employing a modified discrete Fourier transform. The method was applied to observed spectral envelopes. The results for separating closely spaced doublets in nuclear spectra of low statistical precision compared favorably with those obtained using the popular peak-fitting program SAMPO. The paper also describes the limitations of peak-fitting methods, and the advantages of digital processing techniques for type II digital signals, including nuclear spectra. A compact computer program occupying less than 2.5 kByte of memory was written in BASIC for the processing of observed spectral envelopes. (orig.)
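
    A minimal sketch of the frequency-domain idea, using numpy's standard FFT in place of the paper's modified discrete Fourier transform, whose details are not given here:

        import numpy as np

        def power_spectrum(envelope):
            """Power spectrum of a 1-D spectral envelope (counts per channel)."""
            envelope = np.asarray(envelope, dtype=float)
            envelope = envelope - envelope.mean()      # remove the DC component
            spectrum = np.fft.rfft(envelope)
            return (spectrum.real**2 + spectrum.imag**2) / len(envelope)

        # A doublet (two closely spaced peaks) modulates the power spectrum in a
        # way a single peak does not, which is what such methods exploit.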

  13. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Full Text Available Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on workers' experience is still applied at present, because it is difficult to determine, in an integrated way, the relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial to the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, with a substantial reduction in calculation time. The application of the simplified deformation simulation method was then further explored in the case of multiple rolling loading paths, and it was also utilized to calculate the local line rolling forming of a typical complex curvature plate of ships. The research findings indicate that the simplified deformation simulation method is an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.

  14. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for the ghost and singularity data of 3D grid blocks, and we overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations
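
    The CPU-GPU load-balancing idea can be illustrated with a throughput-proportional split capped by GPU memory; all numbers below are illustrative assumptions, not values from the paper.

        def split_cells(n_cells, gpu_rate, cpu_rate, gpu_mem_cells):
            """Return (gpu_cells, cpu_cells) for a throughput-proportional split,
            capped by how many cells fit in the GPU's smaller memory."""
            gpu_share = gpu_rate / (gpu_rate + cpu_rate)
            gpu_cells = min(int(n_cells * gpu_share), gpu_mem_cells)
            return gpu_cells, n_cells - gpu_cells

        # Hypothetical node: GPU 1.3x faster than the CPUs, but memory-limited.
        print(split_cells(n_cells=10_000_000, gpu_rate=1.3, cpu_rate=1.0,
                          gpu_mem_cells=4_000_000))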

  15. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for the ghost and singularity data of 3D grid blocks, and we overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  16. Developing an Assessment Method of Active Aging: University of Jyvaskyla Active Aging Scale.

    Science.gov (United States)

    Rantanen, Taina; Portegijs, Erja; Kokko, Katja; Rantakokko, Merja; Törmäkangas, Timo; Saajanaho, Milla

    2018-01-01

    To develop an assessment method of active aging for research on older people. A multiphase process that included drafting by an expert panel, a pilot study for item analysis and scale validity, a feedback study with focus groups and questionnaire respondents, and a test-retest study. Altogether 235 people aged 60 to 94 years provided responses and/or feedback. We developed a 17-item University of Jyvaskyla Active Aging Scale with four aspects in each item (goals, ability, opportunity, and activity; range 0-272). The psychometric and item properties are good and the scale assesses a unidimensional latent construct of active aging. Our scale assesses older people's striving for well-being through activities pertaining to their goals, abilities, and opportunities. The University of Jyvaskyla Active Aging Scale provides a quantifiable measure of active aging that may be used in postal questionnaires or interviews in research and practice.
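
    The reported range is consistent with each of the four aspects being scored 0-4 per item, since 17 x 4 x 4 = 272; that scoring rule is an inference from the range, not a documented detail. A minimal scoring sketch:

        def total_score(responses):
            """responses: list of 17 (goal, ability, opportunity, activity) tuples,
            each aspect assumed to be scored 0-4."""
            assert len(responses) == 17
            assert all(len(item) == 4 and all(0 <= a <= 4 for a in item)
                       for item in responses)
            return sum(sum(item) for item in responses)

        print(total_score([(4, 4, 4, 4)] * 17))  # -> 272, the scale maximum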

  17. FDTD method for laser absorption in metals for large scale problems.

    Science.gov (United States)

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grid cells. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
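
    A back-of-envelope sketch of why enlarging the wavelength helps: at a fixed resolution of roughly 20 cells per wavelength (an assumed figure), the cell count of a cubic domain scales as (L / (lambda/20))^3, so a longer effective wavelength cuts the grid dramatically.

        def fdtd_cells(domain_m, wavelength_m, cells_per_wavelength=20):
            """Rough 3-D cell count for a cubic domain of side domain_m."""
            dx = wavelength_m / cells_per_wavelength
            return (domain_m / dx) ** 3

        print(f"{fdtd_cells(5e-3, 1.06e-6):.2e} cells at 1.06 um")     # prohibitive
        print(f"{fdtd_cells(5e-3, 1.06e-4):.2e} cells at 100x longer")  # tractable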

  18. SVC Planning in Large–scale Power Systems via a Hybrid Optimization Method

    DEFF Research Database (Denmark)

    Yang, Guang ya; Majumder, Rajat; Xu, Zhao

    2009-01-01

    Research on the allocation of FACTS devices has attracted considerable interest from various perspectives. In this paper, a hybrid model is proposed to optimise the number, locations, and parameter settings of static Var compensators (SVC) deployed in large-scale power systems. The model utilises the results of vulnerability assessment to determine the candidate locations. A two-stage hybrid optimisation method is proposed to find the optimal SVC solution in the large-scale planning problem. In the first stage, a conventional genetic algorithm (GA) is exploited to generate a candidate solution pool. In the second stage, the candidates are presented to a linear planning model to investigate the system's optimal loadability, from which the optimal solution for SVC planning is obtained. The method is demonstrated on the IEEE 300-bus system.
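
    The two-stage structure can be sketched with a toy GA that builds a candidate pool and a placeholder evaluator standing in for the linear loadability model; the fitness function, system size, and operators below are all illustrative assumptions.

        import random

        BUSES = list(range(30))          # candidate SVC locations (toy system)

        def fitness(placement):          # placeholder for the stage-one objective
            return -len(placement) + 0.1 * sum(placement)

        def loadability(placement):      # placeholder for the LP-based model
            return sum(placement) / (1 + len(placement))

        def ga_pool(pop=40, gens=50, k=10):
            population = [random.sample(BUSES, 3) for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=fitness, reverse=True)
                parents = population[: pop // 2]
                # Crude recombination of two parents; no mutation for brevity.
                children = [sorted(set(random.choice(parents)[:2] +
                                       random.choice(parents)[1:]))
                            for _ in range(pop - len(parents))]
                population = parents + children
            return population[:k]        # candidate pool passed to stage two

        best = max(ga_pool(), key=loadability)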

  19. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    International Nuclear Information System (INIS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate
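
    The sparse-map idea itself can be illustrated with a simple data structure: for each orbital, store only the indices of significantly interacting partners, so later loops run over short lists rather than over all pairs. This is a conceptual sketch with hypothetical data, not the actual implementation.

        from collections import defaultdict

        def build_sparse_map(pair_magnitude, threshold=1e-4):
            """pair_magnitude: dict {(i, j): estimated pair-energy magnitude}."""
            sparse = defaultdict(list)
            for (i, j), mag in pair_magnitude.items():
                if mag >= threshold:      # screen out negligible pairs
                    sparse[i].append(j)
                    sparse[j].append(i)
            return sparse                 # loops over sparse[i] scale linearly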

  20. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as concrete manifestations of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns by means of a dissimilarity matrix based on modified cross-sample entropy, and three-dimensional perceptual maps of the results are then provided through a multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups which, on analysis, correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
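
    The final embedding step can be sketched directly, since classical MDS accepts any precomputed dissimilarity matrix; the cross-sample-entropy computation is omitted here, and a placeholder symmetric matrix stands in for it.

        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(1)
        a = rng.random((18, 18))
        dissim = (a + a.T) / 2            # placeholder for cross-sample entropy
        np.fill_diagonal(dissim, 0.0)

        mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(dissim)  # 3-D perceptual map of the 18 indices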

  1. A fast method for large-scale isolation of phages from hospital ...

    African Journals Online (AJOL)

    This plaque-forming method could be adopted to isolate E. coli phage easily, rapidly and in large quantities. Among the 18 isolated E. coli phages, 10 of them had a broad host range in E. coli and warrant further study. Key words: Escherichia coli phages, large-scale isolation, drug resistance, biological properties.

  2. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    Directory of Open Access Journals (Sweden)

    Leandro de Jesus Benevides

    Full Text Available Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed to perform a phylogenetic study on apo E-carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The accordance of results from the different methods shows that the complex networks approach is effective for phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups.
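
    The network side of such an analysis can be sketched by linking sequences whose pairwise identity exceeds a threshold and reading off the connected components; the toy identity metric (no alignment) and the threshold are assumptions for illustration.

        import networkx as nx

        def identity(seq_a, seq_b):
            """Fraction of matching positions (toy metric; no alignment)."""
            n = min(len(seq_a), len(seq_b))
            return sum(a == b for a, b in zip(seq_a, seq_b)) / n

        def similarity_clusters(seqs, threshold=0.8):
            """seqs: dict {name: amino-acid string}. Returns node clusters."""
            g = nx.Graph()
            g.add_nodes_from(seqs)
            names = list(seqs)
            for i, x in enumerate(names):
                for y in names[i + 1:]:
                    if identity(seqs[x], seqs[y]) >= threshold:
                        g.add_edge(x, y)
            return list(nx.connected_components(g))  # e.g. mammal vs. fish groups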

  3. A Feasibility Study of a Field-specific Weather Service for Small-scale Farms in a Topographically Complex Watershed

    Science.gov (United States)

    Kim, S. O.; Shim, K. M.; Shin, Y. S.; Yun, J. I.

    2015-12-01

    Adequate downscaling of synoptic forecasts is a prerequisite for improved agrometeorological service to rural areas in South Korea, where complex terrain and small farms are common. Geospatial schemes based on topoclimatology were used to scale down the Korea Meteorological Administration (KMA) temperature forecasts to the local scale (~30 m) across a rural catchment. Local temperatures were estimated at 14 validation sites at 0600 and 1500 LST in 2013/2014 using these schemes and were compared with observations. A substantial reduction in estimation error was found for both the 0600 and 1500 temperatures compared with uncorrected KMA products. Improvement was most remarkable at low-lying locations for the 0600 temperature and at locations on west- and south-facing slopes for the 1500 temperature. Using the downscaled real-time temperature data, a pilot service has been started to provide field-specific weather information tailored to the requirements of small-scale farms. For example, the service system produces a daily outlook on the phenology of crop species grown in a given field using the field-specific temperature data. When the temperature forecast is given for tomorrow morning, a frost risk index is calculated according to a known phenology-frost injury relationship. If the calculated index is higher than a pre-defined threshold, a warning is issued and delivered to the grower's cellular phone together with relevant countermeasures to help protect crops against frost damage. The system was implemented for a topographically complex catchment of 350 km² with diverse agricultural activities, and more than 400 volunteer farmers are participating in this pilot service to access user-specific weather information.
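
    The warning rule described above can be sketched as a threshold test on a frost-risk index; the index formula, critical temperature, and threshold below are hypothetical placeholders, not the service's actual model.

        def frost_risk(t_min_c, critical_t_c):
            """Risk grows as the forecast minimum temperature drops below the
            stage-specific critical temperature (deg C)."""
            return max(0.0, critical_t_c - t_min_c)

        RISK_THRESHOLD = 1.0
        if frost_risk(t_min_c=-2.5, critical_t_c=-1.0) > RISK_THRESHOLD:
            print("Frost warning: notify grower with countermeasures")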

  4. Perspective: Differential dynamic microscopy extracts multi-scale activity in complex fluids and biological systems

    Science.gov (United States)

    Cerbino, Roberto; Cicuta, Pietro

    2017-09-01

    Differential dynamic microscopy (DDM) is a technique that exploits optical microscopy to obtain local, multi-scale quantitative information about dynamic samples, in most cases without user intervention. It is proving extremely useful in understanding dynamics in liquid suspensions, soft materials, cells, and tissues. In DDM, image sequences are analyzed via a combination of image differences and spatial Fourier transforms to obtain information equivalent to that obtained by means of light scattering techniques. Compared to light scattering, DDM offers obvious advantages, principally (a) simplicity of the setup; (b) possibility of removing static contributions along the optical path; (c) power of simultaneous different microscopy contrast mechanisms; and (d) flexibility of choosing an analysis region, analogous to a scattering volume. For many questions, DDM has also advantages compared to segmentation/tracking approaches and to correlation techniques like particle image velocimetry. The very straightforward DDM approach, originally demonstrated with bright field microscopy of aqueous colloids, has lately been used to probe a variety of other complex fluids and biological systems with many different imaging methods, including dark-field, differential interference contrast, wide-field, light-sheet, and confocal microscopy. The number of adopting groups is rapidly increasing and so are the applications. Here, we briefly recall the working principles of DDM, we highlight its advantages and limitations, we outline recent experimental breakthroughs, and we provide a perspective on future challenges and directions. DDM can become a standard primary tool in every laboratory equipped with a microscope, at the very least as a first bias-free automated evaluation of the dynamics in a system.
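
    The core DDM computation can be sketched in a few lines: the image structure function is the azimuthally averaged power spectrum of image differences at a given lag. Windowing, calibration, and model fitting are omitted; frames is an assumed (T, N, N) image stack.

        import numpy as np

        def ddm_structure_function(frames, dt):
            """Average |FFT(I(t+dt) - I(t))|^2 over all frame pairs at lag dt."""
            diffs = frames[dt:] - frames[:-dt]           # image differences
            power = np.abs(np.fft.fft2(diffs, axes=(1, 2)))**2
            return power.mean(axis=0)

        def radial_average(img2d, n_bins=50):
            """Azimuthal average, giving D(q) on a 1-D wavenumber axis."""
            n = img2d.shape[0]
            ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
            q = np.hypot(kx, ky).ravel()
            bins = np.linspace(0, q.max(), n_bins)
            idx = np.digitize(q, bins)
            vals = img2d.ravel()
            return np.array([vals[idx == i].mean() if np.any(idx == i) else 0.0
                             for i in range(1, n_bins)])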

  5. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    OpenAIRE

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power...
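
    The general framework can be sketched as follows: simulate data repeatedly from an assumed mediation model (X -> M -> Y), test the indirect effect each time, and report the fraction of significant replications as power. The effect sizes, sample size, and use of a Sobel z-test are illustrative assumptions.

        import numpy as np

        def _slope_se(x, y):
            """OLS slope and its standard error for a simple regression."""
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            sigma2 = (resid @ resid) / (len(x) - 2)
            sxx = (x - x.mean()) @ (x - x.mean())
            return slope, np.sqrt(sigma2 / sxx)

        def power_mediation(a=0.3, b=0.3, n=200, reps=500, seed=0):
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(reps):
                x = rng.normal(size=n)
                m = a * x + rng.normal(size=n)
                y = b * m + rng.normal(size=n)
                a_hat, sa = _slope_se(x, m)
                b_hat, sb = _slope_se(m, y)
                z = (a_hat * b_hat) / np.sqrt(a_hat**2 * sb**2 + b_hat**2 * sa**2)
                hits += abs(z) > 1.96     # Sobel z-test, two-sided 5% level
            return hits / reps

        print(power_mediation())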

  6. Quantitative analysis of surface deformation and ductile flow in complex analogue geodynamic models based on PIV method.

    Science.gov (United States)

    Krýza, Ondřej; Lexa, Ondrej; Závada, Prokop; Schulmann, Karel; Gapais, Denis; Cosgrove, John

    2017-04-01

    Recently, the PIV (particle image velocimetry) analysis method has become an optical method abundantly used in many technical branches where material flow visualization and quantification are important. Typical examples are studies of liquid flow through complex channel systems, gas dispersion, or combustion problems. In our current research we used this method to investigate two types of complex analogue geodynamic and tectonic experiments. The first class of experiments models large-scale oroclinal buckling as an analogue of the late Paleozoic to early Mesozoic evolution of the Central Asian Orogenic Belt (CAOB) resulting from the northward drift of the North-China craton towards the Siberian craton. Here we studied the relationship between lower crustal and lithospheric mantle flows and upper crustal deformation, respectively. The second class of experiments focuses on a more general study of lower crustal flow in indentation systems, which represent a major component of some large hot orogens (e.g. the Bohemian massif). Most of the simulations in both cases show a strong dependency of the shape of brittle structures situated in the upper crust on the folding style of the middle and lower ductile layers, which is influenced by the rheological, geometrical and thermal conditions of different parts across the shortened domain. The purpose of the PIV application is to quantify material redistribution in critical domains of the model. The derivation of flow directions and the calculation of strain-rate and total displacement fields in analogue experiments are generally difficult and time-expensive, and often performed only on the basis of visual evaluation. The PIV method operates with a set of images, where small tracer particles are seeded within the modeled domain and are assumed to faithfully follow the material flow. From the estimated pixel coordinates, the material displacement field, velocity field, strain rate, vorticity, tortuosity, etc. are calculated. In our experiments we used velocity field divergence to
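
    The divergence computation mentioned at the end can be sketched with numpy, assuming a PIV velocity field (u, v) sampled on a regular grid with constant spacing:

        import numpy as np

        def divergence(u, v, dx=1.0, dy=1.0):
            """du/dx + dv/dy for 2-D arrays u(y, x), v(y, x) on a regular grid."""
            dudx = np.gradient(u, dx, axis=1)
            dvdy = np.gradient(v, dy, axis=0)
            return dudx + dvdy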

  7. A ghost-cell immersed boundary method for flow in complex geometry

    International Nuclear Information System (INIS)

    Tseng, Y.-H.; Ferziger, Joel H.

    2003-01-01

    An efficient ghost-cell immersed boundary method (GCIBM) for simulating turbulent flows in complex geometries is presented. A boundary condition is enforced through a ghost-cell method. The reconstruction procedure allows systematic development of numerical schemes for treating the immersed boundary while preserving the overall second-order accuracy of the base solver. Both Dirichlet and Neumann boundary conditions can be treated. The current ghost-cell treatment is suitable for both staggered and non-staggered Cartesian grids. The accuracy of the current method is validated using flow past a circular cylinder and large eddy simulation of turbulent flow over a wavy surface. Numerical results are compared with experimental data and boundary-fitted grid results. The method is further extended to an existing ocean model (MITGCM) to simulate geophysical flow over a three-dimensional bump. The method is easily implemented, as evidenced by our use of several existing codes.
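
    For a Dirichlet condition, the ghost-cell idea reduces in one dimension to linear extrapolation through the boundary; the sketch below only illustrates the principle, while the actual method reconstructs values in two and three dimensions.

        def ghost_value_dirichlet(phi_interior, phi_boundary):
            """Ghost node placed symmetrically across the boundary from the
            interior node, so that (phi_ghost + phi_interior) / 2 = phi_boundary."""
            return 2.0 * phi_boundary - phi_interior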

  8. Colorimetric method for enzymatic screening assay of ATP using Fe(III)-xylenol orange complex formation.

    Science.gov (United States)

    Ishida, Akihiko; Yamada, Yasuko; Kamidate, Tamio

    2008-11-01

    In hygiene management, there has recently been a significant need for screening methods for microbial contamination based on visual observation or commonly used colorimetric apparatus. The amount of adenosine triphosphate (ATP) can serve as an index of microorganisms. This paper describes the development of a colorimetric method for the assay of ATP, using enzymatic cycling and Fe(III)-xylenol orange (XO) complex formation. The color characteristics of the Fe(III)-XO complexes, which show a distinct color change from yellow to purple, assist visual observation in screening work. In this method, a trace amount of ATP was converted to pyruvate, which was further amplified exponentially with coupled enzymatic reactions. Eventually, pyruvate was converted to the Fe(III)-XO complexes through the pyruvate oxidase reaction and Fe(II) oxidation. As the assay result, a yellow or purple color was observed: a yellow color indicates that the ATP concentration is lower than the criterion of the test, and a purple color indicates that the ATP concentration is higher than the criterion. The method was applied to the assay of ATP extracted from Escherichia coli cells added to cow milk.

  9. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    Science.gov (United States)

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  10. ADVANTAGES OF RAPID METHOD FOR DETERMINING SCALE MASS AND DECARBURIZED LAYER OF ROLLED COIL STEEL

    Directory of Open Access Journals (Sweden)

    E. V. Parusov

    2016-08-01

    Full Text Available Purpose. To determine universal empirical relationships that allow operational calculation of scale mass and decarburized layer depth based on the parameters of the technological process for rolled coil steel production. Methodology. The research was carried out on industrial batches of rolled steel of the SAE 1006 and SAE 1065 grades. Scale removability was determined in accordance with the procedure of the «Bekaert» company by the specifications GA-03-16, GA-03-18, GS-03-02, and GS-06-01. The depth of the decarburized layer was identified in accordance with GOST 1763-68 (method M). Findings. Analysis of the experimental data allowed us to determine the rational coil formation temperatures of the investigated steel grades, which provide the best possible removal of scale from the metal surface, a minimal amount of scale, as well as compliance of the metal surface color with the requirements of European consumers. Originality. The work established correlations of the basic quality indicators of rolled coil high-carbon steel (scale mass, depth of the decarburized layer, and interlamellar distance in pearlite) with one of the main parameters (coil formation temperature) of the deformation and heat treatment mode. The resulting regression equations can be used, without metallographic analysis, to determine with minimum error the quantitative values of the total scale mass, the depth of the decarburized layer, and the average interlamellar distance in pearlite of rolled coil high-carbon steel. Practical value. Based on the specifications of the «Bekaert» company (GA-03-16, GA-03-18, GS-03-02 and GS-06-01), a method of testing scale removability by mechanical means from the surface of rolled coil steel of low- and high-carbon grades was developed and approved in the environment of PJSC «ArcelorMittal Kryvyi Rih». The work resulted in the development of a rapid method for determination of total and remaining scale mass on the rolled coil steel

  11. Functional analytic methods in complex analysis and applications to partial differential equations

    International Nuclear Information System (INIS)

    Mshimba, A.S.A.; Tutschke, W.

    1990-01-01

    The volume contains 24 lectures given at the Workshop on Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations held in Trieste, Italy, between 8-19 February 1988, at the ICTP. A separate abstract was prepared for each of these lectures. Refs and figs

  12. Evaluating polymer degradation with complex mixtures using a simplified surface area method.

    Science.gov (United States)

    Steele, Kandace M; Pelham, Todd; Phalen, Robert N

    2017-09-01

    Chemical-resistant gloves, designed to protect workers from chemical hazards, are made from a variety of polymer materials such as plastic, rubber, and synthetic rubber. One material does not provide protection against all chemicals; thus, proper polymer selection is critical. Standardized testing, such as chemical degradation tests, is used to aid in the selection process. Current methods of degradation rating, based on changes in weight or tensile properties, can be expensive, and data often do not exist for complex chemical mixtures. There are hundreds of thousands of chemical products on the market that lack chemical resistance data for polymer selection. The method described in this study provides an inexpensive alternative to gravimetric analysis. It uses surface area change to evaluate degradation of a polymer material. Degradation tests for 5 polymer types against 50 complex mixtures were conducted using both the gravimetric and surface area methods, and the percent change data were compared between the two methods. The resulting regression line was y = 0.48x + 0.019, in units of percent, and the Pearson correlation coefficient was r = 0.9537 (p ≤ 0.05), which indicated a strong correlation between percent weight change and percent surface area change. On average, the percent change in surface area was about half that of the weight change. Using this information, an equivalent rating system was developed for determining the chemical degradation of polymer gloves using surface area.
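
    Applying the reported fit in practice amounts to inverting area = 0.48 x weight + 0.019 (both in percent) and rating the equivalent weight change against an existing weight-based scale; the rating labels and cut-offs below are hypothetical.

        def weight_change_from_area(area_change_pct):
            """Equivalent percent weight change, inverting the reported fit
            area = 0.48 * weight + 0.019 (units of percent)."""
            return (area_change_pct - 0.019) / 0.48

        def degradation_rating(area_change_pct, cutoffs=(10.0, 25.0)):
            w = weight_change_from_area(area_change_pct)
            return "good" if w < cutoffs[0] else "fair" if w < cutoffs[1] else "poor"

        print(degradation_rating(6.0))   # ~12.5% equivalent weight change -> "fair"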

  13. Integrated complex care coordination for children with medical complexity: A mixed-methods evaluation of tertiary care-community collaboration

    Directory of Open Access Journals (Sweden)

    Cohen Eyal

    2012-10-01

    Full Text Available Background Primary care medical homes may improve health outcomes for children with special healthcare needs (CSHCN) by improving care coordination. However, community-based primary care practices may be challenged to deliver comprehensive care coordination to complex subsets of CSHCN such as children with medical complexity (CMC). Linking a tertiary care center with the community may achieve cost-effective and high-quality care for CMC. The objective of this study was to evaluate the outcomes of community-based complex care clinics integrated with a tertiary care center. Methods A before- and after-intervention study design with mixed (quantitative/qualitative) methods was utilized. Clinics at two community hospitals distant from tertiary care were staffed by local community pediatricians with the tertiary care center nurse practitioner and linked with primary care providers. Eighty-one children with underlying chronic conditions, fragility, requirement for high-intensity care and/or technology assistance, and involvement of multiple providers participated. Main outcome measures included health care utilization and expenditures, parent reports of parent- and child-quality of life [QOL (SF-36®, CPCHILD©, PedsQL™)], and family-centered care (MPOC-20®). Comparisons were made in equal (up to 1 year) pre- and post-periods, supplemented by qualitative perspectives of families and pediatricians. Results Total health care system costs decreased from a median (IQR) of $244 (981) per patient per month (PPPM) pre-enrolment to $131 (355) PPPM post-enrolment (p=.007), driven primarily by fewer inpatient days in the tertiary care center (p=.006). Parents reported decreased out-of-pocket expenses and improvement in CPCHILD© domains [Health Standardization Section (p=.04); Comfort and Emotions (p=.03)], while the total CPCHILD© score decreased between baseline and 1 year (p=.003). Parents and providers reported the ability to receive care close to home as a key benefit. Conclusions Complex

  14. The application of HP-GFC chromatographic method for the analysis of oligosaccharides in bioactive complexes

    Directory of Open Access Journals (Sweden)

    Savić Ivan

    2009-01-01

    Full Text Available The aim of this work was to optimize a GFC method for the analysis of bioactive metal (Cu, Co and Fe) complexes with oligosaccharides (dextran and pullulan). The bioactive metal complexes with oligosaccharides were synthesized by an original procedure. GFC was used to study the molecular weight distribution and polymerization degree of the oligosaccharides and the bioactive metal complexes. The metal binding in the complexes depends on the ligand polymerization degree and the presence of OH groups in the coordination sphere of the central metal ion. The interactions between oligosaccharides and metal ions are very important in veterinary medicine, agriculture, pharmacy and medicine.

  15. A new high-throughput LC-MS method for the analysis of complex fructan mixtures

    DEFF Research Database (Denmark)

    Verspreet, Joran; Hansen, Anders Holmgaard; Dornez, Emmie

    2014-01-01

    In this paper, a new liquid chromatography-mass spectrometry (LC-MS) method for the analysis of complex fructan mixtures is presented. In this method, columns with a trifunctional C18 alkyl stationary phase (T3) were used and their performance compared with that of a porous graphitized carbon (PGC...

  16. The Multi-Scale Network Landscape of Collaboration.

    Science.gov (United States)

    Bae, Arram; Park, Doheum; Ahn, Yong-Yeol; Park, Juyong

    2016-01-01

    Propelled by the increasing availability of large-scale high-quality data, advanced data modeling and analysis techniques are enabling many novel and significant scientific understandings of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena--which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from a large-scale, comprehensive data set of Compact Disc recordings. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, that represent the diversity of cultural styles and the individuality of the artists.

  17. Interchange Recognition Method Based on CNN

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2018-03-01

    Full Text Available The identification and classification of interchange structures in OSM data can provide important information for the construction of multi-scale models, navigation and location services, congestion analysis, etc. Traditional methods of interchange identification rely on hand-designed low-level characteristics and cannot effectively distinguish complex interchange structures containing interfering sections. In this paper, a new method based on a convolutional neural network is proposed for identifying interchanges. The method combines vector data with raster images, uses the neural network to learn the fuzzy characteristics of interchanges, and classifies the complex interchange structures in OSM. Experiments show that this method is robust to interference and achieves good results in the classification of complex interchange shapes, with room for further improvement as the case base expands and the neural network model is optimized.
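
    A minimal raster-tile classifier in the spirit of the method might look like the sketch below (PyTorch); the architecture, the 64x64 tile size, and the four structure classes are assumptions, and the vector-data branch is omitted.

        import torch
        import torch.nn as nn

        class InterchangeCNN(nn.Module):
            """Tiny CNN that labels rendered interchange tiles by structure type."""
            def __init__(self, n_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, n_classes)

            def forward(self, x):            # x: (batch, 1, 64, 64)
                x = self.features(x)
                return self.classifier(x.flatten(1))

        logits = InterchangeCNN()(torch.randn(8, 1, 64, 64))  # -> shape (8, 4)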

  18. Application of a non-contiguous grid generation method to complex configurations

    International Nuclear Information System (INIS)

    Chen, S.; McIlwain, S.; Khalid, M.

    2003-01-01

    An economical non-contiguous grid generation method was developed to efficiently generate structured grids for complex 3D problems. Compared with traditional contiguous grids, this new approach generated grids for different block clusters independently and was able to distribute the grid points more economically according to the user's specific topology design. The method was evaluated by applying it to a Navier-Stokes computation of flow past a hypersonic projectile. Both the flow velocity and the heat transfer characteristics of the projectile agreed qualitatively with other numerical data in the literature and with available field data. Detailed grid topology designs for 3D geometries were addressed, and the advantages of this approach were analysed and compared with traditional contiguous grid generation methods. (author)

  19. Analysis of the structure of complex networks at different resolution levels

    Energy Technology Data Exchange (ETDEWEB)

    Arenas, A.; Fernandez, A.; Gomez, S.

    2008-02-28

    Modular structure is ubiquitous in real-world complex networks, and its detection is important because it gives insights into the structure-functionality relationship. The standard approach is based on the optimization of a quality function, modularity, which is a relative quality measure for a partition of a network into modules. Recently, some authors have pointed out that the optimization of modularity has a fundamental drawback: the existence of a resolution limit beyond which no modular structure can be detected, even though these modules might have an entity of their own. The reason is that several topological descriptions of the network coexist at different scales, which is, in general, a fingerprint of complex systems. Here we propose a method that allows for multiple-resolution screening of the modular structure. The method has been validated using synthetic networks, discovering the predefined structures at all scales. Its application to two real social networks allows us to find the exact splits reported in the literature, as well as the substructure beyond the actual splits.
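
    A multiple-resolution screen can be illustrated by sweeping the resolution parameter of a modularity-based community algorithm and watching the partition change scale; Louvain (available in networkx >= 2.8) stands in here for the authors' method, which it is not.

        import networkx as nx

        g = nx.karate_club_graph()
        for r in (0.5, 1.0, 2.0, 4.0):
            parts = nx.community.louvain_communities(g, resolution=r, seed=0)
            print(f"resolution={r}: {len(parts)} modules")
        # Low resolution favors few large modules; high resolution reveals
        # finer substructure, which is the multi-scale screening idea.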

  20. Hexographic Method of Complex Town-Planning Terrain Estimate

    Science.gov (United States)

    Khudyakov, A. Ju

    2017-11-01

    The article deals with the vital problem of complex town-planning analysis based on the “hexographic” graphic-analytic method, makes a comparison with conventional terrain estimate methods, and contains examples of the method's application. It discloses the author's procedure for estimating restrictions and building a mathematical model that reflects not only conventional town-planning restrictions, but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly form an idea of a territory's potential, and an unlimited number of estimated factors can be used. The method can be used for the integrated assessment of urban areas. In addition, it can serve for the preliminary evaluation of a territory's commercial attractiveness in the preparation of investment projects. The technique yields simple, informative graphics whose interpretation is straightforward for experts. A definite advantage is that the results can also be readily perceived by non-professionals. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in the Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also