WorldWideScience

Sample records for method requires complex

  1. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept forms the basis for a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. It applies a new metric to evaluate requirements complexity, together with a double-sorting technique that evaluates the priority and complexity of each requirement. The method improves requirements correctness by identifying a higher number of defects with restricted resources. Practical application of the proposed method during requirements review yielded a tangible technical and economic effect.
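
    The double-sorting technique described above lends itself to a short sketch. The illustration below is hedged: the record fields, priority/complexity scales, and sort order are assumptions, not taken from the paper; the idea is simply that high-priority requirements are reviewed first and, within equal priority, the more complex ones first, complexity serving as a proxy for expected fault density.

```python
def rank_requirements(reqs):
    """Order requirements for review: highest priority first and, within
    equal priority, highest complexity first (more complex requirements
    are assumed to hide more actual and potential defects)."""
    return sorted(reqs, key=lambda r: (-r["priority"], -r["complexity"]))

# Illustrative requirement records (structure and values are hypothetical)
reqs = [
    {"id": "R1", "priority": 2, "complexity": 7},
    {"id": "R2", "priority": 3, "complexity": 4},
    {"id": "R3", "priority": 3, "complexity": 9},
]
review_order = [r["id"] for r in rank_requirements(reqs)]  # R3, R2, R1
```

    With a fixed review budget, walking this list from the top concentrates inspection effort where the metric predicts the most defects.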

  2. A Fluorine-18 Radiolabeling Method Enabled by Rhenium(I) Complexation Circumvents the Requirement of Anhydrous Conditions.

    Science.gov (United States)

    Klenner, Mitchell A; Pascali, Giancarlo; Zhang, Bo; Sia, Tiffany R; Spare, Lawson K; Krause-Heuer, Anwen M; Aldrich-Wright, Janice R; Greguric, Ivan; Guastella, Adam J; Massi, Massimiliano; Fraser, Benjamin H

    2017-05-11

    Azeotropic distillation is typically required to achieve fluorine-18 radiolabeling during the production of positron emission tomography (PET) imaging agents. However, this time-consuming process also limits fluorine-18 incorporation, due to radioactive decay of the isotope and its adsorption to the drying vessel. In addressing these limitations, the fluorine-18 radiolabeling of a model rhenium(I) complex is reported here, which is significantly improved under conditions that do not require azeotropic drying. This work could open a route towards the investigation of a simplified metal-mediated late-stage radiofluorination method, which would expand the accessibility of new PET and PET-optical probes. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must translate the requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility of misinterpretation of the systems engineer's intent, potentially leading to software errors. The problem is addressed by a systems engineering tool called the State Analysis Database, which captures system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  4. Enabling Requirements-Based Programming for Highly-Dependable Complex Parallel and Distributed Systems

    Science.gov (United States)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications for today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, so the recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.

  5. Satisfying positivity requirement in the Beyond Complex Langevin approach

    Directory of Open Access Journals (Sweden)

    Wyrzykowski Adam

    2018-01-01

    Full Text Available The problem of finding a positive distribution which corresponds to a given complex density is studied. By requiring that the moments of the positive distribution and of the complex density be equal, one can reduce the problem to solving the matching conditions. These conditions form a set of quadratic equations, so the Gröbner basis method was used to find their solutions when the problem is restricted to a few lowest-order moments. For a Gaussian complex density, these approximate solutions are compared with the exact solution, which is known in this special case.

  6. Satisfying positivity requirement in the Beyond Complex Langevin approach

    Science.gov (United States)

    Wyrzykowski, Adam; Ruba, Błażej

    2018-03-01

    The problem of finding a positive distribution which corresponds to a given complex density is studied. By requiring that the moments of the positive distribution and of the complex density be equal, one can reduce the problem to solving the matching conditions. These conditions form a set of quadratic equations, so the Gröbner basis method was used to find their solutions when the problem is restricted to a few lowest-order moments. For a Gaussian complex density, these approximate solutions are compared with the exact solution, which is known in this special case.
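
    The matching conditions can be written compactly. The sketch below is a hedged reconstruction (the notation is assumed, not quoted from the abstract): in complex Langevin studies the positive distribution P lives on the complex plane z = x + iy, and its holomorphic moments are required to reproduce the moments of the complex density ρ on the real line,

```latex
\langle z^{n} \rangle_{P}
  = \int \mathrm{d}x\,\mathrm{d}y \; P(x,y)\,(x+iy)^{n}
  \;\overset{!}{=}\;
  \int \mathrm{d}x \; \rho(x)\, x^{n}
  = \langle x^{n} \rangle_{\rho},
\qquad n = 1, \dots, N .
```

    Expanding P in a finite parametric ansatz turns these conditions into the quadratic system to which the Gröbner basis method is then applied.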

  7. Molecular photoionization using the complex Kohn variational method

    International Nuclear Information System (INIS)

    Lynch, D.L.; Schneider, B.I.

    1992-01-01

    We have applied the complex Kohn variational method to the study of molecular photoionization processes. This requires electron-ion scattering calculations enforcing incoming-wave boundary conditions. The sensitivity of these results to the choice of the cutoff function in the Kohn method has been studied, and we have demonstrated that a simple matching of the irregular function to a linear combination of regular functions produces accurate scattering phase shifts.

  8. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method, and compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from the pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, by contrast, needs O(N^2), where N (≅2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity, so the linear complexity is generally given only as an estimate. Because the linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of the linear complexity.
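
    For reference, the Berlekamp-Massey algorithm mentioned above computes the linear complexity of a concrete output sequence of length N in O(N^2) bit operations. A minimal GF(2) sketch (standard algorithm, not code from the paper):

```python
def berlekamp_massey(bits):
    """Linear complexity of a binary sequence via Berlekamp-Massey over GF(2)."""
    n = len(bits)
    c, b = [0] * (n + 1), [0] * (n + 1)   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                          # current LFSR length, last length-change index
    for i in range(n):
        # discrepancy between bits[i] and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:                # the register must grow
                L, m, b = i + 1 - L, i, t
    return L

# m-sequence from the 4-stage LFSR with connection polynomial 1 + x + x^4
seq = [1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0]
lc = berlekamp_massey(seq)  # 4
```

    The maximal-length sequence of a 4-stage LFSR is correctly assigned linear complexity 4; changing the seed changes the sequence and, in general, the estimate, which is exactly the initial-value dependence the abstract contrasts with the linearization method.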

  9. Lithography requirements in complex VLSI device fabrication

    International Nuclear Information System (INIS)

    Wilson, A.D.

    1985-01-01

    Fabrication of complex very large scale integration (VLSI) circuits requires continual advances in lithography to satisfy decreasing minimum linewidths, larger chip sizes, tighter linewidth and overlay control, increasing topography-to-linewidth ratios, higher yield demands, increased throughput, harsher device processing, lower lithography cost, and a larger part-number set with quick turn-around time. Where optical, electron beam, x-ray, and ion beam lithography can be judiciously applied to satisfy these complex VLSI circuit fabrication requirements is discussed, and those areas in need of major further advances are addressed. Emphasis is placed on advanced electron beam and storage-ring x-ray lithography.

  10. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    The last years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses challenging new problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians.

  11. Models, methods and software tools for building complex adaptive traffic systems

    International Nuclear Information System (INIS)

    Alyushin, S.A.

    2011-01-01

    The paper studies modern methods and tools for simulating the behavior of complex adaptive systems (CAS), along with existing traffic-modeling systems in simulators and their characteristics, and proposes requirements for assessing the suitability of a system to simulate CAS behavior in simulators. The author has developed a model of an adaptive agent and its operating environment that meets the requirements set out above, and presents methods for agents' interactions and for conflict resolution in simulated traffic situations. A simulation system realizing computer modeling of CAS behavior in traffic situations has been created.

  12. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented
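
    For orientation, the transformation at the heart of the method can be written down; this is the standard complex-coordinate form supplied here as background, not quoted from the abstract. Coordinates are dilated by a complex phase,

```latex
r \;\to\; r\,e^{i\theta},
\qquad
H(\theta) \;=\; -\tfrac{1}{2}\, e^{-2i\theta}\, \nabla^{2} \;+\; V\!\left(r\, e^{i\theta}\right),
```

    under which the continuous spectrum rotates by the angle $-2\theta$ about each threshold, while resonances appear as isolated complex eigenvalues $E = E_{r} - i\Gamma/2$ of the non-Hermitian $H(\theta)$, embedding bound states, resonances, and the rotated scattering background in a single spectral picture.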

  13. Automated Derivation of Complex System Constraints from User Requirements

    Science.gov (United States)

    Foshee, Mark; Murey, Kim; Marsh, Angela

    2010-01-01

    The Payload Operations Integration Center (POIC) located at the Marshall Space Flight Center has the responsibility of integrating US payload science requirements for the International Space Station (ISS). All payload operations must request ISS system resources so that the resource usage will be included in the ISS on-board execution timelines. The scheduling of resources and building of the timeline is performed using the Consolidated Planning System (CPS). The ISS resources are quite complex due to the large number of components that must be accounted for. The planners at the POIC simplify the process for Payload Developers (PDs) by providing them with an application that has the basic functionality PDs need, as well as a list of simplified resources, in the User Requirements Collection (URC) application. The planners maintain a mapping of the URC resources to the CPS resources. Manually converting a PD's science requirements from the simplified representation to the more complex CPS representation is a time-consuming and tedious process. The goal is to provide a software solution that allows the planners to build a mapping of the complex CPS constraints to the basic URC constraints and automatically converts the PDs' requirements into system requirements during export to CPS.
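
    The mapping-and-export step described above can be pictured with a small sketch. All resource names and conversion factors below are hypothetical illustrations invented for this example, not actual URC or CPS identifiers:

```python
# Hypothetical table: one simplified URC resource expands into one or more
# CPS-level resource constraints, each scaled by a conversion factor.
URC_TO_CPS = {
    "power_watts": [("EPS_bus_A_watts", 1.0)],
    "crew_time_min": [("crew_minutes", 1.0), ("crew_scheduling_slot", 1.0)],
}

def export_to_cps(simple_request):
    """Expand a Payload Developer's simplified URC resource request into the
    CPS-level constraints it maps to."""
    constraints = []
    for resource, amount in simple_request.items():
        for cps_name, factor in URC_TO_CPS[resource]:
            constraints.append((cps_name, amount * factor))
    return constraints

cps_constraints = export_to_cps({"power_watts": 120.0, "crew_time_min": 30.0})
```

    Once planners populate such a table, the tedious manual translation reduces to a mechanical expansion performed at export time.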

  14. Equivalence of the generalized and complex Kohn variational methods

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, J N; Armour, E A G [School of Mathematical Sciences, University Park, Nottingham NG7 2RD (United Kingdom); Plummer, M, E-mail: pmxjnc@googlemail.co [STFC Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)

    2010-04-30

    For Kohn variational calculations on low energy e⁺-H₂ elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.

  15. Equivalence of the generalized and complex Kohn variational methods

    International Nuclear Information System (INIS)

    Cooper, J N; Armour, E A G; Plummer, M

    2010-01-01

    For Kohn variational calculations on low energy e⁺-H₂ elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.

  16. Analytical Method to Estimate the Complex Permittivity of Oil Samples

    Directory of Open Access Journals (Sweden)

    Lijuan Su

    2018-03-01

    Full Text Available In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on measuring the transmission coefficient of an embedded microstrip line loaded with a complementary split ring resonator (CSRR) etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated by measuring the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.

  17. Complex finite element sensitivity method for creep analysis

    International Nuclear Information System (INIS)

    Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry

    2015-01-01

    The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties.
    Highlights:
    • A novel finite element sensitivity method (ZFEM) for creep was introduced.
    • ZFEM has the capability to calculate accurate partial derivatives.
    • ZFEM can be used for identification of the skeletal point of creep structures.
    • ZFEM can be easily implemented in a commercial software, e.g. Abaqus.
    • ZFEM results were shown to be in excellent agreement with analytical solutions.
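
    ZFEM's derivative capability rests on complex-variable differentiation. The sketch below illustrates only the underlying complex-step idea on a scalar power-law creep response; the constants are illustrative and this is not the Abaqus UEL described in the paper:

```python
def complex_step_derivative(f, x, h=1e-30):
    """Complex-step differentiation: f(x + ih) ≈ f(x) + ih f'(x), hence
    f'(x) ≈ Im f(x + ih) / h.  There is no subtractive cancellation, so h
    can be tiny and the derivative is accurate to machine precision."""
    return f(complex(x, h)).imag / h

# Norton power-law creep rate as a toy response: strain_rate = A * sigma**n
A, n = 1.0e-12, 5.0
creep_rate = lambda sigma: A * sigma ** n

sensitivity = complex_step_derivative(creep_rate, 100.0)  # d(strain_rate)/d(sigma)
```

    The same perturbation applied to nodal coordinates, material constants, or loads of a complex-valued finite element model yields shape, material, and loading sensitivities from a single run, which is the capability ZFEM builds into the element formulation.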

  18. Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder

    Directory of Open Access Journals (Sweden)

    He Yan

    2017-01-01

    Full Text Available In engineering design, the basic complex method has insufficient global search ability for nonlinear optimization problems, so a hybrid of the complex method with particle swarm optimization (PSO) is presented in this paper: the optimal particle, evaluated by the fitness function of the particle swarm, displaces a complex vertex so as to realize the optimization principle of the largest distance from the complex centroid. This method is applied to the constrained optimization design of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-section area of the box girder is taken as the objective function, its four size parameters as design variables, and the girder's mechanical performance, manufacturing process, boundary sizes and other requirements as constraint conditions. The complex method mixed with PSO is then used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and the optimal results achieve the goal of lightweight design and reduced crane manufacturing cost. Practical engineering calculation and comparative analysis with the basic complex method show that the approach is reliable, practical and efficient.
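
    The PSO half of the hybrid is easy to sketch. Below is a minimal global-best PSO applied to a toy two-variable stand-in for the girder problem (minimise a cross-section-like area under a strength-style constraint handled by a penalty term); the actual four-variable girder model and the coupling to the complex method are not reproduced here:

```python
import random

def pso(f, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best particle swarm optimizer with bound clamping."""
    rng = random.Random(seed)
    dim = len(lo)
    X = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                     # personal bests
    Pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                    # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo[d]), hi[d])
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

# Toy girder stand-in: minimise "area" x*y subject to x*y**2 >= 10 (penalised)
area = lambda v: v[0] * v[1] + 1e3 * max(0.0, 10.0 - v[0] * v[1] ** 2) ** 2
best, best_f = pso(area, lo=[0.1, 0.1], hi=[10.0, 10.0])
```

    In the hybrid of the paper, such a swarm supplies its best particle to replace a vertex of the complex, injecting global search into the otherwise local vertex-reflection scheme.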

  19. Design Analysis Method for Multidisciplinary Complex Product using SysML

    Directory of Open Access Journals (Sweden)

    Liu Jihong

    2017-01-01

    Full Text Available In the design of multidisciplinary complex products, model-based systems engineering methods are widely used. However, these methodologies contain only a modeling order and simple analysis steps, and lack an integrated design analysis method supporting the whole process. To solve this problem, a conceptual design analysis method integrating modern design methods is proposed. First, based on requirement analysis with a quantization matrix, the user's needs are quantitatively evaluated and translated into system requirements. Then, by function decomposition against a function knowledge base, the total function is semi-automatically decomposed into predefined atomic functions. Each function is matched to a predefined structure through the behaviour layer, using function-structure mapping based on interface matching. Finally, based on the design structure matrix (DSM), the structure reorganization is completed. The analysis process is implemented in SysML and illustrated through an aircraft air-conditioning system for validation.

  20. Complexity analysis of accelerated MCMC methods for Bayesian inversion

    International Nuclear Information System (INIS)

    Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M

    2013-01-01

    The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
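
    The "plain" MCMC baseline in the analysis can be pictured with a minimal random-walk Metropolis sampler. The sketch below substitutes a scalar surrogate G(u) = exp(-u) for the elliptic PDE forward solve, with a Gaussian prior and Gaussian noise model; all parameter values are illustrative:

```python
import math, random

def metropolis(log_post, u0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis: the 'plain MCMC' building block whose
    work-versus-accuracy the paper bounds (each step costs one forward solve)."""
    rng = random.Random(seed)
    u, lp = u0, log_post(u0)
    chain = []
    for _ in range(n_steps):
        v = u + step * rng.gauss(0.0, 1.0)
        lpv = log_post(v)
        if rng.random() < math.exp(min(0.0, lpv - lp)):   # accept/reject
            u, lp = v, lpv
        chain.append(u)
    return chain

# Scalar surrogate for the forward PDE map (illustrative stand-in): G(u) = exp(-u)
G = lambda u: math.exp(-u)
u_true, sigma = 1.0, 0.05
delta = G(u_true)                    # synthetic (noise-free) data
log_post = lambda u: -0.5 * ((delta - G(u)) / sigma) ** 2 - 0.5 * u ** 2  # Gaussian prior

chain = metropolis(log_post, u0=0.0, n_steps=20000)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])
```

    Every sample costs one evaluation of G; when G is a PDE solve of accuracy ε, balancing discretization error against sampling error is exactly the trade-off the complexity bounds quantify, and the surrogate and multi-level strategies of the paper attack the per-sample cost.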

  1. Evaluating the response of complex systems to environmental threats: the Σ II method

    International Nuclear Information System (INIS)

    Corynen, G.C.

    1983-05-01

    The Σ II method was developed to model and compute the probabilistic performance of systems that operate in a threatening environment. Although we emphasize the vulnerability of complex systems to earthquakes and to electromagnetic threats such as EMP (electromagnetic pulse), the method applies in general to most large-scale systems or networks that are embedded in a potentially harmful environment. Other methods exist for obtaining system vulnerability, but their complexity increases exponentially as the size of systems is increased. The complexity of the Σ II method is polynomial, and accurate solutions are now possible for problems for which current methods require the use of rough statistical bounds, confidence statements, and other approximations. For super-large problems, where the costs of precise answers may be prohibitive, a desired accuracy can be specified, and the Σ II algorithms will halt when that accuracy has been reached. We summarize the results of a theoretical complexity analysis - which is reported elsewhere - and validate the theory with computer experiments conducted both on worst-case academic problems and on more reasonable problems occurring in practice. Finally, we compare our method with the exact methods of Abraham and Nakazawa, and with current bounding methods, and we demonstrate the computational efficiency and accuracy of Σ II

  2. Information geometric methods for complexity

    Science.gov (United States)

    Felice, Domenico; Cafaro, Carlo; Mancini, Stefano

    2018-03-01

    Research on the use of information geometry (IG) in modern physics has witnessed significant advances recently. In this review article, we report on the utilization of IG methods to define measures of complexity in both classical and, whenever available, quantum physical settings. A paradigmatic example of a dramatic change in complexity is given by phase transitions (PTs). Hence, we review both global and local aspects of PTs described in terms of the scalar curvature of the parameter manifold and the components of the metric tensor, respectively. We also report on the behavior of geodesic paths on the parameter manifold used to gain insight into the dynamics of PTs. Going further, we survey measures of complexity arising in the geometric framework. In particular, we quantify complexity of networks in terms of the Riemannian volume of the parameter space of a statistical manifold associated with a given network. We are also concerned with complexity measures that account for the interactions of a given number of parts of a system that cannot be described in terms of a smaller number of parts of the system. Finally, we investigate complexity measures of entropic motion on curved statistical manifolds that arise from a probabilistic description of physical systems in the presence of limited information. The Kullback-Leibler divergence, the distance to an exponential family and volumes of curved parameter manifolds, are examples of essential IG notions exploited in our discussion of complexity. We conclude by discussing strengths, limits, and possible future applications of IG methods to the physics of complexity.
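
    Two of the IG notions named above can be written down directly; these are the standard definitions, supplied for orientation rather than quoted from the review. The Fisher-Rao metric on a statistical manifold with coordinates $\theta$ and the Kullback-Leibler divergence are

```latex
g_{ij}(\theta) \;=\; \int \mathrm{d}x \; p(x \mid \theta)\,
  \partial_{i} \log p(x \mid \theta)\, \partial_{j} \log p(x \mid \theta),
\qquad
D_{\mathrm{KL}}(p \,\|\, q) \;=\; \int \mathrm{d}x \; p(x) \log \frac{p(x)}{q(x)} .
```

    The scalar curvature, geodesics, and Riemannian volumes invoked as complexity measures in the review are those of the metric $g_{ij}$ on the parameter manifold.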

  3. Flow assurance : complex phase behavior and complex work requires confidence and vigilance

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.D. [ConocoPhillips, Major Projects, Advanced Integrated Simulation, Houston, TX (United States)

    2008-07-01

    Petroleum exploration and development projects and operations increasingly rely on flow assurance definition. Flow assurance is an integrating discipline, as it follows the fluid from the reservoir to the market. Flow assurance works across complex technical and non-technical interfaces, including the reservoir, well completions, operation processes, project management, physical/organic chemistry, fluid mechanics, chemical engineering, mechanical engineering and corrosion. The phase behaviour of real fluids likewise has complex interfaces. The understanding and management of complex phase behaviour must be well communicated in order to enable proper selection, execution, and operation of development concepts designed to manage successful production within the fluid's phase behaviour. Simulation tools facilitate the translation of science into engineering; academic, industrial, and field research is the core of these tools. The author cautioned that vigilance is required to assist in identifying the right time to move innovation into the core tools.

  4. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

    Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune-algorithm complex modeling approach is proposed, and simulation studies on the calibration of three multiple-output transducers are carried out using the developed complex modeling. The simulated and experimental results show that the immune-algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.

  5. Iterative methods for the solution of very large complex symmetric linear systems of equations in electrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Clemens, M.; Weiland, T. [Technische Hochschule Darmstadt (Germany)

    1996-12-31

    In the field of computational electrodynamics, the discretization of Maxwell's equations using the Finite Integration Theory (FIT) yields very large, sparse, complex symmetric linear systems of equations. For this class of complex non-Hermitian systems, a number of conjugate gradient-type algorithms are considered. The complex version of the biconjugate gradient (BiCG) method by Jacobs can be extended to a whole class of methods for complex-symmetric systems, SCBiCG(T, n), which require only one matrix-vector multiplication per iteration step. In this class, the well-known conjugate orthogonal conjugate gradient (COCG) method for complex-symmetric systems corresponds to the case n = 0. The case n = 1 yields the BiCGCR method, which corresponds to the conjugate residual algorithm for the real-valued case. These methods, in combination with a minimal residual smoothing process, are applied separately to practical 3D electro-quasistatic and eddy-current problems in electrodynamics. The practical performance of the SCBiCG methods is compared with that of other methods such as QMR and TFQMR.
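
    The n = 0 member of the class, COCG, is compact enough to sketch. Below is a hedged pure-Python illustration on a small dense complex-symmetric system (FIT discretizations yield large sparse ones, and production codes add preconditioning and smoothing; names here are illustrative):

```python
def cocg(A, b, tol=1e-12, max_iter=200):
    """Conjugate Orthogonal Conjugate Gradient for a complex *symmetric*
    matrix (A == A^T but generally A != A^H).  Identical in shape to
    classical CG, except every inner product is the unconjugated bilinear
    form u^T v; one matrix-vector product per iteration suffices."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dotT = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))   # no conjugation
    x = [0j] * n
    r = list(b)                       # residual for the zero initial guess
    p = list(r)
    rho = dotT(r, r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rho / dotT(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        rho, rho_old = dotT(r, r), rho
        p = [ri + (rho / rho_old) * pi for ri, pi in zip(r, p)]
    return x

# Small dense complex-symmetric example (note A equals its transpose)
A = [[4 + 1j, 1 + 0j, 0 + 0j],
     [1 + 0j, 3 + 2j, 1 + 0j],
     [0 + 0j, 1 + 0j, 2 + 1j]]
b = [1 + 0j, 2 + 0j, 0 + 0j]
x = cocg(A, b)
```

    Replacing the Hermitian inner product with the bilinear form is what lets the method exploit A = A^T; with the conjugated product one would instead need the two matrix-vector products per step of general BiCG.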

  6. Education requirements for nurses working with people with complex neurological conditions: nurses' perceptions.

    Science.gov (United States)

    Baker, Mark

    2012-01-01

    Following a service evaluation methodology, this paper reports on registered nurses' (RNs') and healthcare assistants' (HCAs') perceptions of the education and training required to work with people with complex neurological disabilities. A service evaluation was undertaken using a non-probability convenience sample of 368 nurses (n=110 RNs, n=258 HCAs) employed between October and November 2008 at one specialist hospital in south-west London in the UK. The main results show that respondents were clear about the need to develop an education and training programme for RNs and HCAs working in this specialty area (91% of RNs and 94% of HCAs). A variety of topics were identified for inclusion in a work-based education and training programme, such as positively managing challenging behaviour, moving and handling, and working with families. Adults with complex neurological conditions have diverse needs, and nurses working with this patient group therefore require diverse education and training in order to deliver quality patient-focused nursing care. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

    Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as to validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular Information Engineering (I.E.) deliverables with counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.

  8. Experimentation on accuracy of non functional requirement prioritization approaches for different complexity projects

    Directory of Open Access Journals (Sweden)

    Raj Kumar Chopra

    2016-09-01

    Full Text Available Non functional requirements must be selected for implementation together with functional requirements to enhance the success of software projects. Three approaches exist for performing the prioritization of non functional requirements using a suitable prioritization technique. This paper performs experimentation on three different complexity versions of an industrial software project using the cost-value prioritization technique employing the three approaches. Experimentation is conducted to analyze the accuracy of the individual approaches and the variation of accuracy with the complexity of the software project. The results indicate that selecting non functional requirements separately, but in accordance with functionality, has the highest accuracy among the three approaches. Further, like the other approaches, its accuracy decreases as software complexity increases, but the decrease is minimal.
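    At its core, cost-value prioritization ranks requirements by their value-to-cost ratio. A minimal sketch follows; the requirement names, values and costs are invented, not data from the paper's project.

    ```python
    # Minimal cost-value prioritization: order requirements by the ratio
    # of stakeholder value to implementation cost (illustrative data).
    def cost_value_rank(reqs):
        """reqs: iterable of (name, value, cost) tuples; returns names
        ordered by descending value-to-cost ratio."""
        return [name for name, value, cost in
                sorted(reqs, key=lambda r: r[1] / r[2], reverse=True)]

    nfrs = [("security", 30, 10), ("usability", 20, 5), ("portability", 10, 8)]
    print(cost_value_rank(nfrs))  # ['usability', 'security', 'portability']
    ```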

  9. Curcumin complexation with cyclodextrins by the autoclave process: Method development and characterization of complex formation.

    Science.gov (United States)

    Hagbani, Turki Al; Nazzal, Sami

    2017-03-30

    One approach to enhance curcumin (CUR) aqueous solubility is to use cyclodextrins (CDs) to form inclusion complexes where CUR is encapsulated as a guest molecule within the internal cavity of the water-soluble CD. Several methods have been reported for the complexation of CUR with CDs. Limited information, however, is available on the use of the autoclave process (AU) in complex formation. The aims of this work were therefore to (1) investigate and evaluate the AU cycle as a complex formation method to enhance CUR solubility; (2) compare the efficacy of the AU process with the freeze-drying (FD) and evaporation (EV) processes in complex formation; and (3) confirm CUR stability by characterizing CUR:CD complexes by NMR, Raman spectroscopy, DSC, and XRD. Significant differences were found in the saturation solubility of CUR from its complexes with CD when prepared by the three complexation methods. The AU yielded a complex with the expected chemical and physical fingerprints for a CUR:CD inclusion complex that maintained the chemical integrity and stability of CUR and provided the highest solubility of CUR in water. Physical and chemical characterizations of the AU complexes confirmed the encapsulation of CUR inside the CD cavity and the transformation of the crystalline CUR:CD inclusion complex to an amorphous form. It was concluded that the autoclave process, with its short processing time, could be used as an alternative and efficient method for drug:CD complexation. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three grades: low complexity, medium complexity and high complexity. Image feature extraction is then carried out, and finally a function is established between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from the image features. The results obtained by the proposed method are in good agreement with human visual perception of complexity, so the color image complexity measure has a certain reference value.
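    As a rough illustration of a color-based complexity measure, the sketch below uses the Shannon entropy of a quantized colour histogram as the complexity value. This particular feature is an assumption standing in for the paper's unspecified colour characteristic model: a single-colour image scores zero, and richer colour content scores higher.

    ```python
    import numpy as np

    def color_complexity(img, bins=8):
        """img: H x W x 3 uint8 array; returns colour-histogram entropy in bits."""
        q = (img.reshape(-1, 3) // (256 // bins)).astype(np.int64)  # quantize channels
        codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]    # joint colour code
        p = np.bincount(codes, minlength=bins ** 3) / codes.size    # histogram
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())                       # Shannon entropy

    rng = np.random.default_rng(1)
    flat = np.zeros((16, 16, 3), dtype=np.uint8)                    # single colour
    noisy = rng.integers(0, 256, (16, 16, 3), dtype=np.uint8)       # colour noise
    print(color_complexity(flat) < color_complexity(noisy))         # True
    ```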

  11. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    Science.gov (United States)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and validation method for the requirements are introduced in detail, in the hope of providing this experience to other civil jet product designs.

  12. A Qualitative Method to Estimate HSI Display Complexity

    International Nuclear Information System (INIS)

    Hugo, Jacques; Gertman, David

    2013-01-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  13. Nuclear localization of Schizosaccharomyces pombe Mcm2/Cdc19p requires MCM complex assembly.

    Science.gov (United States)

    Pasion, S G; Forsburg, S L

    1999-12-01

    The minichromosome maintenance (MCM) proteins MCM2-MCM7 are conserved eukaryotic replication factors that assemble in a heterohexameric complex. In fission yeast, these proteins are nuclear throughout the cell cycle. In studying the mechanism that regulates assembly of the MCM complex, we analyzed the cis and trans elements required for nuclear localization of a single subunit, Mcm2p. Mutation of any single mcm gene leads to redistribution of wild-type MCM subunits to the cytoplasm, and this redistribution depends on an active nuclear export system. We identified the nuclear localization signal sequences of Mcm2p and showed that these are required for nuclear targeting of other MCM subunits. In turn, Mcm2p must associate with other MCM proteins for its proper localization; nuclear localization of MCM proteins thus requires assembly of MCM proteins in a complex. We suggest that coupling complex assembly to nuclear targeting and retention ensures that only intact heterohexameric MCM complexes remain nuclear.

  14. Structuring requirements as necessary premise for customer-oriented development of complex products: A generic approach

    Directory of Open Access Journals (Sweden)

    Sandra Klute

    2011-10-01

    Full Text Available Purpose: Complex products such as intra-logistical facilities make high demands on developers and producers and involve high investment and operating costs. When planning and developing, and also when making buying decisions, the facility utilization and the ensuing requirements on the facility and its components have to date been inadequately considered. Nevertheless, with regard to customer-directed product design, these requirements must all be taken into account – especially as they can contribute to possible savings. In this context, it is necessary to survey and systematically regard requirements from a large number of areas, for example those of the operator and the facility producer, and also requirements of external parties such as the law, and to translate them into adequate product characteristics to produce customer-oriented products. This is, however, a difficult task because of the diversity of stakeholders involved and their numerous and often divergent requirements. Therefore, it is essential to structure the requirements so that planners and developers are able to manage the large amount of information. Structure models can be used in this context to cluster requirements. Within the German Collaborative Research Centre 696 a 10-dimensional model has been developed. This model allows structuring of all requirements on intra-logistical facilities, or respectively on complex products in general. In the context of dealing with hundreds of data records, structuring requirements is mandatory to achieve accuracy, clarity and consequently satisfactory results when transforming requirements into product characteristics which fit customer needs. In the paper an excerpt of this model is presented. Design/methodology/approach: In literature a multitude of methods which deal with the topic of structuring exist. The methods have been analysed regarding their purpose and their level of specification, i.e. the number of differentiated categories, to check if

  15. Supplemental design requirements document solid waste operations complex

    International Nuclear Information System (INIS)

    Ocampo, V.P.; Boothe, G.F.; Broz, D.R.; Eaton, H.E.; Greager, T.M.; Huckfeldt, R.A.; Kooiker, S.L.; Lamberd, D.L.; Lang, L.L.; Myers, J.B.

    1994-11-01

    This document provides additional and supplemental information to the WHC-SD-W112-FDC-001, WHC-SD-W113-FDC-001, and WHC-SD-W100-FDC-001. It provides additional requirements for the design and summarizes Westinghouse Hanford Company key design guidance and establishes the technical baseline agreements to be used for definitive design common to the Solid Waste Operations Complex (SWOC) Facilities (Project W-112, Project W-113, and WRAP 2A)

  16. Knowledge based method for solving complexity in design problems

    NARCIS (Netherlands)

    Vermeulen, B.

    2007-01-01

    The process of design aircraft systems is becoming more and more complex, due to an increasing amount of requirements. Moreover, the knowledge on how to solve these complex design problems becomes less readily available, because of a decrease in availability of intellectual resources and reduced

  17. 77 FR 50726 - Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in...

    Science.gov (United States)

    2012-08-22

    ... Computer Software and Complex Electronics Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear...-1209, ``Software Requirement Specifications for Digital Computer Software and Complex Electronics used... Electronics Engineers (ANSI/IEEE) Standard 830-1998, ``IEEE Recommended Practice for Software Requirements...

  18. A Qualitative Method to Estimate HSI Display Complexity

    Energy Technology Data Exchange (ETDEWEB)

    Hugo, Jacques; Gertman, David [Idaho National Laboratory, Idaho (United States)

    2013-04-15

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  19. Polar localization of Escherichia coli chemoreceptors requires an intact Tol–Pal complex

    Science.gov (United States)

    Santos, Thiago M. A.; Lin, Ti-Yu; Rajendran, Madhusudan; Anderson, Samantha M.; Weibel, Douglas B.

    2014-01-01

    Summary Subcellular biomolecular localization is critical for the metabolic and structural properties of the cell. The functional implications of the spatiotemporal distribution of protein complexes during the bacterial cell cycle have long been acknowledged; however, the molecular mechanisms for generating and maintaining their dynamic localization in bacteria are not completely understood. Here we demonstrate that the trans-envelope Tol–Pal complex, a widely conserved component of the cell envelope of Gram-negative bacteria, is required to maintain the polar positioning of chemoreceptor clusters in Escherichia coli. Localization of the chemoreceptors was independent of phospholipid composition of the membrane and the curvature of the cell wall. Instead, our data indicate that chemoreceptors interact with components of the Tol–Pal complex and that this interaction is required to polarly localize chemoreceptor clusters. We found that disruption of the Tol–Pal complex perturbs the polar localization of chemoreceptors, alters cell motility, and affects chemotaxis. We propose that the E. coli Tol–Pal complex restricts mobility of the chemoreceptor clusters at the cell poles and may be involved in regulatory mechanisms that co-ordinate cell division and segregation of the chemosensory machinery. PMID:24720726

  20. High-resolution method for evolving complex interface networks

    Science.gov (United States)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method, while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to the previous interface-network level-set method.

  1. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
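    For labeled graphs on a common vertex set, the cut distance can be computed exactly by brute force over all subset pairs, which clarifies the quantity that the paper's machine-learning search approximates at scale. The sketch and toy graphs below are illustrative; the exponential enumeration is only feasible for tiny graphs.

    ```python
    from itertools import combinations

    def cut_distance(G, H):
        """Exact cut distance between two labeled graphs given as adjacency
        matrices on the same vertex set (exponential in n: toy sizes only)."""
        n = len(G)
        subsets = [set(c) for r in range(n + 1) for c in combinations(range(n), r)]

        def e(A, S, T):                         # edge mass from S to T
            return sum(A[i][j] for i in S for j in T)

        return max(abs(e(G, S, T) - e(H, S, T))
                   for S in subsets for T in subsets) / n ** 2

    path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]    # path a-b-c
    tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]     # triangle a-b-c
    print(cut_distance(path, path))             # 0.0
    print(cut_distance(path, tri) > 0)          # True
    ```

    Identical graphs are at distance zero, and structurally different graphs separate, which is the similarity/dissimilarity behaviour the comparison method relies on.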

  2. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    Directory of Open Access Journals (Sweden)

    Hassan Saberi Nik

    2014-01-01

    Full Text Available We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel type relaxation ideas to systems of nonlinear differential equations and using the Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta based ode45 solver to show that the MSRM gives accurate results.

  3. A method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix

    International Nuclear Information System (INIS)

    Godfrin, Elena

    1990-01-01

    This paper presents a method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix using adequate partitions of the complete matrix. This type of matrix is very usual in quantum mechanics and, more specifically, in solid state physics (e.g., interfaces and superlattices), when the tight-binding approximation is used. The efficiency of the method is analyzed comparing the required CPU time and work-area for different usual techniques. (Author)
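    A closely related standard technique, block-Thomas elimination, illustrates how adequate partitions of a block-tridiagonal matrix are exploited. The sketch below solves a complex block-tridiagonal system rather than forming the full inverse as the paper does; the block sizes and random test data are assumptions for illustration.

    ```python
    import numpy as np

    def block_thomas(D, L, U, b):
        """Solve a block-tridiagonal system with diagonal blocks D[0..n-1],
        sub-diagonal blocks L[0..n-2] and super-diagonal blocks U[0..n-2]."""
        n = len(D)
        Dh, bh = [D[0]], [b[0]]
        for i in range(1, n):                          # forward elimination
            W = L[i - 1] @ np.linalg.inv(Dh[i - 1])
            Dh.append(D[i] - W @ U[i - 1])
            bh.append(b[i] - W @ bh[i - 1])
        x = [np.linalg.solve(Dh[-1], bh[-1])]          # back substitution
        for i in range(n - 2, -1, -1):
            x.insert(0, np.linalg.solve(Dh[i], bh[i] - U[i] @ x[0]))
        return np.concatenate(x)

    rng = np.random.default_rng(0)
    m, nb = 2, 3                                       # block size, number of blocks
    cplx = lambda s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
    D = [cplx((m, m)) + 4 * np.eye(m) for _ in range(nb)]   # diagonally dominant
    L = [0.1 * cplx((m, m)) for _ in range(nb - 1)]
    U = [0.1 * cplx((m, m)) for _ in range(nb - 1)]
    b = [cplx(m) for _ in range(nb)]
    x = block_thomas(D, L, U, b)
    ```

    Assembling the full matrix and checking the residual confirms the solve; only the block partitions, never the full matrix, are needed during elimination, which is the source of the CPU time and work-area savings the paper analyzes.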

  4. Leading healthcare in complexity.

    Science.gov (United States)

    Cohn, Jeffrey

    2014-12-01

    Healthcare institutions and providers are in complexity. Networks of interconnections from relationships and technology create conditions in which interdependencies and non-linear dynamics lead to surprising, unpredictable outcomes. Previously effective approaches to leadership, focusing on top-down bureaucratic methods, are no longer effective. Leading in complexity requires leaders to accept the complexity, create an adaptive space in which innovation and creativity can flourish, and then integrate the successful practices that emerge into the formal organizational structure. Several methods for doing adaptive space work will be discussed. Readers will be able to contrast traditional leadership approaches with leading in complexity. They will learn new behaviours that are required of complexity leaders, along with challenges they will face, often from other leaders within the organization.

  5. Continuum Level Density in Complex Scaling Method

    International Nuclear Information System (INIS)

    Suzuki, R.; Myo, T.; Kato, K.

    2005-01-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.

  6. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    Science.gov (United States)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
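    The "obvious approach" the abstract mentions, solving an equivalent real linear system for the real and imaginary parts of x, can be sketched directly. A dense direct solve is used here for brevity in place of a Krylov iteration; the example matrix is invented.

    ```python
    import numpy as np

    def solve_via_real_form(A, b):
        """Solve A x = b for complex A via the equivalent 2n x 2n real system:
        (M + iN)(u + iv) = c + id  ->  [[M, -N], [N, M]] [u; v] = [c; d]."""
        M, N = A.real, A.imag
        K = np.block([[M, -N], [N, M]])
        uv = np.linalg.solve(K, np.concatenate([b.real, b.imag]))
        n = A.shape[0]
        return uv[:n] + 1j * uv[n:]

    A = np.array([[2 + 1j, 1 + 0j], [0 + 0j, 3 - 2j]])
    b = np.array([1 + 0j, 0 + 1j])
    x = solve_via_real_form(A, b)
    print(np.allclose(A @ x, b))  # True
    ```

    The real formulation doubles the system size and can destroy structure such as complex symmetry, which is one reason the paper develops Krylov methods that work on the complex system directly.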

  7. Experimentation on accuracy of non functional requirement prioritization approaches for different complexity projects

    OpenAIRE

    Raj Kumar Chopra; Varun Gupta; Durg Singh Chauhan

    2016-01-01

    Non functional requirements must be selected for implementation together with functional requirements to enhance the success of software projects. Three approaches exist for performing the prioritization of non functional requirements using the suitable prioritization technique. This paper performs experimentation on three different complexity versions of the industrial software project using cost-value prioritization technique employing three approaches. Experimentation is conducted to analy...

  8. Benchmarking of London Dispersion-Accounting Density Functional Theory Methods on Very Large Molecular Complexes.

    Science.gov (United States)

    Risthaus, Tobias; Grimme, Stefan

    2013-03-12

    A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the largeness of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated in that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too small molecular cases.

  9. Functional Requirements for Fab-7 Boundary Activity in the Bithorax Complex

    Science.gov (United States)

    Wolle, Daniel; Cleard, Fabienne; Aoki, Tsutomu; Deshpande, Girish; Karch, Francois

    2015-01-01

    Chromatin boundaries are architectural elements that determine the three-dimensional folding of the chromatin fiber and organize the chromosome into independent units of genetic activity. The Fab-7 boundary from the Drosophila bithorax complex (BX-C) is required for the parasegment-specific expression of the Abd-B gene. We have used a replacement strategy to identify sequences that are necessary and sufficient for Fab-7 boundary function in the BX-C. Fab-7 boundary activity is known to depend on factors that are stage specific, and we describe a novel ∼700-kDa complex, the late boundary complex (LBC), that binds to Fab-7 sequences that have insulator functions in late embryos and adults. We show that the LBC is enriched in nuclear extracts from late, but not early, embryos and that it contains three insulator proteins, GAF, Mod(mdg4), and E(y)2. Its DNA binding properties are unusual in that it requires a minimal sequence of >65 bp; however, other than a GAGA motif, the three Fab-7 LBC recognition elements display few sequence similarities. Finally, we show that mutations which abrogate LBC binding in vitro inactivate the Fab-7 boundary in the BX-C. PMID:26303531

  10. Functional Requirements for Fab-7 Boundary Activity in the Bithorax Complex.

    Science.gov (United States)

    Wolle, Daniel; Cleard, Fabienne; Aoki, Tsutomu; Deshpande, Girish; Schedl, Paul; Karch, Francois

    2015-11-01

    Chromatin boundaries are architectural elements that determine the three-dimensional folding of the chromatin fiber and organize the chromosome into independent units of genetic activity. The Fab-7 boundary from the Drosophila bithorax complex (BX-C) is required for the parasegment-specific expression of the Abd-B gene. We have used a replacement strategy to identify sequences that are necessary and sufficient for Fab-7 boundary function in the BX-C. Fab-7 boundary activity is known to depend on factors that are stage specific, and we describe a novel ∼700-kDa complex, the late boundary complex (LBC), that binds to Fab-7 sequences that have insulator functions in late embryos and adults. We show that the LBC is enriched in nuclear extracts from late, but not early, embryos and that it contains three insulator proteins, GAF, Mod(mdg4), and E(y)2. Its DNA binding properties are unusual in that it requires a minimal sequence of >65 bp; however, other than a GAGA motif, the three Fab-7 LBC recognition elements display few sequence similarities. Finally, we show that mutations which abrogate LBC binding in vitro inactivate the Fab-7 boundary in the BX-C. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  11. 40 CFR 180.1022 - Iodine-detergent complex; exemption from the requirement of a tolerance.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Iodine-detergent complex; exemption... FOOD Exemptions From Tolerances § 180.1022 Iodine-detergent complex; exemption from the requirement of a tolerance. The aqueous solution of hydriodic acid and elemental iodine, including one or both of...

  12. Germination and seedling establishment in orchids: a complex of requirements.

    Science.gov (United States)

    Rasmussen, Hanne N; Dixon, Kingsley W; Jersáková, Jana; Těšitelová, Tamara

    2015-09-01

    Seedling recruitment is essential to the sustainability of any plant population. Due to the minute nature of seeds and early-stage seedlings, orchid germination in situ was for a long time practically impossible to observe, creating an obstacle towards understanding seedling site requirements and fluctuations in orchid populations. The introduction of seed packet techniques for sowing and retrieval in natural sites has brought with it important insights, but many aspects of orchid seed and germination biology remain largely unexplored. The germination niche for orchids is extremely complex, because it is defined by requirements not only for seed lodging and germination, but also for presence of a fungal host and its substrate. A mycobiont that the seedling can parasitize is considered an essential element, and a great diversity of Basidiomycota and Ascomycota have now been identified for their role in orchid seed germination, with fungi identifiable as imperfect Rhizoctonia species predominating. Specificity patterns vary from orchid species employing a single fungal lineage to species associating individually with a limited selection of distantly related fungi. A suitable organic carbon source for the mycobiont constitutes another key requirement. Orchid germination also relies on factors that generally influence the success of plant seeds, both abiotic, such as light/shade, moisture, substrate chemistry and texture, and biotic, such as competitors and antagonists. Complexity is furthermore increased when these factors influence seeds/seedling, fungi and fungal substrate differentially. A better understanding of germination and seedling establishment is needed for conservation of orchid populations. Due to the obligate association with a mycobiont, the germination niches in orchid species are extremely complex and varied. Microsites suitable for germination can be small and transient, and direct observation is difficult. An experimental approach using several

  13. Predicting protein complexes using a supervised learning method combined with local structural information.

    Science.gov (United States)

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
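    The idea of combining a classification model with local structural information can be sketched as a weighted score. The density term, the stubbed classifier probability, the blending weight and the toy network below are all illustrative assumptions, not the paper's actual score function.

    ```python
    def density(nodes, edges):
        """Edge density of the subgraph induced by `nodes` (undirected)."""
        n = len(nodes)
        if n < 2:
            return 0.0
        inside = sum(1 for u, v in edges if u in nodes and v in nodes)
        return 2.0 * inside / (n * (n - 1))

    def combined_score(nodes, edges, clf_prob, alpha=0.5):
        """Blend a (stubbed) classifier probability with subgraph density;
        alpha weights the learned model against local structure."""
        return alpha * clf_prob + (1 - alpha) * density(nodes, edges)

    # Toy PPI network: a dense triangle a-b-c plus a pendant node d.
    edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
    print(combined_score({"a", "b", "c"}, edges, clf_prob=0.8))  # 0.9
    ```

    A search procedure can then grow or shrink candidate node sets forwards and backwards, keeping the set whenever the combined score improves.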

  14. Level density in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki

    2005-01-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)

  15. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  16. Low-complexity computation of plate eigenmodes with Vekua approximations and the method of particular solutions

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent

    2013-11-01

    This paper extends the method of particular solutions (MPS) to the computation of eigenfrequencies and eigenmodes of thin plates, in the framework of the Kirchhoff-Love plate theory. Specific approximation schemes are developed, with plane waves (MPS-PW) or Fourier-Bessel functions (MPS-FB). This framework also requires a suitable formulation of the boundary conditions. Numerical tests, on two plates with various boundary conditions, demonstrate that the proposed approach provides competitive results with standard numerical schemes such as the finite element method, at reduced complexity, and with large flexibility in the implementation choices.

  17. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Science.gov (United States)

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to find out the exact cause and manner of death in non-natural deaths. In this context, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods to cause death is unplanned. There is an increased likelihood that doubts of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported in which the victim resorted to multiple methods to end his life, in what appeared to be an unplanned variant based on the death scene investigation. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  18. Complexity and accuracy of image registration methods in SPECT-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yin, L S; Duzenli, C; Moiseenko, V [Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC, V6T 1Z1 (Canada); Tang, L; Hamarneh, G [Computing Science, Simon Fraser University, 9400 TASC1, Burnaby, BC, V5A 1S6 (Canada); Gill, B [Medical Physics, Vancouver Cancer Centre, BC Cancer Agency, 600 West 10th Ave, Vancouver, BC, V5Z 4E6 (Canada); Celler, A; Shcherbinin, S [Department of Radiology, University of British Columbia, 828 West 10th Ave, Vancouver, BC, V5Z 1L8 (Canada); Fua, T F; Thompson, A; Sheehan, F [Radiation Oncology, Vancouver Cancer Centre, BC Cancer Agency, 600 West 10th Ave, Vancouver, BC, V5Z 4E6 (Canada); Liu, M [Radiation Oncology, Fraser Valley Cancer Centre, BC Cancer Agency, 13750 9th Ave, Surrey, BC, V3V 1Z2 (Canada)], E-mail: lyin@bccancer.bc.ca

    2010-01-07

    The use of functional imaging in radiotherapy treatment (RT) planning requires accurate co-registration of functional imaging scans to CT scans. We evaluated six methods of image registration for use in SPECT-guided radiotherapy treatment planning. Methods varied in complexity from a 3D affine transform based on control points to diffeomorphic demons and level set non-rigid registration. Ten lung cancer patients underwent perfusion SPECT scans prior to their radiotherapy. CT images from a hybrid SPECT/CT scanner were registered to a planning CT, and the same transformation was then applied to the SPECT images. According to registration evaluation measures computed from the intensity difference between the registered CT images or from target registration error, non-rigid registrations provided a higher degree of accuracy than rigid methods. However, due to irregularities in some of the obtained deformation fields, warping the SPECT using these fields may result in unacceptable changes to the SPECT intensity distribution that would preclude use in RT planning. Moreover, the differences between intensity histograms in the original and registered SPECT image sets were largest for the diffeomorphic demons and level set methods. In conclusion, the use of intensity-based validation measures alone is not sufficient for SPECT/CT registration for RT treatment planning. It was also found that proper evaluation of image registration requires the use of several accuracy metrics.
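
    The caution about intensity-based validation can be illustrated with a toy sketch (the arrays below stand in for registered image slices and are hypothetical, not the study's data): two images with identical intensity histograms can still disagree voxel by voxel, so histogram agreement alone says little about registration accuracy.

```python
# Two intensity-based validation measures, in miniature.

def mean_abs_diff(a, b):
    """Mean absolute intensity difference between two images (flat lists)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def histogram(img, bins, lo, hi):
    """Simple fixed-range intensity histogram."""
    h = [0] * bins
    w = (hi - lo) / bins
    for v in img:
        h[min(int((v - lo) / w), bins - 1)] += 1
    return h

# Two 'images' with identical histograms but different spatial structure:
img_a = [0, 0, 1, 1, 2, 2, 3, 3]
img_b = [3, 3, 2, 2, 1, 1, 0, 0]

print(mean_abs_diff(img_a, img_b))                              # 2.0: large voxelwise mismatch
print(histogram(img_a, 4, 0, 4) == histogram(img_b, 4, 0, 4))   # True: histograms agree anyway
```

    This is exactly why the authors conclude that several accuracy metrics are needed rather than intensity histograms alone.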

  19. The Sources and Methods of Engineering Design Requirement

    DEFF Research Database (Denmark)

    Li, Xuemeng; Zhang, Zhinan; Ahmed-Kristensen, Saeema

    2014-01-01

    to be defined in a new context. This paper focuses on understanding the design requirement sources at the requirement elicitation phase. It aims at proposing an improved design requirement source classification considering emerging markets and presenting current methods for eliciting requirement for each source...

  20. Transforming Multidisciplinary Customer Requirements to Product Design Specifications

    Science.gov (United States)

    Ma, Xiao-Jie; Ding, Guo-Fu; Qin, Sheng-Feng; Li, Rong; Yan, Kai-Yin; Xiao, Shou-Ne; Yang, Guang-Wu

    2017-09-01

    With the increasing complexity of mechatronic products, it is necessary to involve multidisciplinary design teams; the traditional customer requirements modeling for a single-discipline team thus becomes difficult to apply in a multidisciplinary team and project, since team members with various disciplinary backgrounds may have different interpretations of the customers' requirements. A new synthesized multidisciplinary customer requirements modeling method is provided for obtaining and describing a common understanding of customer requirements (CRs) and, more importantly, transferring them into detailed and accurate product design specifications (PDS) to interact with different team members effectively. A case study of designing a high speed train verifies the rationality and feasibility of the proposed multidisciplinary requirement modeling method for complex mechatronic product development. The proposed research offers instruction for realizing customer-driven personalized customization of complex mechatronic products.

  1. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data needed for machine learning algorithms to reproduce the actual 3D model at a given precision without error.
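
    A minimal sketch of the idea that complexity can be scored as the difficulty of predicting structure from partial observations. The 1-D categorical "geology" and the nearest-sample predictor below are illustrative assumptions, not the paper's actual models or learners:

```python
import random

def complexity_index(field, n_samples, seed=0):
    """Proxy for structural complexity: error rate of a nearest-sample
    predictor trained on partial observations of the field."""
    rng = random.Random(seed)
    samples = {i: field[i] for i in rng.sample(range(len(field)), n_samples)}
    errors = sum(1 for i, true in enumerate(field)
                 if samples[min(samples, key=lambda s: abs(s - i))] != true)
    return errors / len(field)

# A simple layered model vs. a highly 'faulted' alternating one:
simple_model = ["A"] * 50 + ["B"] * 50
faulted_model = ["A", "B"] * 50

print(complexity_index(simple_model, 10))   # low: easy to predict from few samples
print(complexity_index(faulted_model, 10))  # high: hard to predict from few samples
```

    The more samples it takes to drive this error to zero, the higher the complexity index, which mirrors the paper's "quantity of partial data" criterion.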

  2. Methods for determination of extractable complex composition

    International Nuclear Information System (INIS)

    Sergievskij, V.V.

    1984-01-01

    Specific features and restrictions of the main methods for determining the extractable complex composition from distribution data (methods of equilibrium shift, saturation, and mathematical models) are considered. Special attention is given to the solution of inverse problems with account taken of the effect of hydration on the activity of organic phase components. Using the example of the systems lithium halides-isoamyl alcohol, thorium nitrate-n-hexyl alcohol, mineral acids-tri-n-butyl phosphate (TBP), and metal nitrates (uranium, lanthanides)-TBP, the results on determining the stoichiometry of extraction equilibria obtained by various methods are compared

  3. A Survey of Various Object Oriented Requirement Engineering Methods

    OpenAIRE

    Anandi Mahajan; Dr. Anurag Dixit

    2013-01-01

    In recent years many industries have been moving to the use of object-oriented methods for the development of large-scale information systems. The requirement for an Object Oriented approach in the development of software systems is increasing day by day. This paper is a survey of various Object-oriented requirement engineering methods. It contains a summary of the available Object-oriented requirement engineering methods with their relative advantages and disadvantages...

  4. New complex variable meshless method for advection-diffusion problems

    International Nuclear Information System (INIS)

    Wang Jian-Fei; Cheng Yu-Min

    2013-01-01

    In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. The corresponding formulas of the ICVMM for advection-diffusion problems are then presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems, with good convergence, accuracy, and computational efficiency

  5. Basic requirements to the methods of personnel monitoring

    International Nuclear Information System (INIS)

    Keirim-Markus, I.B.

    1981-01-01

    Requirements for personnel monitoring methods (PMM) are given as a function of irradiation conditions. The irradiation conditions determine the types of irradiation subject to monitoring, the measurement ranges, the periodicity of monitoring, the promptness with which results must be obtained, and the required accuracy. PMM based on the photographic effect of ionizing radiation is the main method of mass monitoring

  6. Integrated complex care coordination for children with medical complexity: A mixed-methods evaluation of tertiary care-community collaboration

    Directory of Open Access Journals (Sweden)

    Cohen Eyal

    2012-10-01

    Full Text Available Abstract Background Primary care medical homes may improve health outcomes for children with special healthcare needs (CSHCN) by improving care coordination. However, community-based primary care practices may be challenged to deliver comprehensive care coordination to complex subsets of CSHCN such as children with medical complexity (CMC). Linking a tertiary care center with the community may achieve cost effective and high quality care for CMC. The objective of this study was to evaluate the outcomes of community-based complex care clinics integrated with a tertiary care center. Methods A before- and after-intervention study design with mixed (quantitative/qualitative) methods was utilized. Clinics at two community hospitals distant from tertiary care were staffed by local community pediatricians with the tertiary care center nurse practitioner and linked with primary care providers. Eighty-one children with underlying chronic conditions, fragility, requirement for high intensity care and/or technology assistance, and involvement of multiple providers participated. Main outcome measures included health care utilization and expenditures, parent reports of parent- and child-quality of life [QOL: SF-36®, CPCHILD©, PedsQL™], and family-centered care (MPOC-20®). Comparisons were made in equal (up to 1 year) pre- and post-periods supplemented by qualitative perspectives of families and pediatricians. Results Total health care system costs decreased from median (IQR) $244 (981) per patient per month (PPPM) pre-enrolment to $131 (355) PPPM post-enrolment (p=.007), driven primarily by fewer inpatient days in the tertiary care center (p=.006). Parents reported decreased out of pocket expenses (p© domains [Health Standardization Section (p=.04); Comfort and Emotions (p=.03)], while total CPCHILD© score decreased between baseline and 1 year (p=.003). Parents and providers reported the ability to receive care close to home as a key benefit. Conclusions Complex

  7. Symmetrized complex amplitudes for He double photoionization from the time-dependent close coupling and exterior complex scaling methods

    International Nuclear Information System (INIS)

    Horner, D.A.; Colgan, J.; Martin, F.; McCurdy, C.W.; Pindzola, M.S.; Rescigno, T.N.

    2004-01-01

    Symmetrized complex amplitudes for the double photoionization of helium are computed by the time-dependent close-coupling and exterior complex scaling methods, and it is demonstrated that both methods are capable of the direct calculation of these amplitudes. The results are found to be in excellent agreement with each other and in very good agreement with results of other ab initio methods and experiment

  8. Energy-based method for near-real time modeling of sound field in complex urban environments.

    Science.gov (United States)

    Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A

    2012-12-01

    Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no real description of the phenomena at issue has been exposed. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustic (GA) solutions, showing a good overall agreement. The role played by diffraction near buildings edges close to the source is discussed, and suggestions are made on the possibility to predict accurately the sound field in complex urban environments, in near real time simulations.
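
    As a much-reduced sketch of the diffusion modelling approach described above (1-D instead of 3-D, unit coefficients, and a hypothetical grid rather than an urban geometry), an explicit finite-difference scheme for the diffusion equation looks like:

```python
def diffuse(w, D, dx, dt, steps):
    """Explicit finite-difference stepping of the 1-D diffusion equation
    dw/dt = D * d2w/dx2 for an energy density w, with fixed boundaries."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme stability condition violated"
    for _ in range(steps):
        w = [w[0]] + [w[i] + r * (w[i - 1] - 2 * w[i] + w[i + 1])
                      for i in range(1, len(w) - 1)] + [w[-1]]
    return w

# Point source of acoustic energy in the middle of a 'street' of 41 cells:
w0 = [0.0] * 20 + [1.0] + [0.0] * 20
w = diffuse(w0, D=1.0, dx=1.0, dt=0.25, steps=200)
print(max(w) < 0.2)   # True: the initial peak has spread out
```

    The paper's 3-D implementation follows the same pattern on a volumetric grid, with boundary terms modelling absorption at building facades.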

  9. Random walk-based similarity measure method for patterns in complex object

    Directory of Open Access Journals (Sweden)

    Liu Shihu

    2017-04-01

    Full Text Available This paper discusses the similarity of patterns in complex objects. A complex object is composed both of the attribute information of patterns and of the relational information between patterns. Bearing in mind the specificity of complex objects, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed, and Algorithms 1 and 2 show the calculation procedure. One can find that this method makes full use of both the attribute information and the relational information. Finally, a synthetic example validates the proposed similarity measurement method.
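
    A toy version of the random-walk reachability idea can be sketched as follows; the symmetrisation, the step count, and the toy adjacency matrix are illustrative choices, not the paper's exact definitions:

```python
def transition_matrix(adj):
    """Row-normalised adjacency -> one-step random-walk probabilities."""
    T = []
    for row in adj:
        s = sum(row)
        T.append([a / s if s else 0.0 for a in row])
    return T

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def walk_similarity(adj, steps=3):
    """Relational similarity of patterns i, j: accumulated probability
    that a short random walk from i reaches j, symmetrised."""
    T = transition_matrix(adj)
    P, acc = T, [row[:] for row in T]
    for _ in range(steps - 1):
        P = matmul(P, T)
        acc = [[x + y for x, y in zip(r, s)] for r, s in zip(acc, P)]
    n = len(adj)
    return [[(acc[i][j] + acc[j][i]) / 2 for j in range(n)] for i in range(n)]

# Toy complex object: patterns 0 and 1 linked, pattern 2 isolated
adj = [[0, 1, 0],
       [1, 0, 0],
       [0, 0, 0]]
S = walk_similarity(adj)
print(S[0][1] > S[0][2])  # True: 0 and 1 are mutually reachable
```

    The paper's integrated measure then combines such a relational score with an attribute-based similarity; that combination step is omitted here.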

  10. A direction of developing a mining method and mining complexes

    Energy Technology Data Exchange (ETDEWEB)

    Gabov, V.V.; Efimov, I.A. [St. Petersburg State Mining Institute, St. Petersburg (Russian Federation). Vorkuta Branch

    1996-12-31

    An analysis of the mining method as the main factor determining the development stages of mining units is presented. The paper suggests a promising mining method which differs from known ones in the following features: the directional selectivity of cuts with regard to the coal seam structure; and the cutting speed, thickness and succession of dusts. This method can be implemented by modular complexes (a shield carrying a cutting head for coal mining), with their mining devices driven hydraulically. An experimental model of the modular complex has been developed. 2 refs.

  11. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  12. A primary method for the complex calibration of a hydrophone from 1 Hz to 2 kHz

    Science.gov (United States)

    Slater, W. H.; E Crocker, S.; Baker, S. R.

    2018-02-01

    A primary calibration method is demonstrated to obtain the magnitude and phase of the complex sensitivity of a hydrophone at frequencies between 1 Hz and 2 kHz. The measurement is performed in a coupler reciprocity chamber ('coupler'), a closed test chamber in which time-harmonic oscillations in pressure can be achieved and the reciprocity conditions required for a primary calibration can be realized. Relevant theory is reviewed and the reciprocity parameter updated for the complex measurement. Systematic errors and corrections for magnitude are reviewed, and more are added for phase. The combined expanded uncertainties of the magnitude and phase of the complex sensitivity at 1 Hz were 0.1 dB re 1 V μPa⁻¹ and ±1°, respectively. Complex sensitivity, sensitivity magnitude, and phase measurements are presented for an example primary reference hydrophone.
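
    Converting a complex sensitivity into the magnitude/phase form reported above is a one-liner with complex arithmetic; the sensitivity value in this sketch is hypothetical, not a measured one:

```python
import cmath
import math

def sensitivity_mag_phase(M):
    """Complex hydrophone sensitivity M (in V/uPa) -> magnitude in
    dB re 1 V/uPa and phase in degrees."""
    return 20 * math.log10(abs(M)), math.degrees(cmath.phase(M))

# Hypothetical sensitivity: -180 dB re 1 V/uPa with a 2-degree phase lag
M = 1e-9 * cmath.exp(1j * math.radians(-2.0))
mag, ph = sensitivity_mag_phase(M)
print(round(mag), round(ph))  # -180 -2
```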

  13. A Method for Software Requirement Volatility Analysis Using QFD

    Directory of Open Access Journals (Sweden)

    Yunarso Anang

    2016-10-01

    Full Text Available Changes of software requirements are inevitable during the development life cycle. Rather than avoiding the circumstance, it is easier to accept it and find a way to anticipate those changes. This paper proposes a method to analyze the volatility of requirements by using the Quality Function Deployment (QFD) method and an introduced degree of volatility. Customer requirements are deployed to software functions and subsequently to architectural design elements. Then, after determining the potential for change of the design elements, the degree of volatility of the software requirements is calculated. In this paper the method is described using a flow diagram, illustrated using a simple example, and evaluated using a case study.
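
    One way the QFD deployment chain can be sketched is as two weight matrices through which element-level change likelihood is propagated back to requirements. The matrix weights, change probabilities, and the simple product-sum aggregation are illustrative assumptions, not the paper's exact definitions:

```python
def degree_of_volatility(req_to_func, func_to_elem, elem_change_prob):
    """Propagate design-element change likelihood back to requirements
    through the two QFD deployment matrices (illustrative aggregation)."""
    dov = []
    for req_weights in req_to_func:
        total = 0.0
        for f, w_rf in enumerate(req_weights):
            for e, w_fe in enumerate(func_to_elem[f]):
                total += w_rf * w_fe * elem_change_prob[e]
        dov.append(total)
    return dov

# 2 requirements -> 2 functions -> 3 design elements (hypothetical weights)
req_to_func = [[1.0, 0.0],
               [0.3, 0.7]]
func_to_elem = [[0.9, 0.1, 0.0],
                [0.0, 0.5, 0.5]]
elem_change_prob = [0.1, 0.2, 0.8]   # element 2 is likely to change

dov = degree_of_volatility(req_to_func, func_to_elem, elem_change_prob)
print(dov[1] > dov[0])  # True: requirement 1 depends on the volatile element
```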

  14. Complex rectal polyps: other treatment modalities required when offering a transanal endoscopic microsurgery service.

    LENUS (Irish Health Repository)

    Joyce, Myles R

    2011-09-01

    Complex rectal polyps may present a clinical challenge. The study aim was to assess different treatment modalities required in the management of patients referred for transanal endoscopic microsurgery.

  15. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling a subnet is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can preserve the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. This method is applicable when prior knowledge about the degree distribution of the original network is insufficient.
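
    A simplified sketch of the snowball component on a toy network (the published RMSC method combines this with random sampling and the Cohen process; only the multi-seed snowball expansion is shown, and the ring network is an illustrative assumption):

```python
import random

def snowball_sample(adj, n_seeds, rounds, seed=0):
    """Multi-seed snowball sampling: start from several random seeds and
    repeatedly absorb all neighbours of the current sample."""
    rng = random.Random(seed)
    sampled = set(rng.sample(list(adj), n_seeds))
    for _ in range(rounds):
        frontier = {v for u in sampled for v in adj[u]} - sampled
        sampled |= frontier
    return sampled

# Toy network: a ring of 10 nodes
adj = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
sub = snowball_sample(adj, n_seeds=2, rounds=2)
print(2 <= len(sub) <= 10)  # True: seeds plus two rings of neighbours
```

    Comparing the degree distribution of `sub`'s induced subgraph with that of `adj` is then the kind of check the abstract's experiments perform.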

  16. Statistical methods for anomaly detection in complex processes

    Energy Technology Data Exchange (ETDEWEB)

    Al Mouhamed, Mayez

    1977-09-15

    In a number of complex physical systems the accessible signals are often characterized by random fluctuations about a mean value. The fluctuations (signature) often transmit information about the state of the system that the mean value cannot predict. This study was undertaken to elaborate statistical methods of anomaly detection on the basis of signature analysis of the noise inherent in the process. The algorithm presented first learns the characteristics of normal operation of a complex process. It then detects small deviations from the normal behavior. The algorithm can be implemented on a medium-sized computer for on-line application. (author)
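
    The two-phase scheme (learn the normal signature, then flag small deviations) can be sketched with a simple k-sigma rule; the rule and the numbers are illustrative stand-ins, not the author's 1977 algorithm:

```python
from statistics import mean, stdev

def learn_signature(normal_values):
    """Learning phase: characterise the fluctuations of a signal
    during normal operation by its mean and spread."""
    return mean(normal_values), stdev(normal_values)

def is_anomalous(x, m, s, k=3.0):
    """Detection phase: flag a deviation from normal behaviour
    using a k-sigma threshold."""
    return abs(x - m) > k * s

normal = [10.1, 9.9, 10.0, 10.2, 9.8, 10.05, 9.95, 10.1]
m, s = learn_signature(normal)
print(is_anomalous(10.1, m, s))   # False: within normal fluctuations
print(is_anomalous(12.0, m, s))   # True: deviates from normal behaviour
```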

  17. Method for analysis the complex grounding cables system

    International Nuclear Information System (INIS)

    Ackovski, R.; Acevski, N.

    2002-01-01

    A new iterative method for the analysis of the performance of complex grounding systems (GS) in underground cable power networks with coated and/or uncoated metal-sheathed cables is proposed in this paper. The analyzed grounding system consists of the grounding grid of a high voltage (HV) supplying transformer station (TS), middle voltage/low voltage (MV/LV) consumer TSs, and an arbitrary number of power cables connecting them. The derived method takes into consideration the voltage drops in the cable sheaths and the mutual influence among all earthing electrodes due to resistive coupling through the soil. By means of the presented method it is possible to calculate the main grounding system performance measures, such as earth electrode potentials under short-circuit fault-to-ground conditions, earth fault current distribution in the whole complex grounding system, step and touch voltages near the earthing electrodes dissipating the fault current in the earth, impedances (resistances) to ground of all possible fault locations, apparent shield impedances to ground of all power cables, etc. The proposed method is based on the admittance summation method [1] and is appropriately extended so that it takes into account resistive coupling between the elements that constitute the GS. (Author)

  18. Formal methods applied to industrial complex systems implementation of the B method

    CERN Document Server

    Boulanger, Jean-Louis

    2014-01-01

    This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from

  19. Complex networks principles, methods and applications

    CERN Document Server

    Latora, Vito; Russo, Giovanni

    2017-01-01

    Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...

  20. How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods

    National Research Council Canada - National Science Library

    Mead, Nancy R

    2007-01-01

    The Security Quality Requirements Engineering (SQUARE) method, developed at the Carnegie Mellon Software Engineering Institute, provides a systematic way to identify security requirements in a software development project...

  1. Quantum complex rotation and uniform semiclassical calculations of complex energy eigenvalues

    International Nuclear Information System (INIS)

    Connor, J.N.L.; Smith, A.D.

    1983-01-01

    Quantum and semiclassical calculations of complex energy eigenvalues have been carried out for an exponential potential of the form V₀r² exp(-r) and a Lennard-Jones (12,6) potential. A straightforward method, based on the complex coordinate rotation technique, is described for the quantum calculation of complex eigenenergies. For singular potentials, the method involves an inward and outward integration of the radial Schroedinger equation, followed by matching of the logarithmic derivatives of the wave functions at an intermediate point. For regular potentials, the method is simpler, as only an inward integration is required. Attention is drawn to the World War II research of Hartree and co-workers, who anticipated later quantum mechanical work on the complex rotation method. Complex eigenenergies are also calculated from a uniform semiclassical three-turning-point quantization formula, which allows for the proximity of the outer pair of complex turning points. Limiting cases of this formula, valid for very narrow or very broad widths, are also used in the calculations. We obtain good agreement between the semiclassical and quantum results. For the Lennard-Jones (12,6) potential, we compare resonance energies and widths from the complex energy definition of a resonance with those obtained from the time delay definition

  2. On Measuring the Complexity of Networks: Kolmogorov Complexity versus Entropy

    Directory of Open Access Journals (Sweden)

    Mikołaj Morzy

    2017-01-01

    Full Text Available One of the most popular methods of estimating the complexity of networks is to measure the entropy of network invariants, such as adjacency matrices or degree sequences. Unfortunately, entropy and all entropy-based information-theoretic measures have several vulnerabilities. These measures are neither independent of a particular representation of the network nor able to capture the properties of the generative process which produces the network. Instead, we advocate the use of algorithmic entropy as the basis of a complexity definition for networks. Algorithmic entropy (also known as Kolmogorov complexity, or K-complexity for short) evaluates the complexity of the description required for a lossless recreation of the network. This measure is not affected by a particular choice of network features and does not depend on the method of network representation. We perform experiments on Shannon entropy and K-complexity for gradually evolving networks. The results of these experiments point to K-complexity as the more robust and reliable measure of network complexity. The original contribution of the paper includes the introduction of several new entropy-deceiving networks and the empirical comparison of entropy and K-complexity as fundamental quantities for constructing complexity measures for networks.
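
    The contrast between the two quantities can be sketched with standard-library tools: Shannon entropy of the degree sequence on one hand, and a compression length as a crude upper bound on K-complexity on the other (zlib here is an illustrative stand-in for the algorithmic-complexity estimators used in such studies):

```python
import math
import random
import zlib
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of a network's degree sequence."""
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in Counter(degrees).values())

def k_complexity_estimate(adj_rows):
    """Crude upper bound on algorithmic (K-) complexity: length of the
    zlib-compressed adjacency-matrix bitstring."""
    bits = "".join("".join(map(str, row)) for row in adj_rows)
    return len(zlib.compress(bits.encode()))

n = 32
complete = [[1] * n for _ in range(n)]      # highly regular structure
rng = random.Random(0)
noisy = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

# Degree entropy is blind to much of the regularity that compression sees:
print(degree_entropy([sum(r) for r in complete]) == 0)                 # True: all degrees equal
print(k_complexity_estimate(complete) < k_complexity_estimate(noisy))  # True: regular compresses far better
```

    An "entropy-deceiving" network in the paper's sense is one engineered so that the first measure looks high (or low) while the compression-based estimate reveals the opposite.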

  3. Determining Complex Structures using Docking Method with Single Particle Scattering Data

    Directory of Open Access Journals (Sweden)

    Haiguang Liu

    2017-04-01

    Full Text Available Protein complexes are critical for many molecular functions. Due to the intrinsic flexibility and dynamics of complexes, their structures are more difficult to determine using conventional experimental methods than those of individual subunits. One of the major challenges is the crystallization of protein complexes. Using X-ray free electron lasers (XFELs), it is possible to collect scattering signals from non-crystalline protein complexes, but data interpretation is more difficult because of unknown orientations. Here, we propose a hybrid approach to determine protein complex structures by combining XFEL single particle scattering data with computational docking methods. Using simulated data, we demonstrate that a small set of single particle scattering data collected at random orientations can be used to distinguish the native complex structure from decoys generated using docking algorithms. The results also indicate that a small set of single particle scattering data is superior to spherically averaged intensity profiles in distinguishing complex structures. Given that XFEL experimental data are difficult to acquire and of low abundance, this hybrid approach should find wide application in data interpretation.

  4. Method for developing cost estimates for generic regulatory requirements

    International Nuclear Information System (INIS)

    1985-01-01

    The NRC has established a practice of performing regulatory analyses, reflecting costs as well as benefits, of proposed new or revised generic requirements. A method has been developed to assist the NRC in preparing the types of cost estimates required for this purpose and for assigning priorities in the resolution of generic safety issues. The cost of a generic requirement is defined as the net present value of the total lifetime cost incurred by the public, industry, and government in implementing the requirement for all affected plants. The method described here is for commercial light-water-reactor power plants. Estimating the cost of a generic requirement involves several steps: (1) identifying the activities that must be carried out to fully implement the requirement, (2) defining the work packages associated with the major activities, (3) identifying the individual elements of cost for each work package, (4) estimating the magnitude of each cost element, (5) aggregating individual plant costs over the plant lifetime, and (6) aggregating all plant costs and generic costs to produce a total, national, present value of lifetime cost for the requirement. The method developed addresses all six steps. In this paper, we discuss the first three
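
    Steps (5) and (6), aggregating plant and generic costs into one national present value, can be sketched as follows; the discount rate and the cost figures are hypothetical illustrations, not values from the method:

```python
def npv(cashflows, rate):
    """Net present value of a stream of yearly costs (year 0 first)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

def requirement_cost(per_plant_costs, generic_cost, rate=0.05):
    """Aggregate lifetime plant costs and generic costs into a single
    national present-value figure for the requirement."""
    return generic_cost + sum(npv(cf, rate) for cf in per_plant_costs)

# Two hypothetical plants: an up-front backfit cost plus recurring costs
plants = [[100.0, 10.0, 10.0],
          [80.0, 5.0, 5.0]]
total = requirement_cost(plants, generic_cost=50.0)
print(round(total, 2))  # 257.89
```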

  5. Classical Methods and Calculation Algorithms for Determining Lime Requirements

    Directory of Open Access Journals (Sweden)

    André Guarçoni

    Full Text Available ABSTRACT The methods developed for determination of lime requirements (LR) are based on widely accepted principles. However, the formulas used for calculation have evolved little over recent decades, and in some cases there are indications of their inadequacy. The aim of this study was to compare the lime requirements calculated by three classic formulas and three algorithms, identifying those most appropriate for supplying Ca and Mg to coffee plants with the least risk of overliming. The database used contained 600 soil samples collected from coffee plantations. The LR was estimated by the methods of base saturation, neutralization of Al3+, and elevation of Ca2+ and Mg2+ contents (two formulas), and by the three calculation algorithms. Averages of the lime requirements were compared, determining the frequency distribution of the 600 lime requirements (LR) estimated through each calculation method. In soils with low cation exchange capacity at pH 7, the base saturation method may fail to adequately supply the plants with Ca and Mg in many situations, while the methods of Al3+ neutralization and elevation of Ca2+ and Mg2+ contents can result in the calculation of application rates that will increase the pH above the suitable range. Among the methods studied for calculating lime requirements, the algorithm that predicts reaching a defined base saturation, with adequate Ca and Mg supply and the maximum application rate limited to the H+Al value, proved to be the most efficient calculation method, and it can be recommended for use under numerous crop conditions.
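
    The classical base-saturation formula the abstract refers to is commonly written as LR = CEC x (V2 - V1)/100 x (100/PRNT). The sketch below is a hedged illustration of that textbook form; the variable names and the PRNT correction are standard conventions, not formulas taken from this study.

    ```python
    # Illustrative base-saturation lime requirement (t/ha). Not this study's
    # algorithm; a textbook rendering of the classical formula it compares.

    def lime_requirement_base_saturation(cec_ph7, v_current, v_target, prnt=100.0):
        """
        cec_ph7  : cation exchange capacity at pH 7 (cmolc/dm3)
        v_current: current base saturation (%)
        v_target : desired base saturation (%)
        prnt     : relative neutralizing power of the limestone (%)
        Returns the lime requirement in t/ha (0 if already at or above target).
        """
        lr = cec_ph7 * (v_target - v_current) / 100.0 * (100.0 / prnt)
        return max(lr, 0.0)
    ```

    Note how a low CEC directly caps the computed rate, which is the failure mode the abstract describes: in low-CEC soils the formula can prescribe too little lime to supply adequate Ca and Mg.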

  6. A method for work modeling at complex systems: towards applying information systems in family health care units.

    Science.gov (United States)

    Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques

    2012-01-01

    Work in organizations requires a minimum level of consensus on the understanding of the practices performed. To adopt technological devices that support activities in environments where work is complex, characterized by interdependence among a large number of variables, understanding how work is done not only takes on even greater importance but also becomes a more difficult task. This study therefore presents a method for modeling work in complex systems, one that improves knowledge of how activities are performed in settings where activities do not simply happen by following procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work provides a method capable of giving a detailed and accurate view of how people perform their tasks, in order to apply information systems that support work in organizations.

  7. Workshop on Recent Trends in Complex Methods for Partial Differential Equations

    CERN Document Server

    Celebi, A; Tutschke, Wolfgang

    1999-01-01

    This volume is a collection of manuscripts mainly originating from talks and lectures given at the Workshop on Recent Trends in Complex Methods for Partial Differential Equations held from July 6 to 10, 1998 at the Middle East Technical University in Ankara, Turkey, sponsored by The Scientific and Technical Research Council of Turkey and the Middle East Technical University. This workshop is a continuation of two workshops from 1988 and 1993 at the International Centre for Theoretical Physics in Trieste, Italy, entitled Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations. Since classical complex analysis of one and several variables has a long tradition, it is highly developed. But most of its basic problems are solved nowadays, so that within the last few decades it has received less and less attention. The area of complex and functional analytic methods in partial differential equations, however, is still a growing and flourishing field, in particular as these ...

  8. A novel method for preparation of HAMLET-like protein complexes.

    Science.gov (United States)

    Permyakov, Sergei E; Knyazeva, Ekaterina L; Leonteva, Marina V; Fadeev, Roman S; Chekanov, Aleksei V; Zhadan, Andrei P; Håkansson, Anders P; Akatov, Vladimir S; Permyakov, Eugene A

    2011-09-01

    Some natural proteins induce tumor-selective apoptosis. α-Lactalbumin (α-LA), a milk calcium-binding protein, is converted into an antitumor form, called HAMLET/BAMLET, via partial unfolding and association with oleic acid (OA). Besides triggering multiple cell death mechanisms in tumor cells, HAMLET exhibits bactericidal activity against Streptococcus pneumoniae. The existing methods for preparation of active complexes of α-LA with OA employ neutral pH solutions, which greatly limit the water solubility of OA. Therefore, these methods suffer from low scalability and/or heterogeneity of the resulting α-LA - OA samples. In this study we present a novel method for preparation of α-LA - OA complexes under alkaline conditions that favor the aqueous solubility of OA. The unbound OA is removed by precipitation under acidic conditions. The resulting sample, bLA-OA-45, bears 11 OA molecules and exhibits physico-chemical properties similar to those of BAMLET. Cytotoxic activities of bLA-OA-45 against human epidermoid larynx carcinoma cells and S. pneumoniae D39 cells are close to those of HAMLET. Treatment of S. pneumoniae with bLA-OA-45 or HAMLET induces depolarization and rupture of the membrane. The cells are markedly rescued from death upon pretreatment with an inhibitor of Ca(2+) transport. Hence, the activation mechanisms of S. pneumoniae death are analogous for these two complexes. The express method developed for preparation of the active α-LA - OA complex is high-throughput and well suited for the development of other complexes of proteins with low-molecular-weight amphiphilic substances possessing valuable cytotoxic properties. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  9. Review of analytical methods for the quantification of iodine in complex matrices

    Energy Technology Data Exchange (ETDEWEB)

    Shelor, C. Phillip [Department of Chemistry and Biochemistry, University of Texas at Arlington, Arlington, TX 76019-0065 (United States); Dasgupta, Purnendu K., E-mail: Dasgupta@uta.edu [Department of Chemistry and Biochemistry, University of Texas at Arlington, Arlington, TX 76019-0065 (United States)

    2011-09-19

    Highlights: → We focus on iodine in biological samples, notably urine and milk. → Sample preparation and the Sandell-Kolthoff method are extensively discussed. - Abstract: Iodine is an essential element of human nutrition. Nearly a third of the global population has insufficient iodine intake and is at risk of developing Iodine Deficiency Disorders (IDD). Most countries have iodine supplementation and monitoring programs. Urinary iodide (UI) is the biomarker used for epidemiological studies; only a few methods are currently used routinely for analysis. These methods either require expensive instrumentation with qualified personnel (inductively coupled plasma-mass spectrometry, instrumental neutron activation analysis) or oxidative sample digestion to remove potential interferences prior to analysis by a kinetic colorimetric method originally introduced by Sandell and Kolthoff ~75 years ago. The Sandell-Kolthoff (S-K) method is based on the catalytic effect of iodide on the reaction between Ce4+ and As3+. No available technique fully fits the needs of developing countries; research into inexpensive, reliable methods and instrumentation is needed. There have been multiple reviews of methods used for epidemiological studies and of specific techniques. However, a general review of iodine determination in a wide-ranging set of complex matrices is not available. While this review is not comprehensive, we cover the principal developments since the original development of the S-K method.

  10. Using mixed methods to develop and evaluate complex interventions in palliative care research.

    Science.gov (United States)

    Farquhar, Morag C; Ewing, Gail; Booth, Sara

    2011-12-01

    There is increasing interest in combining qualitative and quantitative research methods to provide comprehensiveness and greater knowledge yield. Mixed methods are valuable in the development and evaluation of complex interventions. They are therefore particularly valuable in palliative care research, where the majority of interventions are complex and the identification of outcomes particularly challenging. This paper aims to introduce the role of mixed methods in the development and evaluation of complex interventions in palliative care, and how they may be used in palliative care research. The paper defines mixed methods and outlines why and how mixed methods are used to develop and evaluate complex interventions, with a pragmatic focus on design and data collection issues and data analysis. Useful texts are signposted and illustrative examples provided of mixed methods studies in palliative care, including a detailed worked example of the development and evaluation of a complex intervention in palliative care for breathlessness. Key challenges to conducting mixed methods research in palliative care are identified in relation to data collection, data integration in analysis, costs and dissemination, and how these might be addressed. The development and evaluation of complex interventions in palliative care benefit from the application of mixed methods. Mixed methods enable better understanding of whether and how an intervention works (or does not work) and inform the design of subsequent studies. However, they can be challenging: mixed methods studies in palliative care will benefit from working with agreed protocols, multidisciplinary teams and engaging staff with appropriate skill sets.

  11. Using Common Graphics Paradigms Implemented in a Java Applet to Represent Complex Scheduling Requirements

    Science.gov (United States)

    Jaap, John; Meyer, Patrick; Davis, Elizabeth

    1997-01-01

    The experiments planned for the International Space Station promise to be complex, lengthy and diverse. The scarcity of space station resources will cause significant competition for resources between experiments. The scheduling job facing the Space Station mission planning software requires a concise and comprehensive description of the experiments' requirements (to ensure a valid schedule) and a good description of the experiments' flexibility (to effectively utilize available resources). In addition, the continuous operation of the station, the wide geographic dispersion of station users, and the budgetary pressure to reduce operations manpower make a low-cost solution mandatory. A graphical representation of the scheduling requirements for station payloads implemented via an Internet-based application promises to be an elegant solution that addresses all of these issues. The graphical representation of experiment requirements permits a station user to describe his experiment by defining "activities" and "sequences of activities". Activities define the resource requirements (with alternatives) and other quantitative constraints of the tasks to be performed. Activity definitions use an "outline" graphics paradigm. Sequences define the time relationships between activities. Sequences may also define time relationships with activities of other payloads or space station systems. Sequences of activities are described by a "network" graphics paradigm. The bulk of this paper describes the graphical approach to representing requirements and provides examples that show the ease and clarity with which complex requirements can be represented. A Java applet, to run in a web browser, is being developed to support the graphical representation of payload scheduling requirements. Implementing the entry and editing of requirements via the web solves the problems introduced by the geographic dispersion of users. Reducing manpower is accomplished by developing a concise

  12. A Low Complexity Discrete Radiosity Method

    OpenAIRE

    Chatelier, Pierre Yves; Malgouyres, Rémy

    2006-01-01

    Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to use a discretization of a scene into voxels and perform some discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method may be as precise as a patch-based radiosity using hemicube computation for form-factors, but it lowers the overall theoretical complexity to an O(N log N) + O(N), where the O(N) is largel...

  13. Handbook of methods for risk-based analysis of technical specification requirements

    International Nuclear Information System (INIS)

    Samanta, P.K.; Vesely, W.E.

    1994-01-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements were based on deterministic analysis and engineering judgments. Experience with plant operation indicates that some elements of the requirements are unnecessarily restrictive, while others may not be conducive to safety. Improvements in these requirements are facilitated by the availability of plant-specific Probabilistic Safety Assessments (PSAs). The use of risk- and reliability-based methods to improve TS requirements has gained wide interest because these methods can: (1) quantitatively evaluate the risk and justify changes based on objective risk arguments; and (2) provide a defensible basis for these requirements in regulatory applications. The US NRC Office of Research is sponsoring research to develop systematic risk-based methods to improve various aspects of TS requirements. The handbook of methods, which is being prepared, summarizes such risk-based methods. The scope of the handbook includes reliability- and risk-based methods for evaluating allowed outage times (AOTs), action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), defenses against common-cause failures, managing plant configurations, and scheduling maintenance. For each topic, the handbook summarizes methods of analysis and data needs, outlines the insights to be gained, lists additional references, and presents examples of evaluations.
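
    A standard AOT risk measure of the kind such handbooks evaluate can be sketched in a few lines. The notation here is generic (not this handbook's): one downtime of duration d contributes an incremental risk d x (R1 - R0), where R1 is the conditional risk frequency with the component down and R0 the baseline.

    ```python
    # Illustrative AOT risk contribution; notation is generic, not the
    # handbook's. Risk frequencies r1, r0 are in events per year.

    HOURS_PER_YEAR = 8760.0

    def single_downtime_risk(d_hours, r1, r0):
        """Incremental risk of one downtime lasting d_hours."""
        return (d_hours / HOURS_PER_YEAR) * (r1 - r0)

    def yearly_aot_risk(f_per_year, d_hours, r1, r0):
        """Yearly contribution: downtime frequency times per-event risk."""
        return f_per_year * single_downtime_risk(d_hours, r1, r0)
    ```

    Comparing this yearly contribution across alternative AOT values is one way such methods quantify whether a proposed TS relaxation is risk-significant.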

  14. Handbook of methods for risk-based analysis of Technical Specification requirements

    International Nuclear Information System (INIS)

    Samanta, P.K.; Vesely, W.E.

    1993-01-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements were based on deterministic analysis and engineering judgments. Experience with plant operation indicates that some elements of the requirements are unnecessarily restrictive, while others may not be conducive to safety. Improvements in these requirements are facilitated by the availability of plant-specific Probabilistic Safety Assessments (PSAs). The use of risk- and reliability-based methods to improve TS requirements has gained wide interest because these methods can: (1) quantitatively evaluate the risk impact and justify changes based on objective risk arguments; and (2) provide a defensible basis for these requirements in regulatory applications. The United States Nuclear Regulatory Commission (USNRC) Office of Research is sponsoring research to develop systematic risk-based methods to improve various aspects of TS requirements. The handbook of methods, which is being prepared, summarizes such risk-based methods. The scope of the handbook includes reliability- and risk-based methods for evaluating allowed outage times (AOTs), action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), defenses against common-cause failures, managing plant configurations, and scheduling maintenance. For each topic, the handbook summarizes methods of analysis and data needs, outlines the insights to be gained, lists additional references, and presents examples of evaluations.

  15. A new spinning reserve requirement forecast method for deregulated electricity markets

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    Ancillary services are necessary for maintaining the security and reliability of power systems and constitute an important part of trade in competitive electricity markets. Spinning Reserve (SR) is one of the most important ancillary services for preserving power system stability and integrity in response to the contingencies and disturbances that continuously occur in power systems. Hence, an accurate day-ahead forecast of the SR requirement helps the Independent System Operator (ISO) to conduct a reliable and economic operation of the power system. However, the SR signal has complex, non-stationary and volatile behavior in the time domain and depends greatly on system load. In this paper, a new hybrid forecast engine is proposed for SR requirement prediction. The proposed forecast engine has an iterative training mechanism composed of the Levenberg-Marquardt (LM) learning algorithm and a Real-Coded Genetic Algorithm (RCGA), implemented on a Multi-Layer Perceptron (MLP) neural network. The proposed forecast methodology is examined by means of real data from the Pennsylvania-New Jersey-Maryland (PJM) electricity market and the California ISO (CAISO) controlled grid. The obtained forecast results are presented and compared with those of other SR forecast methods. (author)
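
    The hybrid LM/RCGA training idea, alternating local gradient-based refinement with global genetic search, can be caricatured in a deliberately tiny sketch. Here the "network" is a single linear neuron and LM is replaced by plain gradient steps; only the alternation structure mirrors the paper's engine, and every name and number below is an illustrative assumption.

    ```python
    # Schematic hybrid evolutionary/gradient training loop (illustrative only).
    import random

    def predict(w, x):          # one linear neuron: y = w0 + w1*x
        return w[0] + w[1] * x

    def mse(w, data):
        return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

    def gradient_step(w, data, lr=0.05):
        g0 = sum(2 * (predict(w, x) - y) for x, y in data) / len(data)
        g1 = sum(2 * (predict(w, x) - y) * x for x, y in data) / len(data)
        return [w[0] - lr * g0, w[1] - lr * g1]

    def hybrid_train(data, pop_size=20, generations=30, seed=0):
        rng = random.Random(seed)
        pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(pop_size)]
        for _ in range(generations):
            # local refinement of every individual (stand-in for LM)
            pop = [gradient_step(gradient_step(w, data), data) for w in pop]
            # global genetic step: keep the best half, recombine with noise
            pop.sort(key=lambda w: mse(w, data))
            parents = pop[: pop_size // 2]
            children = [[(a + b) / 2 + rng.gauss(0, 0.01)
                         for a, b in zip(rng.choice(parents), rng.choice(parents))]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=lambda w: mse(w, data))

    data = [(x, 1.0 + 2.0 * x) for x in [0.0, 0.25, 0.5, 0.75, 1.0]]
    w = hybrid_train(data)   # refined fit to the line y = 1 + 2x
    ```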

  16. A new spinning reserve requirement forecast method for deregulated electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Amjady, Nima; Keynia, Farshid [Department of Electrical Engineering, Semnan University, Semnan (Iran)

    2010-06-15

    Ancillary services are necessary for maintaining the security and reliability of power systems and constitute an important part of trade in competitive electricity markets. Spinning Reserve (SR) is one of the most important ancillary services for preserving power system stability and integrity in response to the contingencies and disturbances that continuously occur in power systems. Hence, an accurate day-ahead forecast of the SR requirement helps the Independent System Operator (ISO) to conduct a reliable and economic operation of the power system. However, the SR signal has complex, non-stationary and volatile behavior in the time domain and depends greatly on system load. In this paper, a new hybrid forecast engine is proposed for SR requirement prediction. The proposed forecast engine has an iterative training mechanism composed of the Levenberg-Marquardt (LM) learning algorithm and a Real-Coded Genetic Algorithm (RCGA), implemented on a Multi-Layer Perceptron (MLP) neural network. The proposed forecast methodology is examined by means of real data from the Pennsylvania-New Jersey-Maryland (PJM) electricity market and the California ISO (CAISO) controlled grid. The obtained forecast results are presented and compared with those of other SR forecast methods. (author)

  17. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    Science.gov (United States)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems of solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
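
    The "obvious approach" the abstract remarks on, solving an equivalent real system for the real and imaginary parts of x, can be made concrete: Ax = b with A = A_R + i A_I becomes the real 2n x 2n system [[A_R, -A_I], [A_I, A_R]] [x_R; x_I] = [b_R; b_I]. The sketch below uses a dense direct solve purely for illustration; the abstract's point is that a complex-arithmetic iteration avoids this doubling of dimension.

    ```python
    # Real-equivalent formulation of a complex linear system (illustrative).

    def solve_real(M, v):
        """Plain Gaussian elimination with partial pivoting."""
        n = len(v)
        M = [row[:] + [v[i]] for i, row in enumerate(M)]   # augment
        for k in range(n):
            p = max(range(k, n), key=lambda r: abs(M[r][k]))
            M[k], M[p] = M[p], M[k]
            for r in range(k + 1, n):
                f = M[r][k] / M[k][k]
                for c in range(k, n + 1):
                    M[r][c] -= f * M[k][c]
        x = [0.0] * n
        for k in reversed(range(n)):
            x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
        return x

    def solve_complex_symmetric(A, b):
        """Solve Ax = b via the real 2n x 2n block system."""
        n = len(b)
        big = [[A[i][j].real for j in range(n)] + [-A[i][j].imag for j in range(n)]
               for i in range(n)]
        big += [[A[i][j].imag for j in range(n)] + [A[i][j].real for j in range(n)]
                for i in range(n)]
        rhs = [z.real for z in b] + [z.imag for z in b]
        x = solve_real(big, rhs)
        return [complex(x[i], x[n + i]) for i in range(n)]

    # Complex symmetric (not Hermitian) 2x2 example, e.g. from a Helmholtz model
    A = [[4 + 1j, 1 + 2j], [1 + 2j, 3 - 1j]]
    b = [1 + 0j, 2 + 1j]
    x = solve_complex_symmetric(A, b)
    ```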

  18. Complexity, Methodology and Method: Crafting a Critical Process of Research

    Science.gov (United States)

    Alhadeff-Jones, Michel

    2013-01-01

    This paper defines a theoretical framework aiming to support the actions and reflections of researchers looking for a "method" in order to critically conceive the complexity of a scientific process of research. First, it starts with a brief overview of the core assumptions framing Morin's "paradigm of complexity" and Le…

  19. A dissipative particle dynamics method for arbitrarily complex geometries

    Science.gov (United States)

    Li, Zhen; Bian, Xin; Tang, Yu-Hang; Karniadakis, George Em

    2018-02-01

    Dissipative particle dynamics (DPD) is an effective Lagrangian method for modeling complex fluids in the mesoscale regime, but so far it has been limited to relatively simple geometries. Here, we formulate a local detection method for DPD involving arbitrarily shaped three-dimensional domains. By introducing an indicator variable of boundary volume fraction (BVF) for each fluid particle, the boundary of arbitrarily shaped objects is detected on-the-fly for the moving fluid particles using only the local particle configuration. Therefore, this approach eliminates the need for an analytical description of the boundary and geometry of objects in DPD simulations and makes it possible to load the geometry of a system directly from experimental images or computer-aided designs/drawings. More specifically, the BVF of a fluid particle is defined by the weighted summation over its neighboring particles within a cutoff distance. Wall penetration is inferred from the value of the BVF and prevented by a predictor-corrector algorithm. The no-slip boundary condition is achieved by employing effective dissipative coefficients for liquid-solid interactions. Quantitative evaluations of the new method are performed for the plane Poiseuille flow, the plane Couette flow and the Wannier flow in a cylindrical domain, and compared with their corresponding analytical solutions and a (high-order) spectral element solution of the Navier-Stokes equations. We verify that the proposed method yields correct no-slip boundary conditions for velocity and generates negligible fluctuations of density and temperature in the vicinity of the wall surface. Moreover, we construct a very complex 3D geometry - the "Brown Pacman" microfluidic device - to explicitly demonstrate how to construct a DPD system with complex geometry directly by loading a graphical image. Subsequently, we simulate the flow of a surfactant solution through this complex microfluidic device using the new method.
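
    The boundary-volume-fraction idea can be sketched in a few lines: a fluid particle's BVF is a weighted sum over nearby solid particles, normalized here by the total neighbor weight so it behaves as a fraction. The conical kernel and the 0.5 threshold are illustrative choices, not the paper's exact coefficients.

    ```python
    # Minimal BVF sketch (illustrative kernel and threshold, not the paper's).
    import math

    def weight(r, rc):
        """Simple conical kernel: w = 1 - r/rc inside the cutoff, else 0."""
        return max(0.0, 1.0 - r / rc)

    def boundary_volume_fraction(p, solid, fluid, rc=1.0):
        """BVF of fluid particle p: solid-neighbor weight / total weight."""
        ws = sum(weight(math.dist(p, q), rc) for q in solid)
        wf = sum(weight(math.dist(p, q), rc) for q in fluid if q != p)
        total = ws + wf
        return ws / total if total > 0 else 0.0

    def near_wall(p, solid, fluid, rc=1.0, threshold=0.5):
        """Flag particles whose local neighborhood is half solid or more."""
        return boundary_volume_fraction(p, solid, fluid, rc) >= threshold
    ```

    In an actual DPD step, a `near_wall` flag would trigger the predictor-corrector bounce described in the abstract; the point here is only that the detection needs nothing but the local particle configuration.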

  20. The Videographic Requirements Gathering Method for Adolescent-Focused Interaction Design

    Directory of Open Access Journals (Sweden)

    Tamara Peyton

    2014-08-01

    Full Text Available We present a novel method for conducting requirements gathering with adolescent populations. Called videographic requirements gathering, this technique makes use of mobile phone data capture and participant creation of media images. The videographic requirements gathering method can help researchers and designers gain intimate insight into adolescent lives while simultaneously reducing power imbalances. We provide rationale for this approach, pragmatics of using the method, and advice on overcoming common challenges facing researchers and designers relying on this technique.

  1. An Extended Newmark-FDTD Method for Complex Dispersive Media

    Directory of Open Access Journals (Sweden)

    Yu-Qiang Zhang

    2018-01-01

    Full Text Available Based on polarizability in the form of a complex quadratic rational function, a novel finite-difference time-domain (FDTD) approach combined with the Newmark algorithm is presented for dealing with complex dispersive media. In this paper, the time-stepping equation for the polarization vector is derived by applying the Newmark algorithm simultaneously to both sides of a second-order time-domain differential equation, obtained via the inverse Fourier transform from the frequency-domain relation between the polarization vector and the electric field intensity. Its accuracy and stability are then discussed from the two aspects of theoretical analysis and numerical computation. It is observed that this method possesses the advantages of high accuracy, high stability, and a wide application scope, and can thus be applied to the treatment of many complex dispersion models, including the complex conjugate pole residue model, critical point model, modified Lorentz model, and complex quadratic rational function.
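
    The core ingredient, Newmark time-stepping of a second-order ODE of the generic form a P'' + b P' + c P = f(t), can be sketched with the standard average-acceleration parameters (beta = 1/4, gamma = 1/2). The coefficients below are illustrative, not a particular dispersion model, and the driving field is reduced to a scalar forcing.

    ```python
    # Generic Newmark (average-acceleration) step for a*P'' + b*P' + c*P = f(t).
    # Illustrative coefficients; not a specific dispersion model from the paper.

    def newmark_step(P, V, A, f_next, a, b, c, dt, beta=0.25, gamma=0.5):
        """Advance (P, P', P'') one step of size dt under forcing f_next."""
        # predictors from the current state
        P_pred = P + dt * V + dt * dt * (0.5 - beta) * A
        V_pred = V + dt * (1.0 - gamma) * A
        # solve a*A1 + b*(V_pred + gamma*dt*A1) + c*(P_pred + beta*dt^2*A1) = f_next
        A1 = (f_next - b * V_pred - c * P_pred) / (
            a + gamma * dt * b + beta * dt * dt * c)
        P1 = P_pred + beta * dt * dt * A1
        V1 = V_pred + gamma * dt * A1
        return P1, V1, A1

    # A damped oscillator driven by a constant force relaxes to P = f/c
    P, V, A = 0.0, 0.0, 0.0
    a, b, c, f = 1.0, 2.0, 4.0, 1.0
    for _ in range(2000):
        P, V, A = newmark_step(P, V, A, f, a, b, c, dt=0.01)
    ```

    In the paper's setting the same one-step update would be applied to the polarization vector inside each FDTD cell, with f supplied by the local electric field.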

  2. An efficient Korringa-Kohn-Rostoker method for ''complex'' lattices

    International Nuclear Information System (INIS)

    Yussouff, M.; Zeller, R.

    1980-10-01

    We present a modification of the exact KKR band-structure method which uses (a) a new energy expansion for the structure constants and (b) only the reciprocal lattice summation. It is quite efficient and particularly useful for 'complex' lattices. The band structure of hexagonal-close-packed beryllium at symmetry points is presented as an example of this method. (author)

  3. Investigation of complexing equilibrium of polyacrylate-anion with cadmium ions by polarographic method

    Energy Technology Data Exchange (ETDEWEB)

    Avlyanov, Zh K; Kabanov, N M; Zezin, A B

    1985-01-01

    Polarographic investigation of the cadmium complex with polyacrylate anion in aqueous KCl solution is carried out. It is shown that the polarographic method allows one to determine equilibrium constants of polymer-metal complex (PMC) formation even in the case when the current magnitudes are determined by the kinetic characteristics of the PMC dissociation reaction. The obtained equilibrium constants of stepwise complexation yield a mean coordination number of the PAAxCd complex of approximately 1.5, which coincides with the value obtained by the potentiometric method.

  4. Investigation of complexing equilibrium of polyacrylate-anion with cadmium ions by polarographic method

    International Nuclear Information System (INIS)

    Avlyanov, Zh.K.; Kabanov, N.M.; Zezin, A.B.

    1985-01-01

    Polarographic investigation of the cadmium complex with polyacrylate anion in aqueous KCl solution is carried out. It is shown that the polarographic method allows one to determine equilibrium constants of polymer-metal complex (PMC) formation even in the case when the current magnitudes are determined by the kinetic characteristics of the PMC dissociation reaction. The obtained equilibrium constants of stepwise complexation yield a mean coordination number of the PAAxCd complex of approximately 1.5, which coincides with the value obtained by the potentiometric method.

  5. Three-dimensional Cross-Platform Planning for Complex Spinal Procedures: A New Method Adaptive to Different Navigation Systems.

    Science.gov (United States)

    Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf

    2017-08-01

    A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions, but only a few on how to transfer planning data generated in external software. Patients' computerized tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported into Amira software, reconstructed to 3D images and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or surgical procedures, such as trajectories, implants or surgical instruments, were either digitally constructed or computerized-tomography scanned and virtually positioned within the 3D model as required. As the crucial step of this method, these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement, using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system. The presented method not only allows virtual planning of complex surgical procedures, but to export objects and surgical plans to any navigation or

  6. The JPL functional requirements tool

    Science.gov (United States)

    Giffin, Geoff; Skinner, Judith; Stoller, Richard

    1987-01-01

    Planetary spacecraft are complex vehicles which are built according to many thousands of requirements. Problems encountered in documenting and maintaining these requirements led to the current attempt to reduce or eliminate them with a computer-automated database, the Functional Requirements Tool. The Tool, developed at JPL and in use on several JPL projects, is described. The organization and functionality of the Tool, together with an explanation of the database inputs, their relationships, and their use, are presented. Methods of interfacing with external documents, representation of tables and figures, and methods of approval and change processing are discussed. The options available for disseminating information from the Tool are identified. The implementation of the Requirements Tool is outlined, and its operation is summarized. The conclusion drawn from this work is that the Requirements Tool represents a useful addition to the system engineer's tool kit; it is not currently available elsewhere, and a clear development path exists to expand its capabilities to serve larger and more complex projects.

  7. Hybrid RANS/LES method for wind flow over complex terrain

    DEFF Research Database (Denmark)

    Bechmann, Andreas; Sørensen, Niels N.

    2010-01-01

    for flows at high Reynolds numbers. To reduce the computational cost of traditional LES, a hybrid method is proposed in which the near-wall eddies are modelled in a Reynolds-averaged sense. Close to walls, the flow is treated with the Reynolds-averaged Navier-Stokes (RANS) equations (unsteady RANS...... rough walls. Previous attempts at combining RANS and LES have resulted in unphysical transition regions between the two layers, but the present work improves this region by using a stochastic backscatter model. To demonstrate the ability of the proposed hybrid method, simulations are presented for wind...... the turbulent kinetic energy, whereas the new method captures the high turbulence levels well but underestimates the mean velocity. The presented results are for a relatively mild configuration of complex terrain, but the proposed method can also be used for highly complex terrain, where the benefits of the new

  8. Ethnographic methods for process evaluations of complex health behaviour interventions.

    Science.gov (United States)

    Morgan-Trimmer, Sarah; Wood, Fiona

    2016-05-04

    This article outlines the contribution that ethnography could make to process evaluations for trials of complex health-behaviour interventions. Process evaluations are increasingly used to examine how health-behaviour interventions operate to produce outcomes and often employ qualitative methods to do this. Ethnography shares commonalities with the qualitative methods currently used in health-behaviour evaluations but has a distinctive approach over and above these methods. It is an overlooked methodology in trials of complex health-behaviour interventions that has much to contribute to the understanding of how interventions work. These benefits are discussed here with respect to three strengths of ethnographic methodology: (1) producing valid data, (2) understanding data within social contexts, and (3) building theory productively. The limitations of ethnography within the context of process evaluations are also discussed.

  9. A numerical method for solving the 3D unsteady incompressible Navier Stokes equations in curvilinear domains with complex immersed boundaries

    Science.gov (United States)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow
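The solver stack sketched in this abstract (a fractional step method with preconditioned GMRES for the Poisson equation) can be illustrated in miniature. The code below is not the authors' solver: it applies SciPy's GMRES to a toy 1D Poisson system, with a simple Jacobi (diagonal) preconditioner standing in for the multigrid preconditioner the paper describes.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

# 1D Poisson system A u = b (Dirichlet BCs), a toy stand-in for the
# pressure-Poisson equation arising in the fractional step method.
n = 32
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
b = np.ones(n)

# Diagonal (Jacobi) preconditioner, wrapped as a LinearOperator; a multigrid
# V-cycle would be wrapped in exactly the same interface.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: r / d)

u, info = gmres(A, b, M=M, restart=n)
assert info == 0  # 0 means the iteration converged
```

A diagonal preconditioner does little for a constant-diagonal Laplacian; the point here is only the API shape, in which any stronger preconditioner plugs into the same `M=` argument.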

  10. Elongator complex is required for long-term olfactory memory formation in Drosophila.

    Science.gov (United States)

    Yu, Dinghui; Tan, Ying; Chakraborty, Molee; Tomchik, Seth; Davis, Ronald L

    2018-04-01

    The evolutionarily conserved Elongator Complex associates with RNA polymerase II for transcriptional elongation. Elp3 is the catalytic subunit, contains histone acetyltransferase activity, and is associated with neurodegeneration in humans. Elp1 is a scaffolding subunit and, when mutated, causes familial dysautonomia. Here, we show that elp3 and elp1 are required for aversive long-term olfactory memory in Drosophila. RNAi knockdown of elp3 in adult mushroom bodies impairs long-term memory (LTM) without affecting earlier forms of memory. RNAi knockdown with coexpression of elp3 cDNA reverses the impairment. Similarly, RNAi knockdown of elp1 impairs LTM and coexpression of elp1 cDNA reverses this phenotype. The LTM deficit in elp3 and elp1 knockdown flies is accompanied by the loss of an LTM trace, which is registered as increased calcium influx in response to the CS+ odor in the α-branch of mushroom body neurons. Coexpression of elp1 or elp3 cDNA rescues the memory trace in parallel with LTM. These data show that the Elongator complex is required in adult mushroom body neurons for long-term behavioral memory and the associated long-term memory trace. © 2018 Yu et al.; Published by Cold Spring Harbor Laboratory Press.

  11. IMPACT OF MATRIX INVERSION ON THE COMPLEXITY OF THE FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    M. Sybis

    2016-04-01

    Full Text Available Purpose. The development of a broad construction market and the desire to design innovative architectural structures have created the need for complex numerical models of objects with ever higher computational complexity. The purpose of this work is to show that choosing a proper method for solving the set of equations can improve calculation time (reducing the complexity by several orders of magnitude). Methodology. The article presents an analysis of the impact of the matrix inversion algorithm on the deflection calculation of a beam, using the finite element method (FEM). Based on a literature analysis, common methods of solving sets of equations were identified. From these, the Gaussian elimination, LU and Cholesky decomposition methods were implemented to determine the effect of the matrix inversion algorithm on the number of computational operations performed. In addition, each of the implemented methods was further optimized to reduce the number of necessary arithmetic operations. Findings. These optimizations exploit certain properties of the matrix, such as symmetry or a significant number of zero elements. The results of the analysis are presented for the division of the beam into 5, 50, 100 and 200 nodes, for which the deflection has been calculated. Originality. The main achievement of this work is that it shows the impact of the chosen methodology on the complexity of solving the problem (or, equivalently, the time needed to obtain results). Practical value. The difference between the best (least complex) and the worst (most complex) methods spans several orders of magnitude. This result shows that choosing the wrong method may significantly increase the time needed to perform calculations.
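The kind of saving the article measures can be sketched in a few lines. Assuming a simple tridiagonal stiffness-like matrix (an illustrative stand-in, not the article's exact FEM system), a Cholesky factorization exploits the symmetry and positive-definiteness that a general-purpose solver ignores:

```python
import numpy as np

# Tridiagonal, symmetric positive-definite matrix for a beam discretized
# into n nodes -- an illustrative stand-in for the article's FEM system.
def beam_matrix(n):
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0)
    np.fill_diagonal(A[1:], -1.0)    # subdiagonal
    np.fill_diagonal(A[:, 1:], -1.0) # superdiagonal
    return A

n = 200
A = beam_matrix(n)
f = np.ones(n)  # unit load vector

# Exploit symmetry/positive-definiteness: factor A = L L^T once, then do
# two triangular solves -- roughly half the floating-point work of a
# general LU factorization, and far cheaper than explicit inversion.
L = np.linalg.cholesky(A)
y = np.linalg.solve(L, f)    # solve L y = f
u = np.linalg.solve(L.T, y)  # solve L^T u = y

# Same deflections as the generic dense solver.
u_ref = np.linalg.solve(A, f)
print(np.allclose(u, u_ref))
```

Exploiting the banded (tridiagonal) structure as well, as the article's optimizations do, reduces the cost further, from cubic to linear in the node count.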

  12. Verification and Examination Management of Complex Systems

    Directory of Open Access Journals (Sweden)

    Stian Ruud

    2014-10-01

    Full Text Available As ship systems become more complex, with an increasing number of safety-critical functions, many interconnected subsystems, tight integration to other systems, and a large amount of potential failure modes, several industry parties have identified the need for improved methods for managing the verification and examination efforts of such complex systems. Such needs are even more prominent now that the marine and offshore industries are targeting more activities and operations in the Arctic environment. In this paper, a set of requirements and a method for verification and examination management are proposed for allocating examination efforts to selected subsystems. The method is based on a definition of a verification risk function for a given system topology and given requirements. The marginal verification risks for the subsystems may then be evaluated, so that examination efforts for the subsystem can be allocated. Two cases of requirements and systems are used to demonstrate the proposed method. The method establishes a systematic relationship between the verification loss, the logic system topology, verification method performance, examination stop criterion, the required examination effort, and a proposed sequence of examinations to reach the examination stop criterion.
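The allocation idea above, examining next whatever most reduces verification risk until a stop criterion is met, can be sketched as a greedy loop. Everything below (the residual-risk model, the effectiveness numbers, the function name) is an illustrative assumption, not the paper's actual formulation:

```python
# Illustrative sketch: allocate a fixed budget of examinations greedily,
# always examining the subsystem whose next examination yields the largest
# marginal reduction in verification risk. Assumed model: each examination
# of subsystem i catches a fraction eff[i] of its remaining undetected risk.

def allocate_examinations(risk, eff, budget):
    risk = list(risk)
    plan = []
    for _ in range(budget):
        # marginal risk reduction of one more examination of each subsystem
        gains = [r * e for r, e in zip(risk, eff)]
        i = max(range(len(risk)), key=gains.__getitem__)
        plan.append(i)
        risk[i] *= (1.0 - eff[i])  # residual risk after the examination
    return plan, sum(risk)

plan, residual = allocate_examinations(
    risk=[0.5, 0.3, 0.2],  # initial verification risk per subsystem
    eff=[0.6, 0.9, 0.4],   # examination effectiveness per subsystem
    budget=4,
)
print(plan, round(residual, 4))
```

In the paper's terms, the stop criterion would replace the fixed budget: examinations continue until the summed residual risk falls below the acceptable verification loss.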

  13. Orbiter data reduction complex data processing requirements for the OFT mission evaluation team (level C)

    Science.gov (United States)

    1979-01-01

    This document addresses requirements for post-test data reduction in support of the Orbital Flight Tests (OFT) mission evaluation team, specifically those which are planned to be implemented in the ODRC (Orbiter Data Reduction Complex). Only those requirements which have been previously baselined by the Data Systems and Analysis Directorate configuration control board are included. This document serves as the control document between Institutional Data Systems Division and the Integration Division for OFT mission evaluation data processing requirements, and shall be the basis for detailed design of ODRC data processing systems.

  14. γ-Tubulin complex in Trypanosoma brucei: molecular composition, subunit interdependence and requirement for axonemal central pair protein assembly.

    Science.gov (United States)

    Zhou, Qing; Li, Ziyin

    2015-11-01

    γ-Tubulin complex constitutes a key component of the microtubule-organizing center and nucleates microtubule assembly. This complex differs in complexity in different organisms: the budding yeast contains the γ-tubulin small complex (γTuSC) composed of γ-tubulin, gamma-tubulin complex protein (GCP)2 and GCP3, whereas animals contain the γ-tubulin ring complex (γTuRC) composed of γTuSC and three additional proteins, GCP4, GCP5 and GCP6. In Trypanosoma brucei, the composition of the γ-tubulin complex remains elusive, and it is not known whether it also regulates assembly of the subpellicular microtubules and the spindle microtubules. Here we report that the γ-tubulin complex in T. brucei is composed of γ-tubulin and three GCP proteins, GCP2-GCP4, and is primarily localized in the basal body throughout the cell cycle. Depletion of GCP2 and GCP3, but not GCP4, disrupted the axonemal central pair microtubules, but not the subpellicular microtubules and the spindle microtubules. Furthermore, we showed that the γTuSC is required for assembly of two central pair proteins and that γTuSC subunits are mutually required for stability. Together, these results identified an unusual γ-tubulin complex in T. brucei, uncovered an essential role of γTuSC in central pair protein assembly, and demonstrated the interdependence of individual γTuSC components for maintaining a stable complex. © 2015 John Wiley & Sons Ltd.

  15. Quantitative methods for developing C2 system requirement

    Energy Technology Data Exchange (ETDEWEB)

    Tyler, K.K.

    1992-06-01

    The US Army established the Army Tactical Command and Control System (ATCCS) Experimentation Site (AES) to provide a place where material and combat developers could experiment with command and control systems. The AES conducts fundamental and applied research involving command and control issues using a number of research methods, ranging from large force-level experiments, to controlled laboratory experiments, to studies and analyses. The work summarized in this paper was done by Pacific Northwest Laboratory under task order from the Army Tactical Command and Control System Experimentation Site. The purpose of the task was to develop the functional requirements for army engineer automation and support software, including MCS-ENG. A client, such as an army engineer, has certain needs and requirements of his or her software; these needs must be presented in ways that are readily understandable to the software developer. A requirements analysis then, such as the one described in this paper, is simply the means of communication between those who would use a piece of software and those who would develop it. The analysis from which this paper was derived attempted to bridge the "communications gap" between army combat engineers and software engineers. It sought to derive and state the software needs of army engineers in ways that are meaningful to software engineers. In doing this, it followed a natural sequence of investigation: (1) what does an army engineer do, (2) with which tasks can software help, (3) how much will it cost, and (4) where is the highest payoff? This paper demonstrates how each of these questions was addressed during an analysis of the functional requirements of engineer support software. Systems engineering methods are used in a task analysis and a quantitative scoring method was developed to score responses regarding the feasibility of task automation. The paper discusses the methods used to perform utility and cost-benefits estimates.

  16. Quality functions for requirements engineering in system development methods.

    Science.gov (United States)

    Johansson, M; Timpka, T

    1996-01-01

    Based on a grounded-theory framework, this paper analyses the quality characteristics of methods used for requirements engineering in the development of medical decision support systems (MDSS). The results from a Quality Function Deployment (QFD) used to rank functions connected to user value, together with a focus group study, were presented to a validation focus group. The focus group studies take advantage of a group process to collect data for further analyses. The results describe factors considered by the participants to be important in the development of methods for requirements engineering in health care. Based on the findings, the content that, according to the users, an MDSS method should support is established.

  17. An Investigation of the Variety and Complexity of Statistical Methods Used in Current Internal Medicine Literature.

    Science.gov (United States)

    Narayanan, Roshni; Nugent, Rebecca; Nugent, Kenneth

    2015-10-01

    Accreditation Council for Graduate Medical Education guidelines require internal medicine residents to develop skills in the interpretation of medical literature and to understand the principles of research. A necessary component is the ability to understand the statistical methods used and their results, material that is not an in-depth focus of most medical school curricula and residency programs. Given the breadth and depth of the current medical literature and an increasing emphasis on complex, sophisticated statistical analyses, the statistical foundation and education necessary for residents are uncertain. We reviewed the statistical methods and terms used in 49 articles discussed at the journal club in the Department of Internal Medicine residency program at Texas Tech University between January 1, 2013 and June 30, 2013. We collected information on the study type and on the statistical methods used for summarizing and comparing samples, determining the relations between independent variables and dependent variables, and estimating models. We then identified the typical statistics education level at which each term or method is learned. A total of 14 articles came from the Journal of the American Medical Association Internal Medicine, 11 from the New England Journal of Medicine, 6 from the Annals of Internal Medicine, 5 from the Journal of the American Medical Association, and 13 from other journals. Twenty reported randomized controlled trials. Summary statistics included mean values (39 articles), category counts (38), and medians (28). Group comparisons were based on t tests (14 articles), χ2 tests (21), and nonparametric ranking tests (10). The relations between dependent and independent variables were analyzed with simple regression (6 articles), multivariate regression (11), and logistic regression (8). Nine studies reported odds ratios with 95% confidence intervals, and seven analyzed test performance using sensitivity and specificity calculations

  18. Viability and resilience of complex systems concepts, methods and case studies from ecology and society

    CERN Document Server

    Deffuant, Guillaume

    2011-01-01

    One common characteristic of a complex system is its ability to withstand major disturbances and the capacity to rebuild itself. Understanding how such systems demonstrate resilience by absorbing or recovering from major external perturbations requires both quantitative foundations and a multidisciplinary view of the topic. This book demonstrates how new methods can be used to identify the actions favouring the recovery from perturbations on a variety of examples including the dynamics of bacterial biofilms, grassland savannahs, language competition and Internet social networking sites. The reader is taken through an introduction to the idea of resilience and viability and shown the mathematical basis of the techniques used to analyse systems. The idea of individual or agent-based modelling of complex systems is introduced and related to analytically tractable approximations of such models. A set of case studies illustrates the use of the techniques in real applications, and the final section describes how on...

  19. Membranes linked by trans-SNARE complexes require lipids prone to non-bilayer structure for progression to fusion.

    Science.gov (United States)

    Zick, Michael; Stroupe, Christopher; Orr, Amy; Douville, Deborah; Wickner, William T

    2014-01-01

    Like other intracellular fusion events, the homotypic fusion of yeast vacuoles requires a Rab GTPase, a large Rab effector complex, SNARE proteins which can form a 4-helical bundle, and the SNARE disassembly chaperones Sec17p and Sec18p. In addition to these proteins, specific vacuole lipids are required for efficient fusion in vivo and with the purified organelle. Reconstitution of vacuole fusion with all purified components reveals that high SNARE levels can mask the requirement for a complex mixture of vacuole lipids. At lower, more physiological SNARE levels, neutral lipids with small headgroups that tend to form non-bilayer structures (phosphatidylethanolamine, diacylglycerol, and ergosterol) are essential. Membranes without these three lipids can dock and complete trans-SNARE pairing but cannot rearrange their lipids for fusion. DOI: http://dx.doi.org/10.7554/eLife.01879.001.

  20. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information correlated with the source is exploited as side information at the receiver. Complex processes in video encoding, such as motion-vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the coded original data through channel coding. We provide a performance evaluation of a low-density parity-check (LDPC) coding method in the AWGN channel.
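The essence of this encoder, transmitting only a channel-code syndrome and leaving the heavy decoding work to the receiver, can be shown in miniature. The parity-check matrix below is a toy example, not one from the paper:

```python
import numpy as np

# Illustrative syndrome-based (Wyner-Ziv style) compression: the encoder
# sends only the syndrome s = H x (mod 2) of the frame bits x; the decoder
# recovers x using its own side information (e.g. the previous frame).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)  # 6 source bits
s = H @ x % 2                                      # 3 transmitted syndrome bits

print(s)
```

The encoder's work is a sparse matrix-vector product over GF(2), which is why it suits the capsule's tight power budget; the iterative LDPC decoding, and any motion estimation, run on the receiver.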

  1. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and in observed data can be utilized to the maximal degree. (author). 1 ref

  2. Quantitative Nuclear Medicine Imaging: Concepts, Requirements and Methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-01-15

    The absolute quantification of radionuclide distribution has been a goal since the early days of nuclear medicine. Nevertheless, the apparent complexity and sometimes limited accuracy of these methods have prevented them from being widely used in important applications such as targeted radionuclide therapy or kinetic analysis. The intricacy of the effects degrading nuclear medicine images and the lack of availability of adequate methods to compensate for these effects have frequently been seen as insurmountable obstacles in the use of quantitative nuclear medicine in clinical institutions. In the last few decades, several research groups have consistently devoted their efforts to the filling of these gaps. As a result, many efficient methods are now available that make quantification a clinical reality, provided appropriate compensation tools are used. Despite these efforts, many clinical institutions still lack the knowledge and tools to adequately measure and estimate the accumulated activities in the human body, thereby using potentially outdated protocols and procedures. The purpose of the present publication is to review the current state of the art of image quantification and to provide medical physicists and other related professionals facing quantification tasks with a solid background of tools and methods. It describes and analyses the physical effects that degrade image quality and affect the accuracy of quantification, and describes methods to compensate for them in planar, single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. The fast paced development of the computational infrastructure, both hardware and software, has made drastic changes in the ways image quantification is now performed. The measuring equipment has evolved from the simple blind probes to planar and three dimensional imaging, supported by SPECT, PET and hybrid equipment. Methods of iterative reconstruction have been developed to allow for

  3. The Visual Orientation Memory of "Drosophila" Requires Foraging (PKG) Upstream of Ignorant (RSK2) in Ring Neurons of the Central Complex

    Science.gov (United States)

    Kuntz, Sara; Poeck, Burkhard; Sokolowski, Marla B.; Strauss, Roland

    2012-01-01

    Orientation and navigation in a complex environment requires path planning and recall to exert goal-driven behavior. Walking "Drosophila" flies possess a visual orientation memory for attractive targets which is localized in the central complex of the adult brain. Here we show that this type of working memory requires the cGMP-dependent protein…

  5. Proposal for Requirement Validation Criteria and Method Based on Actor Interaction

    Science.gov (United States)

    Hattori, Noboru; Yamamoto, Shuichiro; Ajisaka, Tsuneo; Kitani, Tsuyoshi

    We propose requirement validation criteria and a method based on the interaction between actors in an information system. We focus on the cyclical transitions of one actor's situation against another and clarify the observable stimuli and responses based on these transitions. Both actors' situations can be listed in a state transition table, which describes the observable stimuli or responses they send or receive. Examination of the interaction between both actors in the state transition tables enables us to detect missing or defective observable stimuli or responses. Typically, this method can be applied to the examination of the interaction between a resource managed by the information system and its user. As a case study, we analyzed 332 requirement defect reports from an actual system development project in Japan. We found a certain number of defects regarding missing or defective stimuli and responses that could have been detected with our proposed method had it been used in the requirement definition phase. This means that we can reach a more complete requirement definition with our proposed method.
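The cross-check described above can be sketched as a small consistency test between two actors' state transition tables. The table format, state names, and stimuli below are hypothetical illustrations, not the paper's notation:

```python
# Hypothetical sketch: verify that every stimulus one actor emits has a
# matching handler in the other actor's state transition table.
# Table format: (state, received_stimulus) -> (next_state, emitted_response)

system_table = {
    ("idle", "request"): ("busy", "ack"),
    ("busy", "cancel"): ("idle", "cancelled"),
}
user_table = {
    ("waiting", "ack"): ("working", None),
    # the user never handles "cancelled" -- a requirement defect
}

def missing_handlers(sender_table, receiver_table):
    sent = {resp for (_, _), (_, resp) in sender_table.items() if resp}
    handled = {stim for (_, stim) in receiver_table}
    return sent - handled

print(missing_handlers(system_table, user_table))
```

Running the check in both directions (system against user, and user against system) flags exactly the class of missing or defective stimuli and responses the case study counted.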

  6. Construction of Intelligence Knowledge Map for Complex Product Development

    Directory of Open Access Journals (Sweden)

    Yan-jie LV,

    2013-11-01

    Full Text Available Complex product design and development is an integrated discipline. Knowledge overload and inefficient knowledge searching have appeared with the rise of product complexity and the explosion of knowledge and information. To improve the efficiency of knowledge use, shorten the time and effort spent on knowledge screening, and avoid missing required knowledge, this paper proposes a method for constructing an intelligent knowledge map based on knowledge requirements and knowledge connections. The context information of the user is analyzed, and a method is given for acquiring knowledge requirements from this context information and the user's personal knowledge structure. The method derives the users' knowledge requirements, generates knowledge retrieval expressions to obtain the relevant knowledge points, and then constructs the intelligent knowledge map through multi-dimensional analysis. Knowledge related to the development of an aircraft landing gear is used as an example to verify the feasibility of the method.

  7. Spectroscopic methods for aqueous cyclodextrin inclusion complex binding measurement for 1,4-dioxane, chlorinated co-contaminants, and ozone

    Science.gov (United States)

    Khan, Naima A.; Johnson, Michael D.; Carroll, Kenneth C.

    2018-03-01

    Recalcitrant organic contaminants, such as 1,4-dioxane, typically require advanced oxidation process (AOP) oxidants, such as ozone (O3), for their complete mineralization during water treatment. Unfortunately, the use of AOPs can be limited by these oxidants' relatively high reactivities and short half-lives. These drawbacks can be minimized by partial encapsulation of the oxidants within a cyclodextrin cavity to form inclusion complexes. We determined the inclusion complexes of O3 and three common co-contaminants (trichloroethene, 1,1,1-trichloroethane, and 1,4-dioxane) as guest compounds within hydroxypropyl-β-cyclodextrin. Both direct (ultraviolet or UV) and competitive (fluorescence changes with 6-p-toluidine-2-naphthalenesulfonic acid as the probe) methods were used, which gave comparable results for the inclusion constants of these species. Impacts of changing pH and NaCl concentrations were also assessed. Binding constants increased with pH and with ionic strength, which was attributed to variations in guest compound solubility. The results illustrate the versatility of cyclodextrins for inclusion complexation with various types of compounds, binding measurement methods are applicable to a wide range of applications, and have implications for both extraction of contaminants and delivery of reagents for treatment of contaminants in wastewater or contaminated groundwater.
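Titration data of the kind described above are commonly reduced to a 1:1 inclusion constant via a Benesi-Hildebrand double-reciprocal plot. The sketch below fits synthetic isotherm data with assumed values; it illustrates the standard linearization, not the paper's measurements:

```python
import numpy as np

# Benesi-Hildebrand analysis for a 1:1 guest/cyclodextrin inclusion complex:
#   1/dA = 1/(dA_max * K * [CD]) + 1/dA_max
# so plotting 1/dA against 1/[CD] gives K = intercept / slope.

K_true, dA_max = 250.0, 0.80  # assumed "true" binding constant (M^-1) and signal change
cd = np.array([0.001, 0.002, 0.005, 0.01, 0.02])  # cyclodextrin concentrations (M)
dA = dA_max * K_true * cd / (1.0 + K_true * cd)   # synthetic 1:1 binding isotherm

slope, intercept = np.polyfit(1.0 / cd, 1.0 / dA, 1)
K_fit = intercept / slope
print(round(K_fit, 1))
```

The same fit applies whether dA is a direct UV absorbance change or the fluorescence change of a displaced probe, which is why the direct and competitive methods in the study can be compared on equal terms.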

  8. High-level inhibition of mitochondrial complexes III and IV is required to increase glutamate release from the nerve terminal

    Directory of Open Access Journals (Sweden)

    Kilbride Seán M

    2011-07-01

    Full Text Available Abstract Background The activities of mitochondrial complex III (ubiquinol-cytochrome c reductase, EC 1.10.2.2) and complex IV (cytochrome c oxidase, EC 1.9.3.1) are reduced by 30-70% in Huntington's disease and Alzheimer's disease, respectively, and are associated with excitotoxic cell death in these disorders. In this study, we investigated the control that complexes III and IV exert on glutamate release from the isolated nerve terminal. Results Inhibition of complex III activity by 60-90% was necessary for a major increase in the rate of Ca2+-independent glutamate release to occur from isolated nerve terminals (synaptosomes) depolarized with 4-aminopyridine or KCl. Similarly, an 85-90% inhibition of complex IV activity was required before a major increase in the rate of Ca2+-independent glutamate release from depolarized synaptosomes was observed. Inhibition of complex III and IV activities by ~60% and above was required before rates of glutamate efflux from polarized synaptosomes were increased. Conclusions These results suggest that nerve terminal mitochondria possess high reserves of complex III and IV activity and that high inhibition thresholds must be reached before excess glutamate is released from the nerve terminal. The implications of the results in the context of the relationship between electron transport chain enzyme deficiencies and excitotoxicity in neurodegenerative disorders are discussed.

  9. High-level inhibition of mitochondrial complexes III and IV is required to increase glutamate release from the nerve terminal

    LENUS (Irish Health Repository)

    Kilbride, Sean M

    2011-07-26

    Abstract Background The activities of mitochondrial complex III (ubiquinol-cytochrome c reductase, EC 1.10.2.2) and complex IV (cytochrome c oxidase EC 1.9.3.1) are reduced by 30-70% in Huntington's disease and Alzheimer's disease, respectively, and are associated with excitotoxic cell death in these disorders. In this study, we investigated the control that complexes III and IV exert on glutamate release from the isolated nerve terminal. Results Inhibition of complex III activity by 60-90% was necessary for a major increase in the rate of Ca2+-independent glutamate release to occur from isolated nerve terminals (synaptosomes) depolarized with 4-aminopyridine or KCl. Similarly, an 85-90% inhibition of complex IV activity was required before a major increase in the rate of Ca2+-independent glutamate release from depolarized synaptosomes was observed. Inhibition of complex III and IV activities by ~60% and above was required before rates of glutamate efflux from polarized synaptosomes were increased. Conclusions These results suggest that nerve terminal mitochondria possess high reserves of complex III and IV activity and that high inhibition thresholds must be reached before excess glutamate is released from the nerve terminal. The implications of the results in the context of the relationship between electron transport chain enzyme deficiencies and excitotoxicity in neurodegenerative disorders are discussed.

  10. EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST

    Directory of Open Access Journals (Sweden)

    C. H. Tu

    2012-07-01

    Full Text Available The prediction of species distribution has become a focus in ecology. For predicting a result more effectively and accurately, some novel methods have been proposed recently, like support vector machines (SVM) and maximum entropy (MAXENT). However, the high complexity of forests such as those in Taiwan makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in the complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), growing widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on the layers of altitude, slope, aspect, terrain position, and vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLC. We evaluated these models with two sets of independent samples from different sites, and examined the effect of forest complexity by changing the background sample size (BSZ). In the low-complexity case (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limited the habitat to areas close to the samples. Therefore, the MAXENT model was more applicable for predicting the species' potential habitat in the complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLC.
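
    The kappa values quoted above are Cohen's kappa for presence/absence predictions. As a minimal illustration (the counts below are made up, not the study's data), kappa can be computed from a 2×2 confusion matrix like this:

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a binary (presence/absence) confusion matrix."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                         # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)  # chance agreement on "present"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)   # chance agreement on "absent"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# 45 correctly predicted presences and 40 correct absences out of 100 samples
print(cohens_kappa(45, 5, 10, 40))  # 0.7
```

    A kappa of 0.87 versus 0.84, as reported above, therefore reflects agreement well beyond chance for both models, with only a modest gap between them.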

  11. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    International Nuclear Information System (INIS)

    Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David

    2006-01-01

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small-amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis).
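
    The surrogate data method compares a discriminating statistic between the data and randomized copies that preserve some null properties. A toy sketch under simplifying assumptions: LZ78 phrase counting stands in for algorithmic complexity, and shuffle surrogates stand in for the phase-randomized surrogates such studies typically use. None of this is the paper's exact pipeline.

```python
import math
import random

def lz78_phrases(bits):
    """Count phrases in an LZ78-style parse - a crude algorithmic-complexity proxy."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def binarize(series):
    """Encode a series as bits: 1 above the median, 0 otherwise."""
    median = sorted(series)[len(series) // 2]
    return "".join("1" if v > median else "0" for v in series)

def surrogate_rank(series, n_surrogates=99, seed=1):
    """Rank-order test: fraction of shuffled surrogates at least as 'simple'
    as the data. Small values suggest deterministic structure."""
    rng = random.Random(seed)
    observed = lz78_phrases(binarize(series))
    at_most = 0
    for _ in range(n_surrogates):
        shuffled = list(series)
        rng.shuffle(shuffled)
        if lz78_phrases(binarize(shuffled)) <= observed:
            at_most += 1
    return (at_most + 1) / (n_surrogates + 1)

# A noiseless periodic "signal" parses into far fewer phrases than its shuffles.
signal = [math.sin(0.3 * t) for t in range(400)]
```

    Shuffling destroys temporal order, so a deterministic series sits in the extreme tail of the surrogate distribution while pure noise does not.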

  12. Methods for forming complex oxidation reaction products including superconducting articles

    International Nuclear Information System (INIS)

    Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.

    1992-01-01

    This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in the step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product.

  13. Project risk management in complex petrochemical system

    Directory of Open Access Journals (Sweden)

    Kirin Snežana

    2012-01-01

    Full Text Available Investigation of risk in complex industrial systems, as well as evaluation of the main factors influencing the decision-making and implementation process, using a large petrochemical company as an example, has proved the importance of successful project risk management. This is even more pronounced when analyzing systems with a complex structure, i.e. with several organizational units. It has been shown that successful risk management requires modern methods, based on adequate application of statistical analysis methods.

  14. Network reliability analysis of complex systems using a non-simulation-based method

    International Nuclear Information System (INIS)

    Kim, Youngsuk; Kang, Won-Hee

    2013-01-01

    Civil infrastructures such as transportation, water supply, sewers, telecommunications, and electrical and gas networks often establish highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to provide proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN, to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
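
    The paper's RDA handles general multi-pair system events; the underlying idea can be illustrated with the classic pivotal decomposition (factoring) for exact two-terminal reliability: condition on one edge working (contract it) or failing (delete it) and recurse. This miniature is an assumption, not the RDA itself.

```python
def two_terminal_reliability(edges, s, t):
    """Exact probability that s and t are connected, given edges (u, v, p)
    that each work independently with probability p. Exponential worst case,
    fine for small networks."""
    if s == t:
        return 1.0
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    # Pivot edge failed: simply drop it.
    r_fail = two_terminal_reliability(rest, s, t)
    # Pivot edge works: contract v into u, discarding self-loops.
    merged = []
    for a, b, q in rest:
        a = u if a == v else a
        b = u if b == v else b
        if a != b:
            merged.append((a, b, q))
    ns = u if s == v else s
    nt = u if t == v else t
    r_work = two_terminal_reliability(merged, ns, nt)
    return p * r_work + (1 - p) * r_fail

# Classic 5-edge bridge network; with all availabilities 0.5 the
# s-t reliability works out to exactly 0.5.
bridge = [(0, 1, 0.5), (0, 2, 0.5), (1, 2, 0.5), (1, 3, 0.5), (2, 3, 0.5)]
```

    The RDA's contribution is to organize such decompositions so that far fewer subproblems are generated for large networks with many node pairs.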

  15. The γ-tubulin complex in Trypanosoma brucei: molecular composition, subunit interdependence and requirement for axonemal central pair protein assembly

    Science.gov (United States)

    Zhou, Qing; Li, Ziyin

    2015-01-01

    The γ-tubulin complex constitutes a key component of the microtubule-organizing center and nucleates microtubule assembly. This complex differs in complexity in different organisms: the budding yeast contains the γ-tubulin small complex (γTuSC) composed of γ-tubulin, GCP2 and GCP3, whereas animals contain the γ-tubulin ring complex (γTuRC) composed of γTuSC and three additional proteins, GCP4, GCP5 and GCP6. In Trypanosoma brucei, the composition of the γ-tubulin complex remains elusive, and it is not known whether it also regulates assembly of the subpellicular microtubules and the spindle microtubules. Here we report that the γ-tubulin complex in T. brucei is composed of γ-tubulin and three GCP proteins, GCP2-GCP4, and is primarily localized in the basal body throughout the cell cycle. Depletion of GCP2 and GCP3, but not GCP4, disrupted the axonemal central pair microtubules, but not the subpellicular microtubules and the spindle microtubules. Furthermore, we showed that the γTuSC is required for assembly of two central pair proteins and that γTuSC subunits are mutually required for stability. Together, these results identified an unusual γ-tubulin complex in T. brucei, uncovered an essential role of γTuSC in central pair protein assembly, and demonstrated the interdependence of individual γTuSC components for maintaining a stable complex. PMID:26224545

  16. A novel method of complex evaluation of meibomian glands morphological and functional state

    Directory of Open Access Journals (Sweden)

    V. N. Trubilin

    2014-01-01

    Full Text Available A novel method, biometry of the meibomian glands, that provides complex assessment of their morphological and functional state was developed. The results of complex examination (including meibomian glands biometry), correlation analysis data and clinical findings demonstrate a direct association between the objective signs of meibomian glands dysfunction (i.e., dysfunction by biomicroscopy, tear film break-up time / TBUT, symptomatic TBUT, compression testing), the subjective signs (patient's complaints) and the parameters of meibomian glands biometry. A high direct correlation between the biometrical index and the compression test result was revealed (p = 0.002, Spearman's rank correlation coefficient = 0.6644). Meibomian glands dysfunction is characterized by biometric parameter abnormalities, i.e., dilatation of meibomian glands orifices, decrease of the distance between meibomian glands orifices, and partial or total atrophy of meibomian glands (even up to gland collapse with its visual reduction and an increase of the distance between the glands). The suppression of the inflammatory process and the recovery of meibomian glands secretion improve the biometric parameters and result in the opening of meibomian glands orifices, liquefaction of clogs, evacuation of meibomian glands secretion, narrowing of meibomian glands orifices and an increase of the distance between them. The proposed method expands the armamentarium for diagnosing meibomian glands dysfunction and lipid-deficient dry eye. Meibomian glands biometry can be applied in specialized ophthalmological hospitals and outpatient departments. It is a simple procedure of short duration that does not require any special equipment or professional skills. Meibomian glands biometry makes it possible to prescribe pathogenically targeted therapy and to improve quality of life.

  17. Complex Hand Dexterity: A Review of Biomechanical Methods for Measuring Musical Performance

    Directory of Open Access Journals (Sweden)

    Cheryl Diane Metcalf

    2014-05-01

    Full Text Available Complex hand dexterity is fundamental to our interactions with the physical, social and cultural environment. Dexterity can be an expression of creativity and precision in a range of activities, including musical performance. Little is understood about complex hand dexterity or how virtuoso expertise is acquired, due to the versatility of movement combinations available to complete any given task. This has historically limited progress in the field because of difficulties in measuring movements of the hand. Recent developments in methods of motion capture and analysis mean it is now possible to explore the intricate movements of the hand and fingers. These methods offer insights into the neurophysiological mechanisms underpinning complex hand dexterity and motor learning. They also allow investigation into the key factors that contribute to injury, recovery and functional compensation. The application of such analytical techniques within musical performance provides a multidisciplinary framework for purposeful investigation into the process of learning and skill acquisition in instrumental performance. These highly skilled manual and cognitive tasks present the ultimate achievement in complex hand dexterity. This paper will review methods of assessing instrumental performance in music, focusing specifically on biomechanical measurement and the associated technical challenges faced when measuring highly dexterous activities.

  18. Scattering methods in complex fluids

    CERN Document Server

    Chen, Sow-Hsin

    2015-01-01

    Summarising recent research on the physics of complex liquids, this in-depth analysis examines the topic of complex liquids from a modern perspective, addressing experimental, computational and theoretical aspects of the field. Selecting only the most interesting contemporary developments in this rich field of research, the authors present multiple examples including aggregation, gel formation and glass transition, in systems undergoing percolation, at criticality, or in supercooled states. Connecting experiments and simulation with key theoretical principles, and covering numerous systems including micelles, micro-emulsions, biological systems, and cement pastes, this unique text is an invaluable resource for graduate students and researchers looking to explore and understand the expanding field of complex fluids.

  19. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
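
    The iterative root-finding step in the abstract can be sketched with Newton's method. For a polynomial given by its roots, p(z) = Π(z − r_i), the Newton update simplifies to z − 1/Σ 1/(z − r_i). The cube roots of unity below are purely illustrative; the patent's polynomials and value maps are not specified here.

```python
import cmath

def newton_root(z, roots, iters=60):
    """Follow Newton's iteration for p(z) = prod(z - r) from the point z and
    report which root of the list it converges to."""
    for _ in range(iters):
        if any(z == r for r in roots):
            break  # landed exactly on a root
        s = sum(1 / (z - r) for r in roots)  # p'(z) / p(z) in product form
        if s == 0:
            break  # critical point: Newton step undefined
        z -= 1 / s  # z - p(z)/p'(z)
    return min(roots, key=lambda r: abs(z - r))

# Illustrative polynomial: z**3 - 1, whose roots are the cube roots of unity.
cube_roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
```

    In the scheme described above, each file entry is a point in the complex plane, and the value attached to the root it converges to becomes the reconstructed entry; the basins of attraction partition the plane among the roots.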

  20. Comparison of association mapping methods in a complex pedigreed population

    DEFF Research Database (Denmark)

    Sahana, Goutam; Guldbrandtsen, Bernt; Janss, Luc

    2010-01-01

    to collect SNP signals in intervals, to avoid the scattering of a QTL signal over multiple neighboring SNPs. Methods not accounting for genetic background (full pedigree information) performed worse, and methods using haplotypes were considerably worse, with a high false-positive rate, probably due to the presence of low-frequency haplotypes. It was necessary to account for full relationships among individuals to avoid excess false discovery. Although the methods were tested on a cattle pedigree, the results are applicable to any population with a complex pedigree structure...

  1. Multiple domains of fission yeast Cdc19p (MCM2) are required for its association with the core MCM complex.

    Science.gov (United States)

    Sherman, D A; Pasion, S G; Forsburg, S L

    1998-07-01

    The members of the MCM protein family are essential eukaryotic DNA replication factors that form a six-member protein complex. In this study, we use antibodies to four MCM proteins to investigate the structure of and requirements for the formation of fission yeast MCM complexes in vivo, with particular regard to Cdc19p (MCM2). Gel filtration analysis shows that the MCM protein complexes are unstable and can be broken down to subcomplexes. Using coimmunoprecipitation, we find that Mis5p (MCM6) and Cdc21p (MCM4) are tightly associated with one another in a core complex with which Cdc19p loosely associates. Assembly of Cdc19p with the core depends upon Cdc21p. Interestingly, there is no obvious change in Cdc19p-containing MCM complexes through the cell cycle. Using a panel of Cdc19p mutants, we find that multiple domains of Cdc19p are required for MCM binding. These studies indicate that MCM complexes in fission yeast have distinct substructures, which may be relevant for function.

  2. Comparison of topotactic fluorination methods for complex oxide films

    Science.gov (United States)

    Moon, E. J.; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; Barbash, D.; May, S. J.

    2015-06-01

    We have investigated the synthesis of SrFeO3-αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  3. Feeding cells induced by phytoparasitic nematodes require γ-tubulin ring complex for microtubule reorganization.

    Directory of Open Access Journals (Sweden)

    Mohamed Youssef Banora

    2011-12-01

    Full Text Available Reorganization of the microtubule network is important for the fast isodiametric expansion of giant-feeding cells induced by root-knot nematodes. The efficiency of microtubule reorganization depends on the nucleation of new microtubules, their elongation rate and the activity of microtubule severing factors. New microtubules in plants are nucleated by cytoplasmic or microtubule-bound γ-tubulin ring complexes. Here we investigate the requirement of γ-tubulin complexes for giant feeding cell development using the interaction between Arabidopsis and Meloidogyne spp. as a model system. Immunocytochemical analyses demonstrate that γ-tubulin localizes to both the cortical cytoplasm and mitotic microtubule arrays of the giant cells, where it can associate with microtubules. The transcripts of two Arabidopsis γ-tubulin genes (TUBG1 and TUBG2) and two γ-tubulin complex protein genes (GCP3 and GCP4) are upregulated in galls. Electron microscopy demonstrates association of GCP3 and γ-tubulin as part of a complex in the cytoplasm of giant cells. Knockout of either or both γ-tubulin genes results in a gene dose-dependent alteration of the morphology of the feeding site and failure of nematode life cycle completion. We conclude that the γ-tubulin complex is essential for the control of microtubular network remodelling in the course of initiation and development of giant-feeding cells, and for the successful reproduction of nematodes in their plant hosts.

  4. The power of gene-based rare variant methods to detect disease-associated variation and test hypotheses about complex disease.

    Directory of Open Access Journals (Sweden)

    Loukas Moutsianas

    2015-04-01

    Full Text Available Genome and exome sequencing in large cohorts enables characterization of the role of rare variation in complex diseases. Success in this endeavor, however, requires investigators to test a diverse array of genetic hypotheses which differ in the number, frequency and effect sizes of underlying causal variants. In this study, we evaluated the power of gene-based association methods to interrogate such hypotheses, and examined the implications for study design. We developed a flexible simulation approach, using 1000 Genomes data, to (a) generate sequence variation at human genes in up to 10K case-control samples, and (b) quantify the statistical power of a panel of widely used gene-based association tests under a variety of allelic architectures, locus effect sizes, and significance thresholds. For loci explaining ~1% of phenotypic variance underlying a common dichotomous trait, we find that all methods have low absolute power to achieve exome-wide significance (~5-20% power at α = 2.5 × 10^-6) in 3K individuals; even in 10K samples, power is modest (~60%). The combined application of multiple methods increases sensitivity, but does so at the expense of a higher false positive rate. MiST, SKAT-O, and KBAC have the highest individual mean power across simulated datasets, but we observe wide architecture-dependent variability in the individual loci detected by each test, suggesting that inferences about disease architecture from analysis of sequencing studies can differ depending on which methods are used. Our results imply that tens of thousands of individuals, extensive functional annotation, or highly targeted hypothesis testing will be required to confidently detect or exclude rare variant signals at complex disease loci.
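
    A toy version of such a power calculation, using a simple burden-style z-test on simulated rare-allele counts. All parameters (sample sizes, variant counts, allele frequencies) are illustrative, and this single test is far simpler than the panel of methods evaluated in the study.

```python
import random
import statistics

def burden(rng, n_people, n_variants, freq):
    """Per-person rare-allele counts across n_variants diploid sites."""
    return [sum(1 for _ in range(2 * n_variants) if rng.random() < freq)
            for _ in range(n_people)]

def burden_test_power(freq_ctrl, freq_case, n=200, n_variants=10,
                      reps=100, alpha_z=1.96, seed=0):
    """Fraction of replicates in which a two-sample z-test on mean burden
    rejects at the given z threshold (estimated power)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        ctrl = burden(rng, n, n_variants, freq_ctrl)
        case = burden(rng, n, n_variants, freq_case)
        se = ((statistics.pvariance(ctrl) + statistics.pvariance(case)) / n) ** 0.5
        z = (statistics.fmean(case) - statistics.fmean(ctrl)) / se
        if abs(z) > alpha_z:
            hits += 1
    return hits / reps
```

    Running it with equal case and control frequencies estimates the type-I error rate instead of power, which is the usual sanity check on such simulators.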

  5. Minimum Energy Requirements in Complex Distillation Arrangements

    Energy Technology Data Exchange (ETDEWEB)

    Halvorsen, Ivar J

    2001-07-01

    Distillation is the most widely used industrial separation technology and distillation units are responsible for a significant part of the total heat consumption in the world's process industry. In this work we focus on directly (fully thermally) coupled column arrangements for separation of multicomponent mixtures. These systems are also denoted Petlyuk arrangements, where a particular implementation is the dividing wall column. Energy savings in the range of 20-40% have been reported with ternary feed mixtures. In addition to energy savings, such integrated units have also a potential for reduced capital cost, making them extra attractive. However, the industrial use has been limited, and difficulties in design and control have been reported as the main reasons. Minimum energy results have only been available for ternary feed mixtures and sharp product splits. This motivates further research in this area, and this thesis will hopefully give some contributions to better understanding of complex column systems. In the first part we derive the general analytic solution for minimum energy consumption in directly coupled columns for a multicomponent feed and any number of products. To our knowledge, this is a new contribution in the field. The basic assumptions are constant relative volatility, constant pressure and constant molar flows and the derivation is based on Underwood's classical methods. An important conclusion is that the minimum energy consumption in a complex directly integrated multi-product arrangement is the same as for the most difficult split between any pair of the specified products when we consider the performance of a conventional two-product column. We also present the Vmin-diagram, which is a simple graphical tool for visualisation of minimum energy related to feed distribution. The Vmin-diagram provides a simple mean to assess the detailed flow requirements for all parts of a complex directly coupled arrangement. 
The main purpose in the first
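
    Underwood's classical equations mentioned above can be sketched for the simplest case: a sharp split of the lightest component from a ternary feed, under the thesis's stated assumptions of constant relative volatility and constant molar flows. The multi-product generalization is the thesis's contribution and is not reproduced here.

```python
def underwood_vmin(alpha, z, q=1.0, F=1.0):
    """Minimum vapor flow for a sharp split of the lightest component from
    the rest. alpha: relative volatilities in descending order; z: feed mole
    fractions; q: feed liquid fraction; F: feed flow."""
    def feed_eq(theta):
        # Underwood feed equation: sum(alpha_i z_i / (alpha_i - theta)) = 1 - q
        return sum(a * x / (a - theta) for a, x in zip(alpha, z)) - (1.0 - q)

    # The active Underwood root lies strictly between alpha[0] and alpha[1];
    # bracket just inside the poles and bisect.
    lo, hi = alpha[1] + 1e-9, alpha[0] - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if feed_eq(lo) * feed_eq(mid) <= 0:
            hi = mid
        else:
            lo = mid
    theta = 0.5 * (lo + hi)
    # Sharp split: only the lightest component reaches the distillate.
    return alpha[0] * z[0] * F / (alpha[0] - theta)
```

    For an equimolar ternary feed with volatilities 4 : 2 : 1 and saturated liquid feed (q = 1), this gives a minimum vapor flow of roughly 1.07 per unit feed; the Vmin-diagram described above collects such values over all feed distributions.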


  7. Complexity Quantification for Overhead Transmission Line Emergency Repair Scheme via a Graph Entropy Method Improved with Petri Net and AHP Weighting Method

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2014-01-01

    Full Text Available According to the characteristics of emergency repair in overhead transmission line accidents, a complexity quantification method for emergency repair schemes is proposed based on the entropy method from software engineering, improved by using the group AHP (analytical hierarchical process) method and Petri nets. Firstly, an information structure chart model and a process control flowchart model are built with a Petri net. Then the impact factors on the complexity of the emergency repair scheme are quantified into corresponding entropy values. Finally, using the group AHP method, a weight coefficient is assigned to each entropy value before the overall entropy value for the whole emergency repair scheme is calculated. Comparing the group AHP weighting method with the average weighting method, experimental results for the former showed a stronger correlation between the quantified entropy values of complexity and the actual time consumed in repair, which indicates that the new method is more valid.
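
    The core aggregation step, per-factor entropy values combined through AHP-derived weights, can be sketched as follows. The weights and counts below are hypothetical, and the group-AHP derivation of the weights (pairwise comparison matrices, consistency checks) is not shown.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a discrete distribution given as counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def scheme_complexity(factor_counts, weights):
    """Overall scheme complexity as a weighted sum of per-factor entropies.
    The weights stand in for the group-AHP step (assumed given)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "AHP weights must sum to 1"
    return sum(w * shannon_entropy(c) for w, c in zip(weights, factor_counts))
```

    With two equally weighted factors, one over two equally likely states (1 bit) and one over four (2 bits), the overall value is 1.5 bits; a factor that AHP judges more important simply contributes its entropy with a larger weight.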

  8. Measuring the Complexity of Self-Organizing Traffic Lights

    Directory of Open Access Journals (Sweden)

    Darío Zubillaga

    2014-04-01

    Full Text Available We apply measures of complexity, emergence, and self-organization to an urban traffic model for comparing a traditional traffic-light coordination method with a self-organizing method in two scenarios: cyclic boundaries and non-orientable boundaries. We show that the measures are useful to identify and characterize different dynamical phases. It becomes clear that different operation regimes are required for different traffic demands. Thus, not only is traffic a non-stationary problem, requiring controllers to adapt constantly; controllers must also change drastically the complexity of their behavior depending on the demand. Based on our measures and extending Ashby’s law of requisite variety, we can say that the self-organizing method achieves an adaptability level comparable to that of a living system.
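
    The abstract does not spell out its measures; one common entropy-based formulation (emergence E as normalized Shannon entropy, self-organization S = 1 - E, and complexity C = 4ES, peaking between order and chaos) can be sketched as follows. Treating this as the paper's exact definition is an assumption.

```python
import math

def eso_measures(counts):
    """Emergence E, self-organization S, and complexity C = 4*E*S computed
    from the empirical distribution of observed states (given as counts)."""
    total = sum(counts)
    h = -sum(c / total * math.log2(c / total) for c in counts if c)
    e = h / math.log2(len(counts))  # normalized entropy in [0, 1]
    s = 1.0 - e
    return e, s, 4.0 * e * s
```

    A fully random system (uniform counts) gives E = 1 and C = 0, a frozen system gives S = 1 and C = 0, and C is maximal in between, which is what makes these measures useful for distinguishing traffic regimes.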

  9. A simple method for determining polymeric IgA-containing immune complexes.

    Science.gov (United States)

    Sancho, J; Egido, J; González, E

    1983-06-10

    A simplified assay to measure polymeric IgA immune complexes in biological fluids is described. The assay is based upon the specific binding of secretory component to polymeric IgA. In the first step, multimeric IgA (monomeric and polymeric) immune complexes are determined by the standard Raji cell assay. Secondly, labeled secretory component added to the assay is bound to polymeric IgA immune complexes previously fixed to Raji cells, but not to monomeric IgA immune complexes. To avoid false positives due to possible complement-fixing IgM immune complexes, prior IgM immunoadsorption is performed. Using anti-IgM antiserum coupled to CNBr-activated Sepharose 4B, this step is not time-consuming. Polymeric IgA has a low affinity constant and binds weakly to Raji cells, as Scatchard analysis of the data shows. Thus, polymeric IgA immune complexes do not bind to Raji cells directly through Fc receptors, but through complement breakdown products, as with IgG immune complexes. Using this method, we have been successful in detecting specific polymeric IgA immune complexes in patients with IgA nephropathy (Berger's disease) and alcoholic liver disease, as well as in normal subjects after meals of high protein content. This new, simple, rapid and reproducible assay might help to study the physiopathological role of polymeric IgA immune complexes in humans and animals.

  10. Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions

    International Nuclear Information System (INIS)

    Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor

    2004-01-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)

  11. A Method to Predict the Structure and Stability of RNA/RNA Complexes.

    Science.gov (United States)

    Xu, Xiaojun; Chen, Shi-Jie

    2016-01-01

    RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculation due to the conformational constraint and the intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy for the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.

  12. Determination of fuel irradiation parameters. Required accuracies and available methods

    International Nuclear Information System (INIS)

    Mas, P.

    1977-01-01

    This paper reviews the present status of the main methods used to determine the nuclear parameters of fuel irradiation in testing reactors (nuclear power, burn-up, ...). The different methods (theoretical or experimental) are reviewed: neutron measurements and calculations, gamma scanning, heat balance, ... . The required accuracies are reviewed: they are 3-5% on flux, fluences, nuclear power, burn-up and conversion factor. These required accuracies are compared with the accuracies actually achievable, which at the present time are of the order of 5-20% on these parameters.

  13. Determination of material irradiation parameters. Required accuracies and available methods

    International Nuclear Information System (INIS)

    Cerles, J.M.; Mas, P.

    1978-01-01

In this paper, the author reports the main methods used to determine the nuclear parameters of material irradiation in testing reactors (nuclear power, burn-up, fluxes, fluences, ...). The different methods (theoretical and experimental) are reviewed: neutronics measurements and calculations, gamma scanning, thermal balance, etc. The required accuracies are of 3-5% on fluxes, fluences, nuclear power, burn-up and conversion factor. These required accuracies are compared with the accuracies actually available, which at present are of the order of 5-20% for these parameters

  14. BRAND program complex for neutron-physical experiment simulation by the Monte-Carlo method

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.

    1984-01-01

The capabilities of the BRAND program complex for simulating neutron and γ-radiation transport by the Monte Carlo method are briefly described. The complex includes the following modules: a geometry module, a source module, a detector module, and modules that simulate the particle direction vector after an interaction and the free path. The complex is written in the FORTRAN language and implemented on the BESM-6 computer
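
The free-path module's core operation can be illustrated with inverse-CDF sampling of the exponential flight-length distribution, a generic Monte Carlo particle-transport technique (the cross-section value here is made up):

```python
import math
import random

def sample_free_path(sigma_t, rng):
    """Draw a free path length s from p(s) = sigma_t * exp(-sigma_t * s)
    by inverting the cumulative distribution: s = -ln(1 - u) / sigma_t."""
    return -math.log(1.0 - rng.random()) / sigma_t

rng = random.Random(0)
sigma_t = 2.0  # hypothetical total macroscopic cross section, 1/cm
samples = [sample_free_path(sigma_t, rng) for _ in range(100_000)]
mean_path = sum(samples) / len(samples)  # should approach 1/sigma_t = 0.5 cm
```

Averaged over many histories, the sampled mean free path converges to 1/sigma_t, which is a quick sanity check for any such sampling module.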

  15. Structure of the automated educational-methodical complex for technical disciplines

    Directory of Open Access Journals (Sweden)

    Вячеслав Михайлович Дмитриев

    2010-12-01

    Full Text Available In this article, the problem of automating and informatizing the process of training students is posed and solved on the basis of the introduced system-organizational forms, which have collectively come to be known as educational-methodical complexes for a discipline.

  16. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the HEVC encoding tools' compression efficiency and computational complexity. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  17. Functional Mobility Testing: A Novel Method to Create Suit Design Requirements

    Science.gov (United States)

    England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar L.

    2008-01-01

This study was performed to aid in the creation of design requirements for the next generation of space suits that more accurately describe the level of mobility necessary for a suited crewmember, through the use of an innovative methodology utilizing functional mobility. A novel method was utilized involving the collection of kinematic data while 20 subjects (10 male, 10 female) performed pertinent functional tasks that will be required of a suited crewmember during various phases of a lunar mission. These tasks were selected based on relevance and criticality from a larger list of tasks that may be carried out by the crew. Kinematic data were processed through Vicon BodyBuilder software to calculate joint angles for the ankle, knee, hip, torso, shoulder, elbow, and wrist. Maximum functional mobility was consistently lower than maximum isolated mobility. This study suggests that conventional methods for establishing design requirements for human-systems interfaces based on maximal isolated joint capabilities may overestimate the required mobility. Additionally, this method provides a valuable means of evaluating systems created from these requirements by comparing the mobility available in a new spacesuit, or the mobility required to use a new piece of hardware, to this newly established database of functional mobility.

  18. The problem of complex eigensystems in the semianalytical solution for advancement of time in solute transport simulations: a new method using real arithmetic

    Science.gov (United States)

    Umari, Amjad M.J.; Gorelick, Steven M.

    1986-01-01

    In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
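
The key observation can be illustrated on a hypothetical 2x2 coefficient matrix in real canonical form for a complex conjugate eigenpair a ± bi: the matrix exponential that advances the concentration field to any future time can then be evaluated entirely in real arithmetic. This toy does not reproduce the paper's general algorithm; it only shows why no complex numbers are needed for a conjugate pair:

```python
import math

# Hypothetical 2x2 matrix with complex conjugate eigenvalues a +/- b*i
# (here a = -2, b = 1), written in its real canonical form.
a, b = -2.0, 1.0
M = [[a, b],
     [-b, a]]

def advance(c0, t):
    """Semianalytical advancement c(t) = exp(M t) c0 in real arithmetic:
    exp(M t) = e^{a t} (cos(b t) I + sin(b t) J), with J = [[0,1],[-1,0]]."""
    g = math.exp(a * t)
    co, si = math.cos(b * t), math.sin(b * t)
    x0, y0 = c0
    return (g * (co * x0 + si * y0), g * (-si * x0 + co * y0))

def euler_reference(c0, t, steps=200_000):
    """Brute-force explicit Euler integration of dc/dt = M c for comparison;
    unlike advance(), it must march through all intermediate times."""
    dt = t / steps
    x, y = c0
    for _ in range(steps):
        x, y = x + dt * (M[0][0] * x + M[0][1] * y), \
               y + dt * (M[1][0] * x + M[1][1] * y)
    return (x, y)

exact = advance((1.0, 0.5), 1.5)
approx = euler_reference((1.0, 0.5), 1.5)
```

The semianalytical result jumps directly to t = 1.5 in a handful of real operations, while the time-stepping reference needs hundreds of thousands of intermediate steps to match it.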

  19. Comparison of topotactic fluorination methods for complex oxide films

    Energy Technology Data Exchange (ETDEWEB)

    Moon, E. J., E-mail: em582@drexel.edu; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; May, S. J., E-mail: smay@coe.drexel.edu [Department of Materials Science and Engineering, Drexel University, Philadelphia, Pennsylvania 19104 (United States); Barbash, D. [Centralized Research Facilities, Drexel University, Philadelphia, Pennsylvania 19104 (United States)

    2015-06-01

    We have investigated the synthesis of SrFeO{sub 3−α}F{sub γ} (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO{sub 2.5} films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  20. Comparison of topotactic fluorination methods for complex oxide films

    Directory of Open Access Journals (Sweden)

    E. J. Moon

    2015-06-01

    Full Text Available We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  1. Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes

    KAUST Repository

    Piatek, Marek J.

    2013-07-12

    Background:Initiation of transcription is essential for most of the cellular responses to environmental conditions and for cell and tissue specificity. This process is regulated through numerous proteins, their ligands and mutual interactions, as well as interactions with DNA. The key such regulatory proteins are transcription factors (TFs) and transcription co-factors (TcoFs). TcoFs are important since they modulate the transcription initiation process through interaction with TFs. In eukaryotes, transcription requires that TFs form different protein complexes with various nuclear proteins. To better understand transcription regulation, it is important to know the functional class of proteins interacting with TFs during transcription initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only sequence composition of the interacting proteins, the functional class of human TF binding partners to be (i) TF, (ii) TcoF, or (iii) other nuclear protein. This allows for complementing the annotation of the currently known pool of nuclear proteins. 
Since only the knowledge of protein sequences is required in addition to protein interaction, the method should be easily applicable to many species. Results: Based on experimentally validated interactions between human TFs with different TFs, TcoFs and other nuclear proteins, our two classification systems (implemented as a web-based application) achieve high accuracies in distinguishing TFs and TcoFs from other nuclear proteins, and TFs from TcoFs, respectively. Conclusion: As demonstrated, given the fact that two proteins are capable of forming direct physical interactions and using only information about their sequence composition, we have developed a completely new method for predicting the functional class of TF-interacting protein partners.
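
The sequence-composition idea can be sketched as a toy nearest-centroid classifier over 20-dimensional amino-acid composition vectors. The sequences, labels, and classifier below are fabricated for illustration; the study's actual feature set and classification systems are richer:

```python
AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino-acid composition vector (residue fractions)."""
    return [seq.count(a) / len(seq) for a in AA]

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(seq, centroids):
    """Assign the class whose composition centroid is nearest (squared
    Euclidean distance)."""
    v = composition(seq)
    dist = lambda c: sum((x - y) ** 2 for x, y in zip(v, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Fabricated training partners: basic-residue-rich "TF-like" sequences
# versus hydrophobic "other nuclear protein" sequences.
train = {
    "TF":    ["KRKRKAKK", "RKKARKRK", "KKRRKAKR"],
    "other": ["LVLVLALL", "VLLAVLVL", "LLVALVLA"],
}
centroids = {label: centroid([composition(s) for s in seqs])
             for label, seqs in train.items()}
```

Because only residue fractions are used, the same pipeline works unchanged on sequences from any species, which is the portability the abstract emphasizes.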

  2. X-ray-enhanced cancer cell migration requires the linker of nucleoskeleton and cytoskeleton complex.

    Science.gov (United States)

    Imaizumi, Hiromasa; Sato, Katsutoshi; Nishihara, Asuka; Minami, Kazumasa; Koizumi, Masahiko; Matsuura, Nariaki; Hieda, Miki

    2018-04-01

    The linker of nucleoskeleton and cytoskeleton (LINC) complex is a multifunctional protein complex that is involved in various processes at the nuclear envelope, including nuclear migration, mechanotransduction, chromatin tethering and DNA damage response. We recently showed that a nuclear envelope protein, Sad1 and UNC84 domain protein 1 (SUN1), a component of the LINC complex, has a critical function in cell migration. Although ionizing radiation activates cell migration and invasion in vivo and in vitro, the underlying molecular mechanism remains unknown. Here, we examined the involvement of the LINC complex in radiation-enhanced cell migration and invasion. A sublethal dose of X-ray radiation promoted human breast cancer MDA-MB-231 cell migration and invasion, whereas carbon ion beam radiation suppressed these processes in a dose-dependent manner. Depletion of SUN1 and SUN2 significantly suppressed X-ray-enhanced cell migration and invasion. Moreover, depletion or overexpression of each SUN1 splicing variant revealed that SUN1_888 containing 888 amino acids of SUN1 but not SUN1_916 was required for X-ray-enhanced migration and invasion. In addition, the results suggested that X-ray irradiation affected the expression level of SUN1 splicing variants and a SUN protein binding partner, nesprins. Taken together, our findings supported that the LINC complex contributed to photon-enhanced cell migration and invasion. © 2018 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.

  3. Measurement of complex permittivity of composite materials using waveguide method

    NARCIS (Netherlands)

    Tereshchenko, O.V.; Buesink, Frederik Johannes Karel; Leferink, Frank Bernardus Johannes

    2011-01-01

Complex dielectric permittivity of 4 different composite materials has been measured using the transmission-line method. A waveguide fixture in L, S, C and X band was used for the measurements. Measurement accuracy is influenced by air gaps between the test fixtures and the materials tested. One of the

  4. Generation of new solutions of the stationary axisymmetric Einstein equations by a double complex function method

    International Nuclear Information System (INIS)

    Zhong, Z.

    1985-01-01

    A new approach to the solution of certain differential equations, the double complex function method, is developed, combining ordinary complex numbers and hyperbolic complex numbers. This method is applied to the theory of stationary axisymmetric Einstein equations in general relativity. A family of exact double solutions, double transformation groups, and n-soliton double solutions are obtained
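
A hyperbolic (split-)complex number x + y·j obeys j² = +1 rather than −1; a minimal sketch of the arithmetic that such a double-number scheme combines with ordinary complex numbers (the class name and API are our own, chosen for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Double:
    """Hyperbolic complex number x + y*j with j*j = +1."""
    x: float
    y: float

    def __add__(self, o):
        return Double(self.x + o.x, self.y + o.y)

    def __mul__(self, o):
        # (x1 + y1 j)(x2 + y2 j) = (x1 x2 + y1 y2) + (x1 y2 + y1 x2) j,
        # since j*j = +1 (contrast with i*i = -1).
        return Double(self.x * o.x + self.y * o.y,
                      self.x * o.y + self.y * o.x)

    def modulus2(self):
        """Indefinite squared modulus x^2 - y^2, the invariant that plays
        the role |z|^2 plays for ordinary complex numbers."""
        return self.x ** 2 - self.y ** 2

p = Double(2.0, 1.0) * Double(3.0, 2.0)
```

The indefinite modulus is multiplicative, just as |z1·z2| = |z1|·|z2| for ordinary complex numbers, which is what makes the double-number algebra useful for generating solution transformations.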

  5. A heuristic method for simulating open-data of arbitrary complexity that can be used to compare and evaluate machine learning methods.

    Science.gov (United States)

    Moore, Jason H; Shestov, Maksim; Schmitt, Peter; Olson, Randal S

    2018-01-01

A central challenge of developing and evaluating artificial intelligence and machine learning methods for regression and classification is access to data that illuminates the strengths and weaknesses of different methods. Open data plays an important role in this process by making it easy for computational researchers to access real data for this purpose. Genomics has in some instances taken a leading role in the open data effort, starting with DNA microarrays. While real data from experimental and observational studies is necessary for developing computational methods, it is not sufficient, because it is not possible to know what the ground truth is in real data. Real data must be accompanied by simulated data in which the balance between signal and noise is known and can be directly evaluated. Unfortunately, there is a lack of methods and software for simulating data with the kind of complexity found in real biological and biomedical systems. We present here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating complex biological and biomedical data. Further, we introduce new methods for developing simulation models that generate data specifically allowing discrimination between different machine learning methods.
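
The kind of known-ground-truth simulation described here can be caricatured with a generator whose class label is a purely epistatic (XOR) interaction between two features, padded with irrelevant features and a controlled label-noise rate. The function and its parameters are illustrative, not HIBACHI's:

```python
import random

def simulate_epistasis(n, noise_rate=0.1, n_noise_features=3, seed=42):
    """Simulate cases where the class is the XOR of two binary features,
    so neither informative feature carries any marginal signal on its own;
    the signal exists only in their interaction."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        x1, x2 = rng.randint(0, 1), rng.randint(0, 1)
        extras = [rng.randint(0, 1) for _ in range(n_noise_features)]
        y = x1 ^ x2
        if rng.random() < noise_rate:
            y = 1 - y  # flip the label at a known, controlled rate
        rows.append(([x1, x2] + extras, y))
    return rows

clean = simulate_epistasis(500, noise_rate=0.0)
```

Because the generating model is known exactly, any learner's ability to recover the interaction (and its robustness to the noise rate) can be scored against ground truth, which is impossible with real data.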

  6. Software Safety Analysis of Digital Protection System Requirements Using a Qualitative Formal Method

    International Nuclear Information System (INIS)

    Lee, Jang-Soo; Kwon, Kee-Choon; Cha, Sung-Deok

    2004-01-01

    The safety analysis of requirements is a key problem area in the development of software for the digital protection systems of a nuclear power plant. When specifying requirements for software of the digital protection systems and conducting safety analysis, engineers find that requirements are often known only in qualitative terms and that existing fault-tree analysis techniques provide little guidance on formulating and evaluating potential failure modes. A framework for the requirements engineering process is proposed that consists of a qualitative method for requirements specification, called the qualitative formal method (QFM), and a safety analysis method for the requirements based on causality information, called the causal requirements safety analysis (CRSA). CRSA is a technique that qualitatively evaluates causal relationships between software faults and physical hazards. This technique, extending the qualitative formal method process and utilizing information captured in the state trajectory, provides specific guidelines on how to identify failure modes and the relationship among them. The QFM and CRSA processes are described using shutdown system 2 of the Wolsong nuclear power plants as the digital protection system example

  7. Sleep deprivation in parents caring for children with complex needs at home: a mixed methods systematic review.

    Science.gov (United States)

    McCann, Damhnat; Bull, Rosalind; Winzenberg, Tania

    2015-02-01

    A significant number of children with a range of complex conditions and health care needs are being cared for by parents in the home environment. This mixed methods systematic review aimed to determine the amount of sleep obtained by these parents and the extent to which the child-related overnight health or care needs affected parental sleep experience and daily functioning. Summary statistics were not able to be determined due to the heterogeneity of included studies, but the common themes that emerged are that parents of children with complex needs experience sleep deprivation that can be both relentless and draining and affects the parents themselves and their relationships. The degree of sleep deprivation varies by diagnosis, but a key contributing factor is the need for parents to be vigilant at night. Of particular importance to health care professionals is the inadequate overnight support provided to parents of children with complex needs, potentially placing these parents at risk of poorer health outcomes associated with sleep deprivation and disturbance. This needs to be addressed to enable parents to remain well and continue to provide the care that their child and family require. © The Author(s) 2014.

  8. Complexation of biological ligands with lanthanides(III) for MRI: structure, thermodynamics and methods; Complexation des cations lanthanides trivalents par des ligands d'origine biologique pour l'IRM: Structure, thermodynamique et methodes

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, C

    2006-07-15

New cyclic ligands derived from sugars and amino acids form a scaffold carrying a coordination sphere of oxygen atoms suitable for complexing Ln(III) ions. In spite of their rather low molecular weights, the complexes display surprisingly high relaxivity values, especially at high field. The ACX and BCX ligands, which are acidic derivatives of modified cyclodextrins, form mono- and bimetallic complexes with Ln(III). The LnACX and LnBCX complexes show affinities towards Ln(III) similar to those of tri-acidic ligands. In the bimetallic Lu2ACX complex, the cations are deeply embedded in the cavity of the ligand, as shown by the X-ray structure. In aqueous solution, the number of water molecules coordinated to the cation in the LnACX complex depends on the nature and concentration of the alkali ions of the supporting electrolyte, as shown by luminescence and relaxometric measurements. There is only one water molecule coordinated in the LnBCX complex, which enables us to highlight an important second-sphere contribution to relaxivity. The NMR study of the RAFT peptidic ligand shows the complexation of Ln(III), with an affinity similar to that of natural ligands derived from calmodulin. The relaxometric study also shows an important second-sphere contribution to relaxivity. To better understand the intricate molecular factors affecting relaxivity, we developed new relaxometric methods based on probe solutes. These methods allow us to determine the charge of the complex, weak affinity constants, transmetallation constants, and the electronic relaxation rate. (author)

  9. 5 CFR 610.404 - Requirement for time-accounting method.

    Science.gov (United States)

    2010-01-01

    ... REGULATIONS HOURS OF DUTY Flexible and Compressed Work Schedules § 610.404 Requirement for time-accounting method. An agency that authorizes a flexible work schedule or a compressed work schedule under this...

  10. Complex transformation method and resonances in one-body quantum systems

    International Nuclear Information System (INIS)

    Sigal, I.M.

    1984-01-01

    We develop a new spectral deformation method in order to treat the resonance problem in one-body systems. Our result on the meromorphic continuation of matrix elements of the resolvent across the continuous spectrum overlaps considerably with an earlier result of E. Balslev [B] but our method is much simpler and more convenient, we believe, in applications. It is inspired by the local distortion technique of Nuttall-Thomas-Babbitt-Balslev, further developed in [B] but patterned on the complex scaling method of Combes and Balslev. The method is applicable to the multicenter problems in which each potential can be represented, roughly speaking, as a sum of exponentially decaying and dilation-analytic, spherically symmetric parts

  11. Model for peace support operations: an overview of the ICT and interoperability requirements

    CSIR Research Space (South Africa)

    Leenen, L

    2009-03-01

    Full Text Available requires a reciprocal interdependence among these various elements, and this necessitates complex coordination and a great demand for ongoing and accurate communication (Chisholm 1986). Higher technological complexity requires higher levels... interoperability requirements thereof. Such methods, when fully developed, give the military planner the ability to rapidly assess the requirements as circumstances change. From interviews with SANDF staff (Ross 2007), we gathered that the SANDF planning...

  12. Effective teaching methods in higher education: requirements and barriers

    Directory of Open Access Journals (Sweden)

    NAHID SHIRANI BIDABADI

    2016-10-01

    Full Text Available Introduction: Teaching is one of the main components of educational planning, which is a key factor in conducting educational plans. Despite the importance of good teaching, the outcomes are far from ideal. The present qualitative study aimed to investigate effective teaching in higher education in Iran based on the experiences of the best professors in the country and the best local professors of Isfahan University of Technology. Methods: This qualitative content analysis study was conducted through purposeful sampling. Semi-structured interviews were conducted with ten faculty members (3 of them from among the best professors in the country and 7 from the best local professors). Content analysis was performed with MAXQDA software. The codes, categories and themes were explored through an inductive process that began from semantic units or direct quotations and moved to general themes. Results: According to the results of this study, the best teaching approach is a mixed method (student-centered together with teacher-centered), plus educational planning and prior preparation. However, teachers who want to teach using this method confront certain barriers and requirements; some of these requirements concern professors' behavior and some concern professors' outlook. There are also some major barriers, some of which are associated with professors' practice and others with laws and regulations. Implications of these findings for teacher preparation in education are discussed. Conclusion: The present study illustrated that a good teaching method helps students question their preconceptions and motivates them to learn, by putting them in a situation in which they come to see themselves as the authors of answers and as the agents of responsibility for change. But teaching through this method has some barriers and requirements. To achieve effective teaching, the faculty members of the universities

  13. A fluorescence anisotropy method for measuring protein concentration in complex cell culture media.

    Science.gov (United States)

    Groza, Radu Constantin; Calvet, Amandine; Ryder, Alan G

    2014-04-22

    The rapid, quantitative analysis of the complex cell culture media used in biopharmaceutical manufacturing is of critical importance. Requirements for cell culture media composition profiling, or changes in specific analyte concentrations (e.g. amino acids in the media or product protein in the bioprocess broth) often necessitate the use of complicated analytical methods and extensive sample handling. Rapid spectroscopic methods like multi-dimensional fluorescence (MDF) spectroscopy have been successfully applied for the routine determination of compositional changes in cell culture media and bioprocess broths. Quantifying macromolecules in cell culture media is a specific challenge as there is a need to implement measurements rapidly on the prepared media. However, the use of standard fluorescence spectroscopy is complicated by the emission overlap from many media components. Here, we demonstrate how combining anisotropy measurements with standard total synchronous fluorescence spectroscopy (TSFS) provides a rapid, accurate quantitation method for cell culture media. Anisotropy provides emission resolution between large and small fluorophores while TSFS provides a robust measurement space. Model cell culture media was prepared using yeastolate (2.5 mg mL(-1)) spiked with bovine serum albumin (0 to 5 mg mL(-1)). Using this method, protein emission is clearly discriminated from background yeastolate emission, allowing for accurate bovine serum albumin (BSA) quantification over a 0.1 to 4.0 mg mL(-1) range with a limit of detection (LOD) of 13.8 μg mL(-1). Copyright © 2014. Published by Elsevier B.V.
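
The physical quantity being exploited is the steady-state anisotropy; a minimal sketch of its defining formula and of the Perrin relation that makes a large, slowly tumbling protein like BSA stand out against small media fluorophores (the lifetime and correlation-time numbers are illustrative, not measured values from this study):

```python
def anisotropy(i_parallel, i_perpendicular):
    """Steady-state fluorescence anisotropy r = (I_par - I_perp) /
    (I_par + 2 * I_perp)."""
    return (i_parallel - i_perpendicular) / (i_parallel + 2 * i_perpendicular)

def perrin(r0, lifetime_ns, rot_corr_ns):
    """Perrin equation r = r0 / (1 + tau/theta): slow rotation (large
    rotational correlation time theta) keeps r close to the limit r0."""
    return r0 / (1 + lifetime_ns / rot_corr_ns)

# Illustrative numbers: a protein-bound fluorophore tumbles slowly,
# a free small-molecule fluorophore tumbles fast.
r_protein = perrin(0.4, lifetime_ns=4.0, rot_corr_ns=40.0)
r_small = perrin(0.4, lifetime_ns=4.0, rot_corr_ns=0.2)
```

The large gap between the two r values is what lets anisotropy separate protein emission from the overlapping emission of small yeastolate components.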

  14. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, and missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  15. Rapid methods for jugular bleeding of dogs requiring one technician.

    Science.gov (United States)

    Frisk, C S; Richardson, M R

    1979-06-01

    Two methods were used to collect blood from the jugular vein of dogs. In both techniques, only one technician was required. A rope with a slip knot was placed around the base of the neck to assist in restraint and act as a tourniquet for the vein. The technician used one hand to restrain the dog by the muzzle and position the head. The other hand was used for collecting the sample. One of the methods could be accomplished with the dog in its cage. The bleeding techniques were rapid, requiring approximately 1 minute per dog.

  16. Assessment of exposure to the Penicillium glabrum complex in cork industry using complementing methods.

    Science.gov (United States)

    Viegas, Carla; Sabino, Raquel; Botelho, Daniel; dos Santos, Mateus; Gomes, Anita Quintal

    2015-09-01

    Cork oak is the second most dominant forest species in Portugal and makes this country the world leader in cork export. Occupational exposure to Chrysonilia sitophila and the Penicillium glabrum complex in cork industry is common, and the latter fungus is associated with suberosis. However, as conventional methods seem to underestimate its presence in occupational environments, the aim of our study was to see whether information obtained by polymerase chain reaction (PCR), a molecular-based method, can complement conventional findings and give a better insight into occupational exposure of cork industry workers. We assessed fungal contamination with the P. glabrum complex in three cork manufacturing plants in the outskirts of Lisbon using both conventional and molecular methods. Conventional culturing failed to detect the fungus at six sampling sites in which PCR did detect it. This confirms our assumption that the use of complementing methods can provide information for a more accurate assessment of occupational exposure to the P. glabrum complex in cork industry.

  17. Measurement methods on the complexity of network

    Institute of Scientific and Technical Information of China (English)

    LIN Lin; DING Gang; CHEN Guo-song

    2010-01-01

Based on the size of a network and the number of paths in the network, we proposed a model of the topology complexity of a network to measure its topology complexity. Based on analyses of the effects of the number of pieces of equipment, the types of equipment and the processing time of the nodes on the complexity of an equipment-constrained network, a complexity model of equipment-constrained networks was constructed to measure the integrated complexity of such networks. The algorithms for the two models were also developed. An automatic generator of random single-label networks was developed to test the models. The results show that the models can correctly evaluate the topology complexity and the integrated complexity of the networks.
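
The "number of paths" ingredient of such a topology-complexity model can be sketched by counting simple source-to-sink paths in a small directed acyclic network; the combining formula below is our own illustration, not the paper's metric:

```python
import math

def count_paths(graph, src, dst):
    """Count distinct paths from src to dst in a DAG given as an
    adjacency dict (node -> list of successor nodes)."""
    if src == dst:
        return 1
    return sum(count_paths(graph, nxt, dst) for nxt in graph.get(src, []))

def topology_complexity(graph, src, dst):
    """Illustrative metric: network size scaled by the log of the
    source-to-sink path count (a hypothetical combination)."""
    return len(graph) * math.log(1 + count_paths(graph, src, dst))

# A small four-node network with two parallel routes from 'a' to 'd'.
net = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
```

Adding a parallel route raises the path count and hence the score, which matches the intuition that redundant routing makes a topology harder to analyze.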

  18. Studies on the complexation of diclofenac sodium with β-cyclodextrin: Influence of method of preparation

    Science.gov (United States)

    Das, Subhraseema; Subuddhi, Usharani

    2015-11-01

Inclusion complexes of diclofenac sodium (DS) with β-cyclodextrin (β-CD) were prepared in order to improve the solubility, dissolution and oral bioavailability of the poorly water-soluble drug. The effect of the method of preparation of the DS/β-CD inclusion complexes (ICs) was investigated. The ICs were prepared by microwave irradiation and also by the conventional methods of kneading, co-precipitation and freeze drying. Though the freeze-drying method is usually regarded as the gold standard among the conventional methods, its long processing time limits its utility. Microwave irradiation accomplishes the process in a very short span of time and is a more environmentally benign method. Better efficacy of the microwaved inclusion product (MW) was observed in terms of dissolution, antimicrobial activity and antibiofilm properties of the drug. Thus microwave irradiation can be utilized as an improved, time-saving and cost-effective method for the generation of DS/β-CD inclusion complexes.

  19. The relationship between the Wigner-Weyl kinetic formalism and the complex geometrical optics method

    OpenAIRE

    Maj, Omar

    2004-01-01

    The relationship between two different asymptotic techniques developed in order to describe the propagation of waves beyond the standard geometrical optics approximation, namely, the Wigner-Weyl kinetic formalism and the complex geometrical optics method, is addressed. More specifically, a solution of the wave kinetic equation, relevant to the Wigner-Weyl formalism, is obtained which yields the same wavefield intensity as the complex geometrical optics method. Such a relationship is also disc...

  20. Simultaneous analysis of qualitative parameters of solid fuel using complex neutron gamma method

    International Nuclear Information System (INIS)

    Dombrovskij, V.P.; Ajtsev, N.I.; Ryashchikov, V.I.; Frolov, V.K.

    1983-01-01

A study was made of a complex neutron gamma method for the simultaneous analysis of the carbon content, ash content and humidity of solid fuel from the gamma radiation of inelastic fast-neutron scattering and the radiative capture of thermal neutrons. The metrological characteristics of pulsed and stationary neutron gamma methods for determining the qualitative parameters of solid fuel were analyzed, taking coke breeze as an example. Optimal energy ranges for gamma radiation detection (2-8 MeV) were determined. The advantages of using a pulsed neutron generator for the complex analysis of the qualitative parameters of solid fuel in large masses were shown

  1. The Effect of Pressure and Temperature on Separation of Free Gadolinium(III) From Gd-DTPA Complex by Nanofiltration-Complexation Method

    Science.gov (United States)

    Rahayu, Iman; Anggraeni, Anni; Ukun, MSS; Bahti, Husein H.

    2017-05-01

    Nowadays, rare earth elements are widely used in industry and medicine. One of them is gadolinium, whose Gd-DTPA complex is used as a contrast agent in magnetic resonance imaging (MRI) diagnostics to increase the visual contrast between normal and diseased tissue. Although the stability of a given complex may be high, the complexation step may not go to completion, so free gadolinium(III) can remain alongside the complex compound. This is a concern because of the toxicity of gadolinium(III) in the human body, so it is necessary to separate free gadolinium(III) from the Gd-DTPA complex by nanofiltration-complexation. The method of this study is to complex Gd2O3 with the DTPA ligand by reflux and to separate the Gd-DTPA complex from free gadolinium(III) with a nanofiltration membrane at various pressures (2, 3, 4, 5, 6 bar) and temperatures (25, 30, 35, 40 °C), determining the flux and rejection. The results show that at higher pressures and temperatures the permeation flux increases and the ion rejection decreases, and a free gadolinium(III) rejection of up to 86.26% was obtained.

  2. An image overall complexity evaluation method based on LSD line detection

    Science.gov (United States)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

    In the man-made world, both urban traffic roads and engineered buildings contain many linear features. Research on image complexity based on linear information has therefore become an important direction in the digital image processing field. This paper detects the straight-line information in an image and uses the straight lines as parameter indices to establish a quantitative and accurate mathematical relationship. We use the LSD line detection algorithm, which has a good straight-line detection effect, to detect the lines, and classify the detected lines using an expert consultation strategy. A neural network is then used to train the weight coefficients of the indices, and the image complexity is calculated with the resulting complexity model. The experimental results show that the proposed method is effective: the number of straight lines in the image and their degree of dispersion, uniformity and so on all affect the complexity of the image.
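The weighted line-based complexity model described in this abstract can be sketched as follows. The choice of indices (line count, midpoint dispersion, orientation variance as a uniformity proxy) and the weights are illustrative assumptions: the paper trains its weight coefficients with a neural network and does not list them here.

```python
import math

def line_complexity(segments, weights=(0.5, 0.3, 0.2)):
    """Toy complexity score from detected line segments (x1, y1, x2, y2).

    Indices (weights are illustrative, not the trained coefficients):
      n      - number of segments
      spread - dispersion of segment midpoints around their centroid
      angvar - variance of segment orientations (uniformity proxy)
    """
    if not segments:
        return 0.0
    n = len(segments)
    mids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in segments]
    cx = sum(m[0] for m in mids) / n
    cy = sum(m[1] for m in mids) / n
    spread = math.sqrt(sum((mx - cx) ** 2 + (my - cy) ** 2 for mx, my in mids) / n)
    angles = [math.atan2(y2 - y1, x2 - x1) for x1, y1, x2, y2 in segments]
    mean_a = sum(angles) / n
    angvar = sum((a - mean_a) ** 2 for a in angles) / n
    w1, w2, w3 = weights
    return w1 * n + w2 * spread + w3 * angvar
```

In practice the segments would come from an LSD detector run on the image; more numerous, more dispersed and less uniformly oriented lines all raise the score, matching the qualitative conclusion above.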

  3. Biocoordination chemistry. pH-metry titration method during study of biometal complexing with bioligands

    International Nuclear Information System (INIS)

    Dobrynina, N.A.

    1992-01-01

    The position of bioinorganic chemistry within the natural sciences, as well as the relations between bioinorganic and biocoordination chemistry, was considered. The content of chemical elements in the geosphere and biosphere was analyzed. Characteristic features of biometal complexing with bioligands were pointed out. By way of example, complex equilibria in solution were studied by the method of pH-metric titration using mathematical simulation. The advantages of combining methods when studying biosystems were emphasized

  4. A direct algebraic method applied to obtain complex solutions of some nonlinear partial differential equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using some exact solutions of an auxiliary ordinary differential equation, a direct algebraic method is described to construct the exact complex solutions for nonlinear partial differential equations. The method is implemented for the NLS equation, a new Hamiltonian amplitude equation, the coupled Schrodinger-KdV equations and the Hirota-Maccari equations. New exact complex solutions are obtained.

  5. A new sub-equation method applied to obtain exact travelling wave solutions of some complex nonlinear equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using new coupled Riccati equations, a direct algebraic method, previously applied to obtain exact travelling wave solutions of some complex nonlinear equations, is improved. The exact travelling wave solutions of the complex KdV, Boussinesq and Klein-Gordon equations are then investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.

  6. 40 CFR 63.344 - Performance test requirements and test methods.

    Science.gov (United States)

    2010-07-01

    ... electroplating tanks or chromium anodizing tanks. The sampling time and sample volume for each run of Methods 306... Chromium Anodizing Tanks § 63.344 Performance test requirements and test methods. (a) Performance test... Emissions From Decorative and Hard Chromium Electroplating and Anodizing Operations,” appendix A of this...

  7. Fluid leadership: inviting diverse inputs to address complex problems

    OpenAIRE

    Moir, Sylvia

    2016-01-01

    Approved for public release; distribution is unlimited. History is replete with examples of misapplied leadership strategies. When singular methods are used to solve multifaceted problems, negative results are often the consequence. Complex issues in a complex environment require complex perspectives; the homeland security enterprise (HSE) needs leaders who can adapt their leadership styles to emerging environments. Furthermore, the diverse agencies within the HSE must work togeth...

  8. The complex nature of mixed farming systems requires multidimensional actions supported by integrative research and development efforts

    DEFF Research Database (Denmark)

    González-García, E; Gourdine, J L; Alexandre, G

    2012-01-01

    the requirement for a change in research strategies and initiatives through the development of a complex but necessary multi-/inter-/trans-disciplinary teamwork spirit. We stress as essential the collaboration and active participation of local and regional actors, stakeholders and end-users in the identification...

  9. Modern methods of surveyor observations in opencast mining under complex hydrogeological conditions.

    Science.gov (United States)

    Usoltseva, L. A.; Lushpei, V. P.; Mursin, VA

    2017-10-01

    The article considers the possibility of linking modern methods of mine surveying for the safety of open-pit mining operations to improve industrial safety in the Primorsky Territory, as well as their use in the educational process. Industrial safety in the management of surface mining depends largely on the assessment methods applied and on the methods used to evaluate the stability of pit walls and dump slopes under complex mining and hydrogeological conditions.

  10. The dynein regulatory complex is required for ciliary motility and otolith biogenesis in the inner ear.

    Science.gov (United States)

    Colantonio, Jessica R; Vermot, Julien; Wu, David; Langenbacher, Adam D; Fraser, Scott; Chen, Jau-Nian; Hill, Kent L

    2009-01-08

    In teleosts, proper balance and hearing depend on mechanical sensors in the inner ear. These sensors include actin-based microvilli and microtubule-based cilia that extend from the surface of sensory hair cells and attach to biomineralized 'ear stones' (or otoliths). Otolith number, size and placement are under strict developmental control, but the mechanisms that ensure otolith assembly atop specific cells of the sensory epithelium are unclear. Here we demonstrate that cilia motility is required for normal otolith assembly and localization. Using in vivo video microscopy, we show that motile tether cilia at opposite poles of the otic vesicle create fluid vortices that attract otolith precursor particles, thereby biasing an otherwise random distribution to direct localized otolith seeding on tether cilia. Independent knockdown of subunits for the dynein regulatory complex and outer-arm dynein disrupt cilia motility, leading to defective otolith biogenesis. These results demonstrate a requirement for the dynein regulatory complex in vertebrates and show that cilia-driven flow is a key epigenetic factor in controlling otolith biomineralization.

  11. Fractional Complex Transform and exp-Function Methods for Fractional Differential Equations

    Directory of Open Access Journals (Sweden)

    Ahmet Bekir

    2013-01-01

    Full Text Available The exp-function method is presented for finding the exact solutions of nonlinear fractional equations. The fractional complex transform is used to convert fractional differential equations into ordinary differential equations, for which new solutions are constructed. The fractional derivatives are described in Jumarie's modified Riemann-Liouville sense. We apply the exp-function method to both time- and space-fractional nonlinear differential equations. As a result, some new exact solutions for them are successfully established.

  12. Identifying Hierarchical and Overlapping Protein Complexes Based on Essential Protein-Protein Interactions and “Seed-Expanding” Method

    Directory of Open Access Journals (Sweden)

    Jun Ren

    2014-01-01

    Full Text Available Much evidence has demonstrated that protein complexes are overlapping and hierarchically organized in PPI networks. Meanwhile, the large size of PPI networks demands complex detection methods with low time complexity. Up to now, few methods can quickly identify overlapping and hierarchical protein complexes in a PPI network. In this paper, a novel method, called MCSE, is proposed based on λ-modules and “seed-expanding.” First, it chooses seeds as essential PPIs or edges with high edge clustering values. Then, it identifies protein complexes by expanding each seed to a λ-module. MCSE is suitable for large PPI networks because of its low time complexity. MCSE identifies overlapping protein complexes naturally because a protein can be visited from different seeds. MCSE uses the parameter λ_th to control the range of seed expansion and can detect a hierarchical organization of protein complexes by tuning the value of λ_th. Experimental results on S. cerevisiae show that this hierarchical organization is similar to that of known complexes in the MIPS database. The experimental results also show that MCSE outperforms previous competing algorithms, such as CPM, CMC, Core-Attachment, Dpclus, HC-PIN, MCL, and NFC, in terms of functional enrichment and matching with known protein complexes.
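The seed-expanding idea sketched in this abstract can be illustrated with a simplified toy implementation. This is not the published MCSE algorithm: the edge clustering value and the density-based admission rule below are common simplifications, and the `lam` threshold merely plays the role of λ_th.

```python
def seed_expand_clusters(edges, lam=0.6):
    """Toy seed-expanding clustering on an undirected graph.

    Seeds are edges ranked by an edge clustering value (shared neighbours
    relative to the smaller endpoint degree); each seed grows into a module
    by admitting neighbours connected to at least `lam` of the module.
    Overlap arises naturally because modules can share nodes.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def ecc(u, v):
        shared = len(adj[u] & adj[v])
        return (shared + 1) / min(len(adj[u]), len(adj[v]))

    seeds = sorted(edges, key=lambda e: ecc(*e), reverse=True)
    clusters = []
    for u, v in seeds:
        module = {u, v}
        grew = True
        while grew:
            grew = False
            frontier = set().union(*(adj[n] for n in module)) - module
            for cand in sorted(frontier):  # sorted for deterministic output
                if len(adj[cand] & module) / len(module) >= lam:
                    module.add(cand)
                    grew = True
        if module not in clusters:
            clusters.append(module)
    return clusters
```

On a toy graph made of two triangles sharing node `c`, the sketch recovers the two cliques as overlapping modules, mirroring how a protein shared by two complexes can be reached from different seeds.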

  13. Effective Teaching Methods in Higher Education: Requirements and Barriers.

    Science.gov (United States)

    Shirani Bidabadi, Nahid; Nasr Isfahani, Ahmmadreza; Rouhollahi, Amir; Khalili, Roya

    2016-10-01

    Teaching is one of the main components of educational planning, which is a key factor in carrying out educational plans. Despite the importance of good teaching, the outcomes are far from ideal. The present qualitative study aimed to investigate effective teaching in higher education in Iran based on the experiences of the best professors in the country and the best local professors of Isfahan University of Technology. This qualitative content analysis study was conducted through purposeful sampling. Semi-structured interviews were conducted with ten faculty members (3 of them from the best professors in the country and 7 from the best local professors). Content analysis was performed with MAXQDA software. The codes, categories and themes were explored through an inductive process that began from semantic units or direct quotations and moved to general themes. According to the results of this study, the best teaching approach is the mixed method (student-centered together with teacher-centered) plus educational planning and prior preparation. But teachers who teach with this method confront certain barriers and requirements; some of these requirements are prerequisites in professors' behavior and some in professors' outlook. There are also some major barriers, some of which are associated with the professors' practice and others with laws and regulations. Implications of these findings for teacher preparation in education are discussed. The present study illustrated that a good teaching method helps the students to question their preconceptions and motivates them to learn, by putting them in a situation in which they come to see themselves as the authors of answers and as the agents of responsibility for change. But teaching through this method has some barriers and requirements. To achieve effective teaching, the faculty members of universities should be aware of these barriers and requirements as a way to

  14. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples

  15. Purohit's spectrophotometric method for determination of stability constants of complexes using Job's curves

    International Nuclear Information System (INIS)

    Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.

    1999-01-01

    A spectrophotometric method for determining stability constants using Job's curves has been developed. Using this method, the stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For comparison, the stability constants were also determined using Harvey and Manning's method; the values obtained by the two methods compare well. This new method has been named Purohit's method. (author)
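As background for the Job's-curve analysis the method builds on: in a continuous-variation (Job) plot, the corrected response of an ML_n complex peaks at ligand mole fraction n/(n+1), so the stoichiometry can be read off the curve maximum. A minimal sketch of that standard step (this is not Purohit's full stability-constant treatment, only the plot analysis it starts from):

```python
def job_stoichiometry(mole_fractions, responses):
    """Estimate the metal:ligand ratio n from a Job plot.

    For an ML_n complex the corrected response peaks at x_max = n/(n+1),
    hence n = x_max / (1 - x_max).
    """
    # Locate the mole fraction at which the response is maximal
    x_max = max(zip(responses, mole_fractions))[1]
    return x_max / (1 - x_max)
```

For a 1:1 complex the synthetic Job curve y = x(1 - x) peaks at x = 0.5, and the helper returns n = 1.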

  16. Learning with Generalization Capability by Kernel Methods of Bounded Complexity

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2005-01-01

    Roč. 21, č. 3 (2005), s. 350-367 ISSN 0885-064X R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : supervised learning * generalization * model complexity * kernel methods * minimization of regularized empirical errors * upper bounds on rates of approximate optimization Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2005

  17. Stress Intensity Factor for Interface Cracks in Bimaterials Using Complex Variable Meshless Manifold Method

    Directory of Open Access Journals (Sweden)

    Hongfen Gao

    2014-01-01

    Full Text Available This paper describes the application of the complex variable meshless manifold method (CVMMM to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.

  18. Development of an Evaluation Method for the Design Complexity of Computer-Based Displays

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Kang, Hyun Gook; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The importance of the design of human-machine interfaces (HMIs) for human performance and the safety of process industries has been recognized for many decades. Especially in the case of nuclear power plants (NPPs), HMIs have significant implications for safety because poor HMIs can impair the decision-making ability of human operators. In order to support and increase the decision-making ability of human operators, advanced HMIs based on up-to-date computer technology are provided. Human operators in an advanced main control room (MCR) acquire the information required for the operation of NPPs through video display units (VDUs) and a large display panel (LDP). These computer-based displays contain a huge amount of information and present it in a variety of formats compared to those of a conventional MCR. For example, these displays contain more display elements such as abbreviations, labels, icons, symbols, coding, etc. As computer-based displays contain more information, their complexity becomes greater due to the reduced distinctiveness of each display element. A greater understanding is emerging about the effectiveness of computer-based display designs, including how distinctively display elements should be designed. This study covers the early phase in the development of an evaluation method for the design complexity of computer-based displays. To this end, a series of existing studies were reviewed to suggest an appropriate concept for addressing this problem

  19. A method of reconstructing complex stratigraphic surfaces with multitype fault constraints

    Science.gov (United States)

    Deng, Shi-Wu; Jia, Yu; Yao, Xing-Miao; Liu, Zhi-Ning

    2017-06-01

    The construction of complex stratigraphic surfaces is widely employed in many fields, such as petroleum exploration, geological modeling, and geological structure analysis. It also serves as an important foundation for data visualization and visual analysis in these fields. Existing surface construction methods have several deficiencies and face various difficulties, such as the presence of multitype faults and the roughness of the resulting surfaces. In this paper, a surface modeling method that uses geometric partial differential equations (PDEs) is introduced for the construction of stratigraphic surfaces. It effectively solves the problem of surface roughness caused by the irregularity of the stratigraphic data distribution. To cope with the presence of multitype complex faults, a two-way projection algorithm between three-dimensional space and a two-dimensional plane is proposed. Using this algorithm, a unified method based on geometric PDEs is developed for dealing with multitype faults. Moreover, the corresponding geometric PDE is derived, and an algorithm based on an evolutionary solution is developed. Tests of the proposed algorithm on real data verify its computational efficiency and its ability to handle irregular data distributions. In particular, it can reconstruct faulted surfaces, especially those with overthrust faults.

  20. Complexation of biological ligands with lanthanides(III) for MRI: Structure, thermodynamic and methods; Complexation des cations lanthanides trivalents par des ligands d'origine biologique pour l'IRM: Structure, thermodynamique et methodes

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, C

    2006-07-15

    New cyclic ligands derived from sugars and amino acids form a scaffold carrying a coordination sphere of oxygen atoms suitable for complexing Ln(III) ions. In spite of their rather low molecular weights, the complexes display surprisingly high relaxivity values, especially at high field. The ACX and BCX ligands, acidic derivatives of modified α- and β-cyclodextrins, form mono- and bimetallic complexes with Ln(III). The LnACX and LnBCX complexes show affinities towards Ln(III) similar to those of triacidic ligands. In the bimetallic Lu2ACX complex, the cations are deeply embedded in the cavity of the ligand, as shown by the X-ray structure. In aqueous solution, the number of water molecules coordinated to the cation in the LnACX complex depends on the nature and concentration of the alkali ions of the supporting electrolyte, as shown by luminescence and relaxometric measurements. Only one water molecule is coordinated in the LnBCX complex, which enabled us to highlight an important second-sphere contribution to relaxivity. The NMR study of the RAFT peptidic ligand shows the complexation of Ln(III), with an affinity similar to those of natural ligands derived from calmodulin. The relaxometric study also shows an important second-sphere contribution to relaxivity. To better understand the intricate molecular factors affecting relaxivity, we developed new relaxometric methods based on probe solutes. These methods allow us to determine the charge of the complex, weak affinity constants, transmetallation constants, and the electronic relaxation rate. (author)

  1. Evaluating a complex system-wide intervention using the difference in differences method: the Delivering Choice Programme.

    Science.gov (United States)

    Round, Jeff; Drake, Robyn; Kendall, Edward; Addicott, Rachael; Agelopoulos, Nicky; Jones, Louise

    2015-03-01

    We report the use of difference in differences (DiD) methodology to evaluate a complex, system-wide healthcare intervention. We use the worked example of evaluating the Marie Curie Delivering Choice Programme (DCP) for advanced illness in a large urban healthcare economy. DiD was selected because a randomised controlled trial was not feasible. The method allows for a before-and-after comparison of changes in an intervention site with a matched control site. This enables analysts to estimate the effect of the intervention in the absence of a local control. Any policy, seasonal or other confounding effects over the test period are assumed to have occurred in a balanced way at both sites. Data were obtained from primary care trusts. Outcomes were place of death, inpatient admissions, length of stay and costs. Small changes were identified between pre- and post-DCP outcomes in the intervention site. The proportion of home deaths and the median cost increased slightly, while the number of admissions per patient and the average length of stay per admission decreased slightly. None of these changes was statistically significant. Effect estimates were limited by the small numbers accessing new services and by selection bias in the sample population and comparator site. In evaluating the effect of a complex healthcare intervention, the choice of analysis method and output measures is crucial. Alternatives to randomised controlled trials may be required for evaluating large-scale complex interventions, and the DiD approach is suitable, subject to careful selection of measured outputs and control population. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
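For point estimates, the before/after comparison across matched sites described above reduces to the standard DiD formula: the change at the intervention site minus the change at the control site. A minimal sketch with hypothetical outcome values (not data from the DCP evaluation):

```python
def did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Difference-in-differences point estimate from group outcome lists.

    Estimate = (mean change at intervention site) - (mean change at control
    site); shared policy/seasonal effects cancel under the parallel-trends
    assumption.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post_treat) - mean(pre_treat)) - (mean(post_ctrl) - mean(pre_ctrl))

# Hypothetical: intervention site rises 10 -> 14, control rises 10 -> 12,
# so the estimated intervention effect is +2.
effect = did([10, 10], [14, 14], [10, 10], [12, 12])
```

In practice the estimate would come from a regression with site, period and interaction terms, which also yields standard errors for the significance tests mentioned in the abstract.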

  2. Training requirements and responsibilities for the Buried Waste Integrated Demonstration at the Radioactive Waste Management Complex

    International Nuclear Information System (INIS)

    Vega, H.G.; French, S.B.; Rick, D.L.

    1992-09-01

    The Buried Waste Integrated Demonstration (BWID) is scheduled to conduct intrusive (hydropunch screening tests, bore hole installation, soil sampling, etc.) and nonintrusive (geophysical surveys) studies at the Radioactive Waste Management Complex (RWMC). These studies and activities will be limited to specific locations at the RWMC. The duration of these activities will vary, but most tasks are not expected to exceed 90 days. The BWID personnel requested that the Waste Management Operational Support Group establish the training requirements and training responsibilities for BWID personnel and BWID subcontractor personnel. This document specifies these training requirements and responsibilities. While the responsibilities of BWID and the RWMC are, in general, defined in the interface agreement, the training elements are based on regulatory requirements, DOE orders, DOE-ID guidance, state law, and the nature of the work to be performed

  3. Energy conserving numerical methods for the computation of complex vortical flows

    Science.gov (United States)

    Allaneau, Yves

    One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some consider that as their size becomes smaller and smaller, it will be increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists have taken inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, biomimetic design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study the motion of its wings precisely. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0, when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to an adequate numerical scheme, a high-fidelity solution requires many degrees of freedom to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogorov scale. Capturing these eddies requires a mesh on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our

  4. Identifying and prioritizing customer requirements from tractor production by QFD method

    Directory of Open Access Journals (Sweden)

    H Taghizadeh

    2017-05-01

    Full Text Available Introduction Discovering and understanding customer needs and expectations are important factors in customer satisfaction and play a vital role in keeping a company competitive; meeting these needs, including the quality of the products or services delivered to customers, is critical to designing a successful product. Quality Function Deployment (QFD) is a technique for studying the demands and needs of customers that places particular emphasis on the customer's interests. The QFD method implements various tools and techniques for reaching qualitative goals, but its main tool is the house of quality diagram. The Analytic Hierarchy Process (AHP) is a well-known MADM method based on pairwise comparisons, used to determine the priority of the factors under study. Given the effectiveness of QFD in explicating customers' demands and obtaining customer satisfaction, the research question is: how can QFD explicate the real demands and requirements of customers for the final tractor product, and how are these demands and requirements prioritized from the customers' viewpoint? Accordingly, the aim of this study was to identify and prioritize the customer requirements for Massey Ferguson (MF) 285 tractor production at the Iran Tractor Manufacturing Company using Student's t-test, AHP and QFD methods. Materials and Methods The research method was descriptive, and the statistical population included all tractor customers of the company from March 2011 to March 2015. The statistical sample size, determined with the Cochran index, was 171. Moreover, the opinions of 20 experts were considered in determining the product's technical requirements. Literature
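The AHP step mentioned in this abstract derives priority weights from a pairwise comparison matrix; a common approximation uses row geometric means, with a consistency check against Saaty's random index. A sketch with an illustrative 3x3 matrix (hypothetical judgments, not the study's actual expert data):

```python
import math

# Saaty's random consistency index, indexed by matrix size n
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(matrix):
    """Priority weights via row geometric means, plus the consistency ratio."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    w = [g / total for g in gmeans]
    # Approximate lambda_max from (A w) / w, averaged over rows
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RI[n] if RI.get(n, 0.0) > 0 else 0.0
    return w, cr

# Hypothetical pairwise judgments for three customer requirements
A = [[1, 2, 4],
     [1 / 2, 1, 2],
     [1 / 4, 1 / 2, 1]]
weights, cr = ahp_weights(A)
```

A consistency ratio below about 0.1 is conventionally taken to mean the judgments are acceptably consistent; the perfectly consistent matrix above yields CR ≈ 0 and weights of roughly 0.57, 0.29 and 0.14.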

  5. Glycosaminoglycan-resistant and pH-sensitive lipid-coated DNA complexes produced by detergent removal method.

    Science.gov (United States)

    Lehtinen, Julia; Hyvönen, Zanna; Subrizi, Astrid; Bunjes, Heike; Urtti, Arto

    2008-10-21

    Cationic polymers are efficient gene delivery vectors in in vitro conditions, but these carriers can fail in vivo due to interactions with extracellular polyanions, i.e. glycosaminoglycans (GAG). The aim of this study was to develop a stable gene delivery vector that is activated at the acidic endosomal pH. Cationic DNA/PEI complexes were coated by 1,2-dioleylphosphatidylethanolamine (DOPE) and cholesteryl hemisuccinate (CHEMS) (3:2 mol/mol) using two coating methods: detergent removal and mixing with liposomes prepared by ethanol injection. Only detergent removal produced lipid-coated DNA complexes that were stable against GAGs, but were membrane active at low pH towards endosome mimicking liposomes. In relation to the low cellular uptake of the coated complexes, their transfection efficacy was relatively high. PEGylation of the coated complexes increased their cellular uptake but reduced the pH-sensitivity. Detergent removal was thus a superior method for the production of stable, but acid activatable, lipid-coated DNA complexes.

  6. Detection of circulating immune complexes in hepatitis by means of a new method employing /sup 125/I-antibody. Circulating immune complexes in hepatitis

    Energy Technology Data Exchange (ETDEWEB)

    Fresco, G F [Genoa Univ. (Italy). Dept. of Internal Medicine

    1978-06-01

    A new RIA method for the detection of circulating immune complexes and antibodies arising in the course of viral hepatitis is described. It involves the use of /sup 125/I-labeled antibodies and allows for the use of immune complex-coated polypropylene tubes. This simple and sensitive procedure takes into account the possibility that immune complexes may be adsorbed onto the surface of polypropylene tubes during the period in which the serum remains there.

  7. Computer-Aided Identification and Validation of Privacy Requirements

    Directory of Open Access Journals (Sweden)

    Rene Meis

    2016-05-01

    Full Text Available Privacy is a software quality that is closely related to security. The main difference is that security properties aim at the protection of assets that are crucial for the considered system, whereas privacy aims at the protection of personal data that are processed by the system. The identification of privacy protection needs in complex systems is a hard and error-prone task. Stakeholders whose personal data are processed might be overlooked, or the sensitivity and the need for protection of the personal data might be underestimated. The later in the development process personal data and the needs to protect them are identified, the more expensive it is to fix these issues, because the needed changes to the system-to-be often affect many functionalities. In this paper, we present a systematic method to identify the privacy needs of a software system based on a set of functional requirements by extending the problem-based privacy analysis (ProPAn) method. Our method is tool-supported and automated where possible to reduce the effort that has to be spent on the privacy analysis, which is especially important when considering complex systems. The contribution of this paper is a semi-automatic method to identify the relevant privacy requirements for a software-to-be based on its functional requirements. The considered privacy requirements address all dimensions of privacy that are relevant for software development. As our method is based solely on the functional requirements of the system-to-be, we enable users of our method to identify the privacy protection needs that have to be addressed by the software-to-be at an early stage of development. As an initial evaluation of our method, we show its applicability on a small electronic health system scenario.

  8. The NDUFB6 subunit of the mitochondrial respiratory chain complex I is required for electron transfer activity: A proof of principle study on stable and controlled RNA interference in human cell lines

    International Nuclear Information System (INIS)

    Loublier, Sandrine; Bayot, Aurelien; Rak, Malgorzata; El-Khoury, Riyad; Benit, Paule; Rustin, Pierre

    2011-01-01

    Highlights: → NDUFB6 is required for activity of mitochondrial complex I in human cell lines. → Lentivirus-based RNA interference results in frequent off-target insertions. → Flp-In recombinase-mediated miRNA insertion allows gene-specific extinction. -- Abstract: The molecular bases of inherited deficiencies of mitochondrial respiratory chain complex I are still unknown in a high proportion of patients. Among the 45 subunits making up this large complex, more than half have unknown functions. Understanding the function of these subunits would contribute to our knowledge of mitochondrial physiology but might also reveal that some of these subunits are not required for the catalytic activity of the complex. A direct consequence of this finding would be a reduction in the number of candidate genes to be sequenced in patients with decreased complex I activity. In this study, we tested two different methods to stably silence complex I subunits in cultured cells. We first found that lentivirus-mediated shRNA expression frequently resulted in the unpredicted extinction of additional genes besides the targeted ones. This can be ascribed to uncontrolled insertions of genetic material in the genome of the host cell. This approach thus appeared inappropriate for studying the unknown functions of a gene. Next, we found it possible to specifically silence a CI subunit gene by direct insertion of a miRNA targeting CI subunits into a Flp site (HEK293 Flp-In cells). Using this strategy we unambiguously demonstrated that the NDUFB6 subunit is required for complex I activity, and defined conditions suitable for undertaking a systematic and stable extinction of the different supernumerary subunits in human cells.

  9. The NDUFB6 subunit of the mitochondrial respiratory chain complex I is required for electron transfer activity: A proof of principle study on stable and controlled RNA interference in human cell lines

    Energy Technology Data Exchange (ETDEWEB)

    Loublier, Sandrine; Bayot, Aurelien; Rak, Malgorzata; El-Khoury, Riyad; Benit, Paule [Inserm U676, Hopital Robert Debre, F-75019 Paris (France); Universite Paris 7, Faculte de medecine Denis Diderot, IFR02 Paris (France); Rustin, Pierre, E-mail: pierre.rustin@inserm.fr [Inserm U676, Hopital Robert Debre, F-75019 Paris (France); Universite Paris 7, Faculte de medecine Denis Diderot, IFR02 Paris (France)

    2011-10-22

    Highlights: → NDUFB6 is required for activity of mitochondrial complex I in human cell lines. → Lentivirus-based RNA interference results in frequent off-target insertions. → Flp-In recombinase-mediated miRNA insertion allows gene-specific extinction. -- Abstract: The molecular bases of inherited deficiencies of mitochondrial respiratory chain complex I are still unknown in a high proportion of patients. Among the 45 subunits making up this large complex, more than half have unknown functions. Understanding the function of these subunits would contribute to our knowledge of mitochondrial physiology but might also reveal that some of these subunits are not required for the catalytic activity of the complex. A direct consequence of this finding would be a reduction in the number of candidate genes to be sequenced in patients with decreased complex I activity. In this study, we tested two different methods to stably silence complex I subunits in cultured cells. We first found that lentivirus-mediated shRNA expression frequently resulted in the unpredicted extinction of additional genes besides the targeted ones. This can be ascribed to uncontrolled insertions of genetic material in the genome of the host cell. This approach thus appeared inappropriate for studying the unknown functions of a gene. Next, we found it possible to specifically silence a CI subunit gene by direct insertion of a miRNA targeting CI subunits into a Flp site (HEK293 Flp-In cells). Using this strategy we unambiguously demonstrated that the NDUFB6 subunit is required for complex I activity, and defined conditions suitable for undertaking a systematic and stable extinction of the different supernumerary subunits in human cells.

  10. Only one ATP-binding DnaX subunit is required for initiation complex formation by the Escherichia coli DNA polymerase III holoenzyme.

    Science.gov (United States)

    Wieczorek, Anna; Downey, Christopher D; Dallmann, H Garry; McHenry, Charles S

    2010-09-17

    The DnaX complex (DnaX(3)δδ'χψ) within the Escherichia coli DNA polymerase III holoenzyme serves to load the dimeric sliding clamp processivity factor, β(2), onto DNA. The complex contains three DnaX subunits, which occur in two forms: τ and the shorter γ, produced by translational frameshifting. Ten forms of E. coli DnaX complex containing all possible combinations of wild-type or a Walker A motif K51E variant τ or γ have been reconstituted and rigorously purified. DnaX complexes containing three DnaX K51E subunits do not bind ATP. Comparison of their ability to support formation of initiation complexes, as measured by processive replication by the DNA polymerase III holoenzyme, indicates a minimal requirement for one ATP-binding DnaX subunit. DnaX complexes containing two mutant DnaX subunits support DNA synthesis at about two-thirds the level of their wild-type counterparts. β(2) binding (determined functionally) is diminished 12-30-fold for DnaX complexes containing two K51E subunits, suggesting that multiple ATPs must be bound to place the DnaX complex into a conformation with maximal affinity for β(2). DNA synthesis activity can be restored by increased concentrations of β(2). In contrast, severe defects in ATP hydrolysis are observed upon introduction of a single K51E DnaX subunit. Thus, ATP binding, hydrolysis, and the ability to form initiation complexes are not tightly coupled. These results suggest that although ATP hydrolysis likely enhances β(2) loading, it is not absolutely required in a mechanistic sense for formation of functional initiation complexes.

  11. Computational study of formamide-water complexes using the SAPT and AIM methods

    International Nuclear Information System (INIS)

    Parreira, Renato L.T.; Valdes, Haydee; Galembeck, Sergio E.

    2006-01-01

    In this work, the complexes formed between formamide and water were studied by means of the SAPT and AIM methods. Complexation leads to significant alterations in the geometries and electronic structure of formamide. Intermolecular interactions in the complexes are intense, especially in the cases where the solvent interacts with the carbonyl and amide groups simultaneously. In the transition states, the interaction between the water molecule and the lone pair on the amide nitrogen is also important. In all the complexes studied herein, the electrostatic interactions between formamide and water are the main attractive force, and their contribution may be five times as large as the corresponding contribution from dispersion, and twice as large as the contribution from induction. However, an increase in the resonance of planar formamide with the successive addition of water molecules may suggest that the hydrogen bonds taking place between formamide and water have some covalent character.

  12. Simulation As a Method To Support Complex Organizational Transformations in Healthcare

    NARCIS (Netherlands)

    Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos

    2010-01-01

    In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that…

  13. Identification of unknown protein complex members by radiolocalization and analysis of low-abundance complexes resolved using native polyacrylamide gel electrophoresis.

    Science.gov (United States)

    Bose, Mahuya; Adams, Brian P; Whittal, Randy M; Bose, Himangshu S

    2008-02-01

    Identification of unknown binding partners of a protein of interest can be a difficult process. Current strategies to determine protein binding partners result in a high number of false positives, requiring the use of several different methods to confirm the accuracy of the apparent association. We have developed and utilized a method that is reliable and easily substantiated. Complexes are isolated from cell extract after exposure to the radiolabeled protein of interest, followed by resolution on a native polyacrylamide gel. Native conformations are preserved, allowing the complex members to maintain their associations. By radiolabeling the protein of interest, the complex can be easily identified at detection levels below the threshold of Serva Blue, Coomassie, and silver stains. The visualized radioactive band is analyzed by MS to identify binding partners, which can be subsequently verified by antibody shift and immunoprecipitation of the complex. Using this method we have successfully identified binding partners of two proteins that reside in different locations of a cellular organelle.

  14. Requirements in engineering projects

    CERN Document Server

    Fernandes, João M

    2016-01-01

    This book focuses on various topics related to engineering and management of requirements, in particular elicitation, negotiation, prioritisation, and documentation (whether with natural languages or with graphical models). The book provides methods and techniques that help to characterise, in a systematic manner, the requirements of the intended engineering system.  It was written with the goal of being adopted as the main text for courses on requirements engineering, or as a strong reference to the topics of requirements in courses with a broader scope. It can also be used in vocational courses, for professionals interested in the software and information systems domain.   Readers who have finished this book will be able to: - establish and plan a requirements engineering process within the development of complex engineering systems; - define and identify the types of relevant requirements in engineering projects; - choose and apply the most appropriate techniques to elicit the requirements of a giv...

  15. A Systematic Optimization Design Method for Complex Mechatronic Products Design and Development

    Directory of Open Access Journals (Sweden)

    Jie Jiang

    2018-01-01

    Full Text Available Designing a complex mechatronic product involves multiple design variables, objectives, constraints, and evaluation criteria, as well as their nonlinearly coupled relationships. The design space can be very large, consisting of many functional design parameters, structural design parameters, and behavioral design (or running performance) parameters. Given a large design space and implicit relations among these parameters, how to design a product optimally in an optimization design process is a challenging research problem. In this paper, we propose a systematic optimization design method based on design space reduction and surrogate modelling techniques. This method first identifies key design parameters from a very large design space in order to reduce the design space; second, it uses the identified key design parameters to establish a system surrogate model, based on data-driven modelling principles, for optimization design; and third, it utilizes multiobjective optimization techniques to achieve an optimal design of the product in the reduced design space. The method has been tested on a high-speed train design. In comparison with other methods, the research results show that this method is practical and useful for optimally designing complex mechatronic products.
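The three steps (reduce the design space, fit a data-driven surrogate, optimize the surrogate) can be sketched on a toy two-parameter problem. The "simulator", sample size, and quadratic surrogate below are illustrative assumptions, not the paper's high-speed-train model:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_sim(x1, x2):
    # Stand-in for a costly performance analysis of two key design parameters.
    return (x1 - 0.3) ** 2 + 2.0 * (x2 - 0.7) ** 2

# Step 1/2: sample the expensive model on the (already reduced) design space
# and fit a cheap quadratic surrogate by least squares.
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = expensive_sim(X[:, 0], X[:, 1])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 3: optimize the cheap surrogate (here by dense grid search).
g1, g2 = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     g1.ravel() ** 2, g2.ravel() ** 2, g1.ravel() * g2.ravel()])
best = np.argmin(G @ coef)
print(g1.ravel()[best], g2.ravel()[best])   # ≈ (0.3, 0.7), the true optimum
```

The surrogate is queried thousands of times at the cost of thirty simulator runs, which is the economy the method trades on; a real application would use a richer surrogate and a multiobjective optimizer in step 3.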

  16. A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.

    Science.gov (United States)

    Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G

    2017-08-01

    Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.
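The shape-from-shading half of such a pipeline can be caricatured in one dimension. This is a generic cosine-shading sketch under idealized assumptions (intensity falling off as the cosine of the surface tilt), not the paper's sphere-calibrated two-equation scheme; note also that a single image leaves the sign of the slope ambiguous, which is one reason stereo pairs are combined with shading:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
true_z = 0.2 * x                       # a plane tilted with slope 0.2
tilt = np.arctan(0.2)                  # constant angle between normal and beam
I = np.full_like(x, np.cos(tilt))      # idealized observed intensity

# Invert the assumed shading law: slope = tan(arccos(I)), then integrate
# the slope (trapezoid rule) to recover the elevation profile.
slope = np.tan(np.arccos(np.clip(I, -1.0, 1.0)))
z = np.concatenate([[0.0],
                    np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))])
print(float(np.max(np.abs(z - true_z))))  # ~0 for this noiseless, ideal case
```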

  17. The development of quantitative determination method of organic acids in complex poly herbal extraction

    Directory of Open Access Journals (Sweden)

    I. L. Dyachok

    2016-08-01

    Full Text Available Aim. To develop a sensitive, economical and rapid method for the quantitative determination of organic acids, calculated as isovaleric acid, in a complex poly-herbal extract with the use of digital technologies. Materials and methods. A model complex poly-herbal extract of sedative action was chosen as the research object. The extract is composed of the following medicinal plants: Valeriana officinalis L., Crataégus, Melissa officinalis L., Hypericum, Mentha piperita L., Húmulus lúpulus, Viburnum. Based on the chemical composition of the plant components, the main pharmacologically active compounds in the extract are: polyphenolic substances (flavonoids), contained in Crataégus, Viburnum, Hypericum, Mentha piperita L. and Húmulus lúpulus; organic acids, including isovaleric acid, contained in Valeriana officinalis L., Mentha piperita L., Melissa officinalis L. and Viburnum; and amino acids, contained in Valeriana officinalis L. For the determination of organic acids at low concentrations we applied an instrumental method of analysis, namely conductometric titration, based on the dependence of the conductivity of an aqueous solution of the extract on its organic acid content. Results. The obtained analytical dependences, which describe the tangent lines to the conductometric curve before and after the equivalence point, allow the volume of titrant consumed to be determined and the quantitative determination of organic acids to be carried out in digital mode. Conclusion. The proposed method enables the equivalence point to be located and the organic acids to be quantified, calculated as isovaleric acid, with the use of digital technologies, which allows the method as a whole to be computerized.
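The tangent-line procedure lends itself directly to computation: fit a straight line to each branch of the conductometric curve and intersect them to obtain the equivalence volume. The curve below is synthetic, invented purely to illustrate the geometry:

```python
import numpy as np

v = np.linspace(0.0, 10.0, 21)                  # titrant volume, mL
veq = 6.0                                       # "true" equivalence point
# Conductivity falls while acid is neutralized, rises once titrant is in excess.
cond = np.where(v < veq, 50.0 - 4.0 * v, 26.0 + 3.0 * (v - veq))

b1 = np.polyfit(v[v < veq], cond[v < veq], 1)   # descending branch (tangent 1)
b2 = np.polyfit(v[v >= veq], cond[v >= veq], 1) # ascending branch (tangent 2)
v_eq = (b2[1] - b1[1]) / (b1[0] - b2[0])        # intersection of the two lines
print(round(v_eq, 2))                           # 6.0
```

On real data the branch fits would exclude points near the bend, where curvature makes neither tangent valid.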

  18. A Novel Method for Assessing Task Complexity in Outpatient Clinical-Performance Measures.

    Science.gov (United States)

    Hysong, Sylvia J; Amspoker, Amber B; Petersen, Laura A

    2016-04-01

    Clinical-performance measurement has helped improve the quality of health care; yet success in attaining high levels of quality across multiple domains simultaneously still varies considerably. Although many sources of variability in care quality have been studied, the difficulty of the clinical work itself has received little attention. We present a task-based methodology for evaluating the difficulty of clinical-performance measures (CPMs) by assessing the complexity of their component requisite tasks. Using Functional Job Analysis (FJA), subject-matter experts (SMEs) generated task lists for 17 CPMs; task lists were rated on ten dimensions of complexity, and then aggregated into difficulty composites. Eleven outpatient work SMEs; 133 VA Medical Centers nationwide. Clinical Performance: 17 outpatient CPMs (2000-2008) at 133 VA Medical Centers nationwide. Measure Difficulty: for each CPM, the number of component requisite tasks and the average rating across ten FJA complexity scales for the set of tasks comprising the measure. Measures varied considerably in the number of component tasks (M = 10.56, SD = 6.25, min = 5, max = 25). Measures of chronic care following acute myocardial infarction exhibited significantly higher measure difficulty ratings compared to diabetes or screening measures, but not to immunization measures ([Formula: see text] = 0.45, -0.04, -0.05, and -0.06 respectively; F (3, 186) = 3.57, p = 0.015). Measure difficulty ratings were not significantly correlated with the number of component tasks (r = -0.30, p = 0.23). Evaluating the difficulty of achieving recommended CPM performance levels requires more than simply counting the tasks involved; using FJA to assess the complexity of CPMs' component tasks presents an alternate means of assessing the difficulty of primary-care CPMs and accounting for performance variation among measures and performers. This in turn could be used in designing…
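The two summaries contrasted above (task count versus mean complexity rating) can be illustrated with invented numbers; the measures and ratings below are hypothetical, not the study's FJA data. A measure with more tasks can still be the easier one:

```python
# Each task is rated on ten complexity scales (values 1-7, say).
measure_a = [[3, 4, 2, 5, 3, 4, 3, 2, 4, 3]] * 5   # five simpler tasks
measure_b = [[6, 7, 5, 6, 7, 6, 5, 7, 6, 6]] * 3   # three harder tasks

def difficulty(tasks):
    """Mean rating over all tasks and all ten scales."""
    return sum(sum(t) for t in tasks) / (len(tasks) * len(tasks[0]))

print(len(measure_a), round(difficulty(measure_a), 2))  # 5 tasks, mean 3.3
print(len(measure_b), round(difficulty(measure_b), 2))  # 3 tasks, mean 6.1
```

This is the study's central point in miniature: task count and rated difficulty need not correlate, so counting tasks alone mismeasures how hard a CPM is to satisfy.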

  19. Quality assurance requirements and methods for high level waste package acceptability

    International Nuclear Information System (INIS)

    1992-12-01

    This document serves as guidance for identifying the items necessary to control the conditioning process so that waste packages are produced in compliance with the waste acceptance requirements. It is also provided to promote the exchange of information on quality assurance requirements and on the application of quality assurance methods associated with the production of high level waste packages, to ensure that these waste packages comply with the requirements for transportation, interim storage and waste disposal in deep geological formations. The document is intended to assist the operators of conditioning facilities and repositories as well as the national authorities and regulatory bodies involved in the licensing of the conditioning of high level radioactive wastes or in the development of deep underground disposal systems. It recommends the quality assurance requirements and methods necessary to generate data for the parameters identified in IAEA-TECDOC-560 on qualitative acceptance criteria, and indicates where and when the control methods can be applied, e.g. in the operation or commissioning of a process or in the development of a waste package design. Emphasis is on the control of the process, and little reliance is placed on non-destructive or destructive testing. Qualitative criteria relevant to the disposal of high level waste are repository dependent and are not addressed here. 37 refs, 3 figs, 2 tabs

  20. Hybrid RANS/LES applied to complex terrain

    DEFF Research Database (Denmark)

    Bechmann, Andreas; Sørensen, Niels N.

    2011-01-01

    Large Eddy Simulation (LES) of the wind in complex terrain is limited by computational cost. The number of computational grid points required to resolve the near-ground turbulent structures (eddies) is very high. The traditional solution to the problem has been to apply a wall function that accounts for the whole near-wall region. Recently, a hybrid method was proposed in which the eddies close to the ground are modelled in a Reynolds-averaged sense (RANS) and the eddies above this region are simulated using LES. The advantage of the approach is the ability to use shallow cells of high aspect ratio in the RANS layer and thereby resolve the mean near-wall velocity profile. The method is applicable to complex terrain and the benefits of traditional LES are kept intact. Using the hybrid method, simulations of the wind over a natural complex terrain near Wellington in New Zealand…

  1. Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks

    Science.gov (United States)

    Renkewitz, Frank; Jahn, Georg

    2012-01-01

    We validate an eye-tracking method applicable for studying memory processes in complex cognitive tasks. The method is tested with a task on probabilistic inferences from memory. It provides valuable data on the time course of processing, thus clarifying previous results on heuristic probabilistic inference. Participants learned cue values of…

  2. Optimation and Determination of Fe-Oxinate Complex by Using High Performance Liquid Chromatography

    Science.gov (United States)

    Oktavia, B.; Nasra, E.; Sary, R. C.

    2018-04-01

    The demand for iron drives the industrial processes that use iron as a raw material, so control of industrial iron waste is very important. One method of iron analysis is the indirect analysis of iron(III) ions by complexing them with 8-hydroxyquinoline (oxine). In this research, qualitative and quantitative tests of iron(III) ions in the form of a complex with oxine were performed. The analysis was carried out using HPLC at a wavelength of 470 nm with an ODS C18 column. Three methods of analysis were compared: (1) the Fe-oxinate complex was prepared in ethanol, so no separation was needed; (2) the Fe-oxinate complex was prepared in chloroform, so a solvent extraction was required before the complex was injected onto the column; and (3) the complex was formed in the column, with the eluent containing oxine and the metal ions then injected. The resulting chromatograms show that the third method provides a better chromatogram for iron analysis.
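Quantitation at 470 nm would typically rest on an external calibration curve relating peak area to concentration. The sketch below uses invented standards and peak areas, not the paper's data, to show the arithmetic:

```python
import numpy as np

# Hypothetical Fe(III) standards (mg/L) and their Fe-oxinate peak areas.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
area = np.array([12.1, 24.3, 48.0, 97.1, 193.8])

# Fit the calibration line area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

# Read an unknown sample's concentration off the fitted line.
unknown_area = 60.0
unknown_conc = (unknown_area - intercept) / slope
print(round(unknown_conc, 1))   # ≈ 2.5 mg/L
```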

  3. Requirement of the Mre11 complex and exonuclease 1 for activation of the Mec1 signaling pathway.

    Science.gov (United States)

    Nakada, Daisuke; Hirano, Yukinori; Sugimoto, Katsunori

    2004-11-01

    The large protein kinases, ataxia-telangiectasia mutated (ATM) and ATM-Rad3-related (ATR), orchestrate DNA damage checkpoint pathways. In budding yeast, ATM and ATR homologs are encoded by TEL1 and MEC1, respectively. The Mre11 complex consists of two highly related proteins, Mre11 and Rad50, and a third protein, Xrs2 in budding yeast or Nbs1 in mammals. The Mre11 complex controls the ATM/Tel1 signaling pathway in response to double-strand break (DSB) induction. We show here that the Mre11 complex functions together with exonuclease 1 (Exo1) in activation of the Mec1 signaling pathway after DNA damage and replication block. Mec1 controls the checkpoint responses following UV irradiation as well as DSB induction. Correspondingly, the Mre11 complex and Exo1 play an overlapping role in activation of DSB- and UV-induced checkpoints. The Mre11 complex and Exo1 collaborate in producing long single-stranded DNA (ssDNA) tails at DSB ends and promote Mec1 association with the DSBs. The Ddc1-Mec3-Rad17 complex associates with sites of DNA damage and modulates the Mec1 signaling pathway. However, Ddc1 association with DSBs does not require the function of the Mre11 complex and Exo1. Mec1 controls checkpoint responses to stalled DNA replication as well. Accordingly, the Mre11 complex and Exo1 contribute to activation of the replication checkpoint pathway. Our results provide a model in which the Mre11 complex and Exo1 cooperate in generating long ssDNA tracts and thereby facilitate Mec1 association with sites of DNA damage or replication block.

  4. Detection of circulating immune complexes in breast cancer and melanoma by three different methods

    Energy Technology Data Exchange (ETDEWEB)

    Krapf, F; Renger, D; Fricke, M; Kemper, A; Schedel, I; Deicher, H

    1982-08-01

    By the simultaneous application of three methods, a C1q-binding assay (C1q-BA), a two-antibody conglutinin-binding ELISA, and polyethylene-glycol 6000 precipitation with subsequent quantitative determination of immunoglobulins and complement factors in the redissolved precipitates (PPLaNT), circulating immune complexes could be demonstrated in the sera of 94% of patients with malignant melanoma and of 75% of breast cancer patients. The specific detection rates of the individual methods varied between 23% (C1q-BA) and 46% (PPLaNT), presumably due to the presence of qualitatively different immune complexes in the investigated sera. Accordingly, the simultaneous use of the aforementioned assays resulted in increased diagnostic sensitivity and a doubling of the predictive value. Nevertheless, because of the relatively low incidence of malignant diseases in the total population, and because circulating immune complexes occur with considerable frequency in other, non-malignant diseases, tests for circulating immune complexes must be regarded as less useful parameters in the early diagnosis of cancer.

  5. EPISTEMOLOGY AND INVESTIGATION WITHIN THE CURRENT ORGANIZATIONAL COMPLEX SYSTEMS

    Directory of Open Access Journals (Sweden)

    Karla Torres

    2015-11-01

    Full Text Available The way of approaching reality and generating knowledge is now different from that applied in the past; this is why the aim of this paper was to analyze the changing elements in organizational structures framed in complex systems, addressing the study from the interpretive perspective with the use of the hermeneutical method in a theoretical, documentary context. It is concluded that research methods require adaptation to this new reality of knowledge production. Complexity plays an important role in organizational systems and in the environment in general, raising the need to revise the way we think about and face this new complex reality, full of uncertainty and organizational chaos.

  6. Comparison of Ho and Y complexation data obtained by electromigration methods, potentiometry and spectrophotometry

    International Nuclear Information System (INIS)

    Vinsova, H.; Koudelkova, M.; Ernestova, M.; Jedinakova-Krizova, V.

    2003-01-01

    Many holmium and yttrium complex compounds, of both organic and inorganic origin, have recently been studied from the point of view of their radiopharmaceutical behavior. Complexes with Ho-166 and Y-90 can either be used directly as pharmaceutical preparations or be applied in conjugate form with a selected monoclonal antibody. In the latter case, appropriate bifunctional chelating agents are necessary for the indirect binding of the monoclonal antibody and the selected radionuclide. Our present study has focused on the characterization of the radionuclide (metal)-ligand interaction using various analytical methods. Electromigration methods (capillary electrophoresis, capillary isotachophoresis), potentiometric titration and spectrophotometry have been tested from the point of view of their potential to determine conditional stability constants of holmium and yttrium complexes. The principle of the isotachophoretic determination of stability constants is based on the linear relation between the logarithm of the stability constant and the reduction of the zone of the complex. For the calculation of thermodynamic constants by potentiometry it was first necessary to determine the protonation constants of the acid. These were calculated using the computer program LETAGROP Etitr from data obtained by potentiometric acid-base titration. Subsequently, the titration curves of holmium and yttrium with the studied ligands and the protonation constants of the corresponding acid were used for the calculation of the metal-ligand stability constants. Spectrophotometric determination of the stability constants of selected systems was based on the titration of holmium and yttrium nitrate solutions with Arsenazo III, followed by the titration of the metal-Arsenazo III complex with the selected ligand. The data obtained were evaluated using the computation program OPIUM. The results obtained by all the analytical methods tested in this study have been compared. It was found that the direct potentiometric titration technique could not be…

  7. A Protein Complex Required for Polymerase V Transcripts and RNA-Directed DNA Methylation in Arabidopsis

    KAUST Repository

    Law, Julie A.; Ausín, Israel; Johnson, Lianna M.; Vashisht, Ajay A.; Zhu, Jian-Kang; Wohlschlegel, James A.; Jacobsen, Steven E.

    2010-01-01

    DNA methylation is an epigenetic modification associated with gene silencing. In Arabidopsis, DNA methylation is established by DOMAINS REARRANGED METHYLTRANSFERASE 2 (DRM2), which is targeted by small interfering RNAs through a pathway termed RNA-directed DNA methylation (RdDM) [1, 2]. Recently, RdDM was shown to require intergenic noncoding (IGN) transcripts that are dependent on the Pol V polymerase. These transcripts are proposed to function as scaffolds for the recruitment of downstream RdDM proteins, including DRM2, to loci that produce both siRNAs and IGN transcripts [3]. However, the mechanism(s) through which Pol V is targeted to specific genomic loci remains largely unknown. Through affinity purification of two known RdDM components, DEFECTIVE IN RNA-DIRECTED DNA METHYLATION 1 (DRD1) [4] and DEFECTIVE IN MERISTEM SILENCING 3 (DMS3) [5, 6], we found that they copurify with each other and with a novel protein, RNA-DIRECTED DNA METHYLATION 1 (RDM1), forming a complex we term DDR. We also found that DRD1 copurified with Pol V subunits and that RDM1, like DRD1 [3] and DMS3 [7], is required for the production of Pol V-dependent transcripts. These results suggest that the DDR complex acts in RdDM at a step upstream of the recruitment or activation of Pol V. © 2010 Elsevier Ltd. All rights reserved.

  8. A Protein Complex Required for Polymerase V Transcripts and RNA-Directed DNA Methylation in Arabidopsis

    KAUST Repository

    Law, Julie A.

    2010-05-01

    DNA methylation is an epigenetic modification associated with gene silencing. In Arabidopsis, DNA methylation is established by DOMAINS REARRANGED METHYLTRANSFERASE 2 (DRM2), which is targeted by small interfering RNAs through a pathway termed RNA-directed DNA methylation (RdDM) [1, 2]. Recently, RdDM was shown to require intergenic noncoding (IGN) transcripts that are dependent on the Pol V polymerase. These transcripts are proposed to function as scaffolds for the recruitment of downstream RdDM proteins, including DRM2, to loci that produce both siRNAs and IGN transcripts [3]. However, the mechanism(s) through which Pol V is targeted to specific genomic loci remains largely unknown. Through affinity purification of two known RdDM components, DEFECTIVE IN RNA-DIRECTED DNA METHYLATION 1 (DRD1) [4] and DEFECTIVE IN MERISTEM SILENCING 3 (DMS3) [5, 6], we found that they copurify with each other and with a novel protein, RNA-DIRECTED DNA METHYLATION 1 (RDM1), forming a complex we term DDR. We also found that DRD1 copurified with Pol V subunits and that RDM1, like DRD1 [3] and DMS3 [7], is required for the production of Pol V-dependent transcripts. These results suggest that the DDR complex acts in RdDM at a step upstream of the recruitment or activation of Pol V. © 2010 Elsevier Ltd. All rights reserved.

  9. Using cognitive modeling for requirements engineering in anesthesiology

    NARCIS (Netherlands)

    Pott, C; le Feber, J

    2005-01-01

    Cognitive modeling is a complexity-reducing method to describe significant cognitive processes under a specified research focus. Here, a cognitive process model for decision making in anesthesiology is presented and applied in requirements engineering. Three decision making situations of…

  10. Decomposition of overlapping protein complexes: A graph theoretical method for analyzing static and dynamic protein associations

    Directory of Open Access Journals (Sweden)

    Guimarães Katia S

    2006-04-01

    Full Text Available Abstract Background Most cellular processes are carried out by multi-protein complexes, groups of proteins that bind together to perform a specific task. Some proteins form stable complexes, while other proteins form transient associations and are part of several complexes at different stages of a cellular process. A better understanding of this higher-order organization of proteins into overlapping complexes is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Results We propose a new method for identifying and representing overlapping protein complexes (or larger units called functional groups) within a protein interaction network. We develop a graph-theoretical framework that enables automatic construction of such a representation. We illustrate the effectiveness of our method by applying it to the TNFα/NF-κB and pheromone signaling pathways. Conclusion The proposed representation helps in understanding the transitions between functional groups and allows for tracking a protein's path through a cascade of functional groups. Therefore, depending on the nature of the network, our representation is capable of elucidating temporal relations between functional groups. Our results show that the proposed method opens a new avenue for the analysis of protein interaction networks.

  11. PAFit: A Statistical Method for Measuring Preferential Attachment in Temporal Complex Networks.

    Directory of Open Access Journals (Sweden)

    Thong Pham

    Full Text Available Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum likelihood based estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because we based it on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. What is more, we found that the application of PAFit to a publicly available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman's original method, which had evidently gone unnoticed since its publication over a decade ago.
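    The nonparametric idea behind PAFit (estimate the attachment kernel A_k directly from attachment counts and degree exposures, with no assumed functional form) can be illustrated with a toy simulation. This is a hedged sketch, not the R package's actual estimator; the growth model and normalization here are illustrative assumptions:

    ```python
    import random
    from collections import defaultdict

    def estimate_attachment_kernel(n_nodes, seed=0):
        """Grow a network by linear preferential attachment, then re-estimate
        the attachment kernel A_k nonparametrically, in the spirit of PAFit:
        A_k ~ (attachments received by degree-k nodes) / (exposure of degree k)."""
        rng = random.Random(seed)
        degree = {0: 1, 1: 1}            # seed network: a single edge 0-1
        targets = [0, 1]                 # degree-weighted sampling pool
        attachments = defaultdict(int)   # edges received, binned by target degree
        exposure = defaultdict(int)      # node-steps spent at each degree
        for new in range(2, n_nodes):
            for k in degree.values():    # every existing node "offers" its degree class
                exposure[k] += 1
            chosen = rng.choice(targets) # linear preferential attachment
            attachments[degree[chosen]] += 1
            targets += [chosen, new]
            degree[chosen] += 1
            degree[new] = 1
        kernel = {k: attachments[k] / exposure[k] for k in attachments}
        norm = kernel[1]                 # normalize so A_1 = 1
        return {k: v / norm for k, v in kernel.items()}
    ```

    Under linear preferential attachment the normalized estimate should grow roughly linearly in k; PAFit additionally provides regularization and confidence intervals, which this sketch omits.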

  12. A Comparison of Multidimensional Item Selection Methods in Simple and Complex Test Designs

    Directory of Open Access Journals (Sweden)

    Eren Halil ÖZBERK

    2017-03-01

    Full Text Available In contrast with previous studies, this study employed various test designs (simple and complex) which allow the evaluation of overall ability score estimations across multiple real test conditions. In this study, four factors were manipulated, namely the test design, the number of items per dimension, the correlation between dimensions, and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated by using the M3PL compensatory multidimensional IRT model with specified correlations. MCAT composite ability score accuracy was evaluated using absolute bias (ABSBIAS), correlation, and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension, and the correlation between dimensions had a significant effect on the item selection methods for overall score estimation. For the simple-structure test design, the V1 item selection method had the lowest absolute bias estimates for both long and short tests while estimating overall scores. As the model gets more complex, the KL item selection method performed better than the other two item selection methods.
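    The accuracy criteria named above can be computed directly from paired true and estimated scores. A minimal sketch follows; since the study's exact ABSBIAS formula is not given here, the absolute value of the mean signed bias is assumed:

    ```python
    import math

    def absbias(true, est):
        """Absolute value of the mean signed bias between true and
        estimated ability scores (assumed ABSBIAS definition)."""
        return abs(sum(e - t for t, e in zip(true, est)) / len(true))

    def rmse(true, est):
        """Root mean square error between true and estimated scores."""
        return math.sqrt(sum((e - t) ** 2 for t, e in zip(true, est)) / len(true))

    def pearson(x, y):
        """Pearson correlation between two score vectors."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sx = math.sqrt(sum((v - mx) ** 2 for v in x))
        sy = math.sqrt(sum((v - my) ** 2 for v in y))
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    ```

    For example, a uniform +0.5 shift of the estimates yields ABSBIAS = RMSE = 0.5 with correlation 1, which is how bias and noise are separated when comparing item selection methods.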

  13. Accurate and simple measurement method of complex decay schemes radionuclide activity

    International Nuclear Information System (INIS)

    Legrand, J.; Clement, C.; Bac, C.

    1975-01-01

    A simple method for the measurement of the activity is described. It consists of using a well-type sodium iodide crystal whose efficiency with monoenergetic photon rays has been computed or measured. For each radionuclide with a complex decay scheme a total efficiency is computed; it is shown that the efficiency is very high, near 100%. The associated uncertainty is low, in spite of the important uncertainties on the different parameters used in the computation. The method has been applied to the measurement of the ¹⁵²Eu primary reference [fr

  14. Neural Interfaces for Intracortical Recording: Requirements, Fabrication Methods, and Characteristics.

    Science.gov (United States)

    Szostak, Katarzyna M; Grand, Laszlo; Constandinou, Timothy G

    2017-01-01

    Implantable neural interfaces for central nervous system research have been designed with wire, polymer, or micromachining technologies over the past 70 years. Research on biocompatible materials, ideal probe shapes, and insertion methods has resulted in building more and more capable neural interfaces. Although the trend is promising, the long-term reliability of such devices has not yet met the required criteria for chronic human application. The performance of neural interfaces in chronic settings often degrades due to foreign body response to the implant that is initiated by the surgical procedure, and related to the probe structure, and material properties used in fabricating the neural interface. In this review, we identify the key requirements for neural interfaces for intracortical recording; describe the three different types of probes (microwire, micromachined, and polymer-based), their materials, and fabrication methods; and discuss their characteristics and related challenges.

  15. Estimation methods of eco-environmental water requirements: Case study

    Institute of Scientific and Technical Information of China (English)

    YANG Zhifeng; CUI Baoshan; LIU Jingling

    2005-01-01

    Supplying water to the ecological environment with certain quantity and quality is significant for the protection of diversity and the realization of sustainable development. The conception and connotation of eco-environmental water requirements, including the definition of the conception and the composition and characteristics of eco-environmental water requirements, are evaluated in this paper. The classification and estimation methods of eco-environmental water requirements are then proposed. On the basis of the study on the Huang-Huai-Hai Area, the present water use and the minimum and suitable water requirements are estimated, and the corresponding water shortage is also calculated. According to the interrelated programs, the eco-environmental water requirements in the coming years (2010, 2030, 2050) are estimated. The result indicates that the minimum and suitable eco-environmental water requirements fluctuate with the differences of function setting and the referential standard of water resources, as does the water shortage. Moreover, the study indicates that the minimum eco-environmental water requirement of the study area ranges from 2.84×10¹⁰ m³ to 1.02×10¹¹ m³, the suitable water requirement ranges from 6.45×10¹⁰ m³ to 1.78×10¹¹ m³, the water shortage ranges from 9.1×10⁹ m³ to 2.16×10¹⁰ m³ under the minimum water requirement, and from 3.07×10¹⁰ m³ to 7.53×10¹⁰ m³ under the suitable water requirement. According to the different values of the water shortage, the water priority can be allocated. The ranges of the eco-environmental water requirements in the three coming years (2010, 2030, 2050) are 4.49×10¹⁰-1.73×10¹¹ m³, 5.99×10¹⁰-2.09×10¹¹ m³, and 7.44×10¹⁰-2.52×10¹¹ m³, respectively.

  16. Impact of a Modified Jigsaw Method for Learning an Unfamiliar, Complex Topic

    Directory of Open Access Journals (Sweden)

    Denise Kolanczyk

    2017-09-01

    Full Text Available Objective: The aim of this study was to use the jigsaw method with an unfamiliar, complex topic and to evaluate the effectiveness of the jigsaw teaching method on student learning of assigned material ("jigsaw expert") versus non-assigned material ("jigsaw learner"). Innovation: The innovation was implemented in an advanced cardiology elective. Forty students were assigned a pre-reading and one of four valvular heart disorders, a topic not previously taught in the curriculum. A pre-test and post-test evaluated overall student learning. Student performance on the pre/post tests as the "jigsaw expert" and "jigsaw learner" was also compared. Critical Analysis: Overall, the post-test mean score of 85.75% was significantly higher than the pre-test score of 56.75% (p<0.05). There was significant improvement in scores regardless of whether the material was assigned ("jigsaw experts": pre=58.8% and post=82.5%; p<0.05) or not assigned ("jigsaw learners": pre=56.25% and post=86.56%; p<0.05). Next Steps: The use of the jigsaw method to teach unfamiliar, complex content helps students to become both teachers and active listeners, which are essential to the skills and professionalism of a health care provider. Further studies are needed to evaluate the use of the jigsaw method to teach unfamiliar, complex content for long-term retention and to further examine the effects of expert vs. non-expert roles. Conflict of Interest: We declare no conflicts of interest or financial interests that the authors or members of their immediate families have in any product or service discussed in the manuscript, including grants (pending or received), employment, gifts, stock holdings or options, honoraria, consultancies, expert testimony, patents and royalties. Type: Note

  17. Managing today's complex healthcare business enterprise: reflections on distinctive requirements of healthcare management education.

    Science.gov (United States)

    Welton, William E

    2004-01-01

    In early 2001, the community of educational programs offering master's-level education in healthcare management began an odyssey to modernize its approach to the organization and delivery of healthcare management education. The community recognized that cumulative long-term changes within healthcare management practice required a careful examination of healthcare management context and manpower requirements. This article suggests an evidence-based rationale for defining the distinctive elements of healthcare management, thus suggesting a basis for review and transformation of master's-level healthcare management curricula. It also suggests ways to modernize these curricula in a manner that recognizes the distinctiveness of the healthcare business enterprise as well as the changing management roles and careers within these complex organizations and systems. Through such efforts, the healthcare management master's-level education community would be better prepared to meet current and future challenges, to increase its relevance to the management practice community, and to allocate scarce faculty and program resources more effectively.

  18. Principal Physicochemical Methods Used to Characterize Dendrimer Molecule Complexes Used as Genetic Therapy Agents, Nanovaccines or Drug Carriers.

    Science.gov (United States)

    Alberto, Rodríguez Fonseca Rolando; Joao, Rodrigues; de Los Angeles, Muñoz-Fernández María; Alberto, Martínez Muñoz; Manuel Jonathan, Fragoso Vázquez; José, Correa Basurto

    2017-08-30

    Nanomedicine is the application of nanotechnology to medicine. This field is related to the study of nanodevices and nanomaterials applied to various medical uses, such as in improving the pharmacological properties of different molecules. Dendrimers are synthetic nanoparticles whose physicochemical properties vary according to their chemical structure. These molecules have been extensively investigated as drug nanocarriers to improve drug solubility and as sustained-release systems. New therapies such as gene therapy and the development of nanovaccines can be improved by the use of dendrimers. The biophysical and physicochemical characterization of nucleic acid/peptide-dendrimer complexes is crucial to identify their functional properties prior to biological evaluation. In that sense, it is necessary to first identify whether the peptide-dendrimer or nucleic acid-dendrimer complexes can be formed and whether the complex can dissociate under the appropriate conditions at the target cells. In addition, biophysical and physicochemical characterization is required to determine how long the complexes remain stable, what proportion of peptide or nucleic acid is required to form the complex or saturate the dendrimer, and the size of the complex formed. In this review, we present the latest information on characterization systems for dendrimer-nucleic acid, dendrimer-peptide and dendrimer-drug complexes with several biotechnological and pharmacological applications. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.

  19. Computational Experiment Study on Selection Mechanism of Project Delivery Method Based on Complex Factors

    Directory of Open Access Journals (Sweden)

    Xiang Ding

    2014-01-01

    Full Text Available Project delivery planning is a key stage used by the project owner (or project investor) for organizing the design, construction, and other operations of a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on the PDM. Experimental results show that project owners usually choose the Design-Build method when project complexity lies within a certain range. Besides, this paper points out that the Design-Build method will be the preferred choice when potential contractors are developing quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.

  20. Protein complex detection in PPI networks based on data integration and supervised learning method.

    Science.gov (United States)

    Yu, Feng; Yang, Zhi; Hu, Xiao; Sun, Yuan; Lin, Hong; Wang, Jian

    2015-01-01

    Revealing protein complexes is important for understanding principles of cellular organization and function. High-throughput experimental techniques have produced a large amount of protein interaction data, which makes it possible to predict protein complexes from protein-protein interaction (PPI) networks. However, the small amount of known physical interactions may limit protein complex detection. New PPI networks are constructed by integrating PPI datasets with the large and readily available PPI data from the biomedical literature, and the less reliable PPIs between two proteins are then filtered out based on the semantic similarity and topological similarity of the two proteins. Finally, supervised learning protein complex detection (SLPC), which can make full use of the information in available known complexes, is applied to detect protein complexes on the new PPI networks. The experimental results of SLPC on two different categories of yeast PPI networks demonstrate the effectiveness of the approach: compared with the original PPI networks, the best average improvements of 4.76, 6.81 and 15.75 percentage units in the F-score, accuracy and maximum matching ratio (MMR) are achieved, respectively; compared with the denoised PPI networks, the best average improvements of 3.91, 4.61 and 12.10 percentage units in the F-score, accuracy and MMR are achieved, respectively; compared with ClusterONE, the state-of-the-art complex detection method, on the denoised extended PPI networks, average improvements of 26.02 and 22.40 percentage units in the F-score and MMR are achieved, respectively. The experimental results show that the performance of SLPC improves substantially through the integration of newly available PPI data from the biomedical literature into the original and denoised PPI networks. In addition, our protein complex detection method achieves better performance than ClusterONE.
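    The topological-similarity filtering step can be approximated with a neighborhood Jaccard index: an interaction whose endpoints share few neighbors is treated as unreliable and dropped. A minimal sketch follows; the paper combines this with semantic similarity, and its exact similarity measure and cutoff are not specified here, so the Jaccard form and threshold are assumptions:

    ```python
    from collections import defaultdict

    def jaccard_similarity(net, u, v):
        """Topological similarity of two proteins: Jaccard index of their
        neighbor sets (each node included in its own set, so a direct
        edge between u and v contributes)."""
        nu = net[u] | {u}
        nv = net[v] | {v}
        return len(nu & nv) / len(nu | nv)

    def filter_edges(edges, threshold=0.3):
        """Keep only interactions whose endpoints are topologically similar."""
        net = defaultdict(set)
        for u, v in edges:
            net[u].add(v)
            net[v].add(u)
        return [(u, v) for u, v in edges
                if jaccard_similarity(net, u, v) >= threshold]
    ```

    On a triangle a-b-c with a pendant edge a-d, the pendant scores 0.5 while the triangle edges score 0.75 or higher, so a cutoff of 0.6 removes only the pendant interaction.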

  1. Dual chromatin recognition by the histone deacetylase complex HCHC is required for proper DNA methylation in Neurospora crassa

    Science.gov (United States)

    Honda, Shinji; Bicocca, Vincent T.; Gessaman, Jordan D.; Rountree, Michael R.; Yokoyama, Ayumi; Yu, Eun Y.; Selker, Jeanne M. L.; Selker, Eric U.

    2016-01-01

    DNA methylation, heterochromatin protein 1 (HP1), histone H3 lysine 9 (H3K9) methylation, histone deacetylation, and highly repeated sequences are prototypical heterochromatic features, but their interrelationships are not fully understood. Prior work showed that H3K9 methylation directs DNA methylation and histone deacetylation via HP1 in Neurospora crassa and that the histone deacetylase complex HCHC is required for proper DNA methylation. The complex consists of the chromodomain proteins HP1 and chromodomain protein 2 (CDP-2), the histone deacetylase HDA-1, and the AT-hook motif protein CDP-2/HDA-1–associated protein (CHAP). We show that the complex is required for proper chromosome segregation, dissect its function, and characterize interactions among its components. Our analyses revealed the existence of an HP1-based DNA methylation pathway independent of its chromodomain. The pathway partially depends on CHAP but not on the CDP-2 chromodomain. CDP-2 serves as a bridge between the recognition of H3K9 trimethylation (H3K9me3) by HP1 and the histone deacetylase activity of HDA-1. CHAP is also critical for HDA-1 localization to heterochromatin. Specifically, the CHAP zinc finger interacts directly with the HDA-1 argonaute-binding protein 2 (Arb2) domain, and the CHAP AT-hook motifs recognize heterochromatic regions by binding to AT-rich DNA. Our data shed light on the interrelationships among the prototypical heterochromatic features and support a model in which dual recognition by the HP1 chromodomain and the CHAP AT-hooks are required for proper heterochromatin formation. PMID:27681634

  2. A digital processing method for the analysis of complex nuclear spectra

    International Nuclear Information System (INIS)

    Madan, V.K.; Abani, M.C.; Bairi, B.R.

    1994-01-01

    This paper describes a digital processing method using frequency power spectra for the analysis of complex nuclear spectra. The power spectra were estimated by employing modified discrete Fourier transform. The method was applied to observed spectral envelopes. The results for separating closely-spaced doublets in nuclear spectra of low statistical precision compared favorably with those obtained by using a popular peak fitting program SAMPO. The paper also describes limitations of the peak fitting methods. It describes the advantages of digital processing techniques for type II digital signals including nuclear spectra. A compact computer program occupying less than 2.5 kByte of memory space was written in BASIC for the processing of observed spectral envelopes. (orig.)
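    The frequency power spectrum used above is the squared magnitude of the discrete Fourier transform of the spectral envelope. A minimal, unoptimized sketch follows (the paper's "modified" DFT details are not reproduced; this is the plain definition):

    ```python
    import math

    def power_spectrum(samples):
        """Naive O(n^2) discrete Fourier transform of a real signal;
        returns |X_k|^2 for each frequency bin k."""
        n = len(samples)
        spectrum = []
        for k in range(n):
            re = sum(x * math.cos(-2 * math.pi * k * t / n)
                     for t, x in enumerate(samples))
            im = sum(x * math.sin(-2 * math.pi * k * t / n)
                     for t, x in enumerate(samples))
            spectrum.append(re * re + im * im)
        return spectrum
    ```

    A pure cosine occupying bin 3 of a 32-point window concentrates its power there, which is the kind of frequency-domain separation exploited when resolving closely spaced doublets.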

  3. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Full Text Available Local line rolling forming is a common forming approach for the complex curvature plate of ships. However, the processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of the automated local line rolling forming system for producing complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, and this could result in a substantial reduction in calculation time. Thus, the application of the simplified deformation simulation method was further explored in the case of multiple rolling loading paths. Moreover, it was also utilized to calculate the local line rolling forming for the typical complex curvature plate of ships. Research findings indicated that the simplified deformation simulation method was an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.

  4. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    Directory of Open Access Journals (Sweden)

    Leandro de Jesus Benevides

    Full Text Available Abstract Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results using a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results of the different methods shows that the complex-network approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups.
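    Grouping sequences by thresholding a pairwise-similarity network and taking connected components is one minimal stand-in for the complex-network clustering described above. In this sketch the similarity matrix and threshold are illustrative assumptions, not the paper's actual measures:

    ```python
    def similarity_components(names, sim, threshold):
        """Build a graph whose edges link sequence pairs with similarity
        above `threshold`, then return its connected components
        (candidate groups such as mammals vs. fish)."""
        adj = {n: set() for n in names}
        for i, a in enumerate(names):
            for j, b in enumerate(names):
                if i < j and sim[i][j] >= threshold:
                    adj[a].add(b)
                    adj[b].add(a)
        seen, comps = set(), []
        for n in names:
            if n in seen:
                continue
            stack, comp = [n], set()
            while stack:                       # iterative depth-first search
                cur = stack.pop()
                if cur in comp:
                    continue
                comp.add(cur)
                seen.add(cur)
                stack.extend(adj[cur] - comp)
            comps.append(comp)
        return comps
    ```

    Two mutually similar mammal sequences and one dissimilar fish sequence, for instance, fall into two components, mirroring the C1/C2 separation reported above.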

  5. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    OpenAIRE

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power...
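    The Monte Carlo recipe described above (generate many samples under the assumed model, then count the proportion of significant results) can be sketched for the single-mediator model X -> M -> Y. This sketch uses the joint-significance test with simple regressions and a fixed 1.96 cutoff; those are simplifying assumptions, not the authors' latent variable or growth curve models:

    ```python
    import math
    import random

    def slope_and_t(x, y):
        """OLS slope of y on x and its t statistic (simple regression)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((v - mx) ** 2 for v in x)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        b = sxy / sxx
        resid = [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]
        se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
        return b, b / se

    def mediation_power(a=0.4, b=0.4, n=100, reps=500, crit=1.96, seed=1):
        """Monte Carlo power of the joint-significance test for the
        indirect effect a*b: both paths must be significant."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(reps):
            x = [rng.gauss(0, 1) for _ in range(n)]
            m = [a * xi + rng.gauss(0, 1) for xi in x]
            y = [b * mi + rng.gauss(0, 1) for mi in m]
            _, ta = slope_and_t(x, m)
            _, tb = slope_and_t(m, y)
            hits += (abs(ta) > crit) and (abs(tb) > crit)
        return hits / reps
    ```

    With moderate paths (a = b = 0.4) and n = 100 the estimated power is high, while under the null (a = b = 0) the rejection rate collapses, which is exactly the sample-size planning information such simulations provide.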

  6. A ghost-cell immersed boundary method for flow in complex geometry

    International Nuclear Information System (INIS)

    Tseng, Y.-H.; Ferziger, Joel H.

    2003-01-01

    An efficient ghost-cell immersed boundary method (GCIBM) for simulating turbulent flows in complex geometries is presented. A boundary condition is enforced through a ghost-cell method. The reconstruction procedure allows systematic development of numerical schemes for treating the immersed boundary while preserving the overall second-order accuracy of the base solver. Both Dirichlet and Neumann boundary conditions can be treated. The current ghost-cell treatment is suitable for both staggered and non-staggered Cartesian grids. The accuracy of the current method is validated using flow past a circular cylinder and large eddy simulation of turbulent flow over a wavy surface. Numerical results are compared with experimental data and boundary-fitted grid results. The method is further extended to an existing ocean model (MITGCM) to simulate geophysical flow over a three-dimensional bump. The method is easily implemented, as evidenced by our use of several existing codes.
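    The ghost-cell idea, filling cells just inside the immersed body so that interpolation across the boundary satisfies the desired condition, can be shown in its simplest one-dimensional form. This sketch assumes the boundary lies midway between the ghost cell and its interior image point; the paper's reconstruction is multidimensional and more general:

    ```python
    def ghost_value_dirichlet(phi_image, phi_boundary):
        """Dirichlet condition: with the boundary midway between the ghost
        cell and its interior image point, linear reconstruction across the
        boundary gives phi_ghost = 2*phi_b - phi_image."""
        return 2.0 * phi_boundary - phi_image

    def ghost_value_neumann(phi_image, dphi_dn, dn):
        """Neumann condition: prescribe the normal gradient with a one-sided
        difference, phi_ghost = phi_image + dphi_dn * dn, where dn is the
        image-to-ghost distance along the interior-to-ghost normal
        (sign convention is an assumption of this sketch)."""
        return phi_image + dphi_dn * dn
    ```

    For example, an interior image value of 1.0 with a wall value of 0.5 requires a ghost value of 0.0, so that the midpoint of the ghost-image segment recovers the wall value; a zero normal gradient simply mirrors the interior value.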

  7. Colorimetric method for enzymatic screening assay of ATP using Fe(III)-xylenol orange complex formation.

    Science.gov (United States)

    Ishida, Akihiko; Yamada, Yasuko; Kamidate, Tamio

    2008-11-01

    In hygiene management, recently there has been a significant need for screening methods for microbial contamination by visual observation or with commonly used colorimetric apparatus. The amount of adenosine triphosphate (ATP) can serve as the index of a microorganism. This paper describes the development of a colorimetric method for the assay of ATP, using enzymatic cycling and Fe(III)-xylenol orange (XO) complex formation. The color characteristics of the Fe(III)-XO complexes, which show a distinct color change from yellow to purple, assist the visual observation in screening work. In this method, a trace amount of ATP was converted to pyruvate, which was further amplified exponentially with coupled enzymatic reactions. Eventually, pyruvate was converted to the Fe(III)-XO complexes through pyruvate oxidase reaction and Fe(II) oxidation. As the assay result, yellow or purple color was observed: A yellow color indicates that the ATP concentration is lower than the criterion of the test, and a purple color indicates that the ATP concentration is higher than the criterion. The method was applied to the assay of ATP extracted from Escherichia coli cells added to cow milk.

  8. Neural Interfaces for Intracortical Recording: Requirements, Fabrication Methods, and Characteristics

    Directory of Open Access Journals (Sweden)

    Katarzyna M. Szostak

    2017-12-01

    Full Text Available Implantable neural interfaces for central nervous system research have been designed with wire, polymer, or micromachining technologies over the past 70 years. Research on biocompatible materials, ideal probe shapes, and insertion methods has resulted in building more and more capable neural interfaces. Although the trend is promising, the long-term reliability of such devices has not yet met the required criteria for chronic human application. The performance of neural interfaces in chronic settings often degrades due to foreign body response to the implant that is initiated by the surgical procedure, and related to the probe structure, and material properties used in fabricating the neural interface. In this review, we identify the key requirements for neural interfaces for intracortical recording, describe the three different types of probes—microwire, micromachined, and polymer-based probes; their materials, fabrication methods, and discuss their characteristics and related challenges.

  9. The GARP complex is required for cellular sphingolipid homeostasis

    DEFF Research Database (Denmark)

    Fröhlich, Florian; Petit, Constance; Kory, Nora

    2015-01-01

    (GARP) complex, which functions in endosome-to-Golgi retrograde vesicular transport, as a critical player in sphingolipid homeostasis. GARP deficiency leads to accumulation of sphingolipid synthesis intermediates, changes in sterol distribution, and lysosomal dysfunction. A GARP complex mutation...... analogous to a VPS53 allele causing progressive cerebello-cerebral atrophy type 2 (PCCA2) in humans exhibits similar, albeit weaker, phenotypes in yeast, providing mechanistic insights into disease pathogenesis. Inhibition of the first step of de novo sphingolipid synthesis is sufficient to mitigate many...

  10. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    Science.gov (United States)

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  11. Complex-plane strategy for computing rotating polytropic models - efficiency and accuracy of the complex first-order perturbation theory

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1988-01-01

    In this paper, a numerical method is developed for determining the structure distortion of a polytropic star which rotates either uniformly or differentially. This method carries out the required numerical integrations in the complex plane. The method is implemented to compute indicative quantities, such as the critical perturbation parameter which represents an upper limit in the rotational behavior of the star. From such indicative results, it is inferred that this method achieves impressive improvement over other relevant methods; most important, it is comparable to some of the most elaborate and accurate techniques on the subject. It is also shown that the use of this method with Chandrasekhar's first-order perturbation theory yields an immediate drastic improvement of the results. Thus, there is no need - for most applications concerning rotating polytropic models - to proceed to the further use of the method with higher-order techniques, unless the maximum accuracy of the method is required. 31 references

  12. Summer School Mathematical Foundations of Complex Networked Information Systems

    CERN Document Server

    Fosson, Sophie; Ravazzi, Chiara

    2015-01-01

    Introducing the reader to the mathematics beyond complex networked systems, these lecture notes investigate graph theory, graphical models, and methods from statistical physics. Complex networked systems play a fundamental role in our society, both in everyday life and in scientific research, with applications ranging from physics and biology to economics and finance. The book is self-contained, and requires only an undergraduate mathematical background.

  13. Functional analytic methods in complex analysis and applications to partial differential equations

    International Nuclear Information System (INIS)

    Mshimba, A.S.A.; Tutschke, W.

    1990-01-01

    The volume contains 24 lectures given at the Workshop on Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations held in Trieste, Italy, between 8-19 February 1988, at the ICTP. A separate abstract was prepared for each of these lectures. Refs and figs

  14. Evaluating polymer degradation with complex mixtures using a simplified surface area method.

    Science.gov (United States)

    Steele, Kandace M; Pelham, Todd; Phalen, Robert N

    2017-09-01

    Chemical-resistant gloves, designed to protect workers from chemical hazards, are made from a variety of polymer materials such as plastic, rubber, and synthetic rubber. One material does not provide protection against all chemicals, thus proper polymer selection is critical. Standardized testing, such as chemical degradation tests, are used to aid in the selection process. The current methods of degradation ratings based on changes in weight or tensile properties can be expensive and data often do not exist for complex chemical mixtures. There are hundreds of thousands of chemical products on the market that do not have chemical resistance data for polymer selection. The method described in this study provides an inexpensive alternative to gravimetric analysis. This method uses surface area change to evaluate degradation of a polymer material. Degradation tests for 5 polymer types against 50 complex mixtures were conducted using both gravimetric and surface area methods. The percent change data were compared between the two methods. The resulting regression line was y = 0.48x + 0.019, in units of percent, and the Pearson correlation coefficient was r = 0.9537 (p ≤ 0.05), which indicated a strong correlation between percent weight change and percent surface area change. On average, the percent change for surface area was about half that of the weight change. Using this information, an equivalent rating system was developed for determining the chemical degradation of polymer gloves using surface area.
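    The mapping from a measured surface-area change to an equivalent weight-change rating follows directly from the reported regression line y = 0.48x + 0.019. A small sketch, assuming y is the percent surface-area change and x the percent weight change (inferred from the statement that the area change averaged about half the weight change):

    ```python
    def percent_change(before, after):
        """Percent change of a measured quantity (area or weight)."""
        return (after - before) / before * 100.0

    def equivalent_weight_change(area_before, area_after):
        """Convert a measured % surface-area change into the equivalent
        % weight change by inverting the study's regression line
        y = 0.48*x + 0.019, i.e. x = (y - 0.019) / 0.48."""
        y = percent_change(area_before, area_after)
        return (y - 0.019) / 0.48
    ```

    A glove coupon swelling from 10.0 to 11.0 cm² (a 10% area change) thus corresponds to roughly a 20.8% weight change, consistent with area changes running at about half the weight changes.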

  15. The application of HP-GFC chromatographic method for the analysis of oligosaccharides in bioactive complexes

    Directory of Open Access Journals (Sweden)

    Savić Ivan

    2009-01-01

    Full Text Available The aim of this work was to optimize a GFC method for the analysis of bioactive metal (Cu, Co and Fe) complexes with oligosaccharides (dextran and pullulan). Bioactive metal complexes with oligosaccharides were synthesized by an original procedure. GFC was used to study the molecular weight distribution and polymerization degree of the oligosaccharides and the bioactive metal complexes. The metal binding in the complexes depends on the ligand polymerization degree and the presence of OH groups in the coordination sphere of the central metal ion. The interactions between oligosaccharides and metal ions are very important in veterinary medicine, agriculture, pharmacy and medicine.

  16. A new high-throughput LC-MS method for the analysis of complex fructan mixtures

    DEFF Research Database (Denmark)

    Verspreet, Joran; Hansen, Anders Holmgaard; Dornez, Emmie

    2014-01-01

    In this paper, a new liquid chromatography-mass spectrometry (LC-MS) method for the analysis of complex fructan mixtures is presented. In this method, columns with a trifunctional C18 alkyl stationary phase (T3) were used and their performance compared with that of a porous graphitized carbon (PGC...

  17. Finding optimal interaction interface alignments between biological complexes

    KAUST Repository

    Cui, Xuefeng

    2015-06-13

    Motivation: Biological molecules perform their functions through interactions with other molecules. Structure alignment of interaction interfaces between biological complexes is an indispensable step in detecting their structural similarities, which are keys to understanding their evolutionary histories and functions. Although various structure alignment methods have been developed to successfully assess the similarities of protein structures or certain types of interaction interfaces, existing alignment tools cannot directly align arbitrary types of interfaces formed by protein, DNA or RNA molecules. Specifically, they require a 'blackbox preprocessing' to standardize interface types and chain identifiers. Yet their performance is limited and sometimes unsatisfactory. Results: Here we introduce a novel method, PROSTA-inter, that automatically determines and aligns interaction interfaces between two arbitrary types of complex structures. Our method uses sequentially remote fragments to search for the optimal superimposition. The optimal residue matching problem is then formulated as a maximum weighted bipartite matching problem to detect the optimal sequence order-independent alignment. Benchmark evaluation on all non-redundant protein-DNA complexes in the PDB shows significant performance improvement of our method over TM-align and iAlign (with the 'blackbox preprocessing'). Two case studies in which our method discovers, for the first time, structural similarities between two pairs of functionally related protein-DNA complexes are presented. We further demonstrate the power of our method in detecting structural similarities between a protein-protein complex and a protein-RNA complex, which is biologically known as a protein-RNA mimicry case. © The Author 2015. Published by Oxford University Press.
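To illustrate the matching step described above (not the actual PROSTA-inter implementation): sequence order-independent residue matching can be cast as maximum weighted bipartite matching. This brute-force sketch, with a hypothetical 3×3 similarity matrix, only works for tiny inputs; real tools use polynomial-time assignment algorithms.

```python
# score[i][j]: hypothetical similarity between residue i of interface A and
# residue j of interface B after superimposition.
from itertools import permutations

score = [
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.5, 0.3, 0.7],
]

def best_matching(score):
    """Return (total score, assignment) maximizing the sum of matched weights."""
    n = len(score)
    best = (float("-inf"), None)
    for perm in permutations(range(n)):            # every one-to-one assignment
        total = sum(score[i][perm[i]] for i in range(n))
        best = max(best, (total, perm))
    return best

total, assignment = best_matching(score)
print(total, assignment)
```

For larger interfaces, the same objective is solved with the Hungarian algorithm rather than enumeration.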

  18. Application of a non-contiguous grid generation method to complex configurations

    International Nuclear Information System (INIS)

    Chen, S.; McIlwain, S.; Khalid, M.

    2003-01-01

    An economical non-contiguous grid generation method was developed to efficiently generate structured grids for complex 3D problems. Compared with traditional contiguous grids, this new approach generated grids for different block clusters independently and was able to distribute the grid points more economically according to the user's specific topology design. The method was evaluated by applying it to a Navier-Stokes computation of flow past a hypersonic projectile. Both the flow velocity and the heat transfer characteristics of the projectile agreed qualitatively with other numerical data in the literature and with available field data. Detailed grid topology designs for 3D geometries were addressed, and the advantages of this approach were analysed and compared with traditional contiguous grid generation methods. (author)

  19. Hexographic Method of Complex Town-Planning Terrain Estimate

    Science.gov (United States)

    Khudyakov, A. Ju

    2017-11-01

    The article deals with the vital problem of complex town-planning analysis based on the “hexographic” graphic-analytic method, compares it with conventional terrain estimate methods, and contains examples of the method's application. It discloses the author's procedure for estimating restrictions and building a mathematical model that reflects not only conventional town-planning restrictions but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory's potential, and an unlimited number of estimated factors can be used. The method can be used for the integrated assessment of urban areas, as well as for preliminary evaluation of a territory's commercial attractiveness in the preparation of investment projects. The technique produces simple, informative graphics whose interpretation is straightforward for experts; a definite advantage is that the results are readily perceived by non-professionals as well. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in the Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also

  20. Microreactor and method for preparing a radiolabeled complex or a biomolecule conjugate

    Energy Technology Data Exchange (ETDEWEB)

    Reichert, David E; Kenis, Paul J. A.; Wheeler, Tobias D; Desai, Amit V; Zeng, Dexing; Onal, Birce C

    2015-03-17

    A microreactor for preparing a radiolabeled complex or a biomolecule conjugate comprises a microchannel for fluid flow, where the microchannel comprises a mixing portion comprising one or more passive mixing elements, and a reservoir for incubating a mixed fluid. The reservoir is in fluid communication with the microchannel and is disposed downstream of the mixing portion. A method of preparing a radiolabeled complex includes flowing a radiometal solution comprising a metallic radionuclide through a downstream mixing portion of a microchannel, where the downstream mixing portion includes one or more passive mixing elements, and flowing a ligand solution comprising a bifunctional chelator through the downstream mixing portion. The ligand solution and the radiometal solution are passively mixed while in the downstream mixing portion to initiate a chelation reaction between the metallic radionuclide and the bifunctional chelator. The chelation reaction is completed to form a radiolabeled complex.

  1. Directed forgetting of complex pictures in an item method paradigm.

    Science.gov (United States)

    Hauswald, Anne; Kissler, Johanna

    2008-11-01

    An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task where all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect--that is, better recognition of "to-be-remembered" than of "to-be-forgotten" pictures--was observed, although its size was smaller than previously reported for words or line drawings. The magnitude of the directed forgetting effect correlated negatively with participants' depression and dissociation scores. The results indicate that, at least in an item method, directed forgetting occurs for complex pictures as well as words and simple line drawings. Furthermore, people with higher levels of dissociative or depressive symptoms exhibit altered memory encoding patterns.

  2. Sensitivity Analysis of Hydraulic Methods Regarding Hydromorphologic Data Derivation Methods to Determine Environmental Water Requirements

    Directory of Open Access Journals (Sweden)

    Alireza Shokoohi

    2015-07-01

    Full Text Available This paper studies the accuracy of hydraulic methods in determining environmental flow requirements. Despite the vital importance of river cross-sectional data for hydraulic methods, few studies have focused on the criteria for deriving these data. The present study shows that the depth of the cross section has a meaningful effect on the results obtained from hydraulic methods and that, considering fish as the index species for river habitat analysis, an optimum depth of 1 m should be assumed when deriving information from cross sections. The second important parameter required for extracting the geometric and hydraulic properties of rivers is the selection of an appropriate depth increment, ∆y; in the present research, this parameter was found to be 1 cm. The uncertainty of the environmental discharge evaluation, when allocating water in areas with water scarcity, should be kept as low as possible. The Manning friction coefficient (n) is an important factor in river discharge calculation. Using a range of n equal to 3 times the standard deviation for the study area, it is shown that the influence of the friction coefficient on the estimation of environmental flow is much less than its influence on the calculation of river discharge.
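The sensitivity point above can be sketched numerically with the Manning equation, Q = (1/n) A R^(2/3) S^(1/2), for a hypothetical rectangular reach; the geometry, mean n and spread below are illustrative assumptions, not the study's values.

```python
# Discharge from the Manning equation for a rectangular channel, swept over
# a +/- 3-sigma range of the friction coefficient n.
import math

def manning_discharge(n, width, depth, slope):
    """Discharge (m^3/s) for a rectangular channel via the Manning equation."""
    area = width * depth                 # flow area A
    perimeter = width + 2.0 * depth      # wetted perimeter P
    radius = area / perimeter            # hydraulic radius R = A / P
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * math.sqrt(slope)

width, depth, slope = 20.0, 1.0, 0.001   # hypothetical reach geometry and bed slope
n_mean, n_sd = 0.035, 0.005              # assumed Manning n and its standard deviation
for n in (n_mean - 3 * n_sd, n_mean, n_mean + 3 * n_sd):
    print(f"n = {n:.3f} -> Q = {manning_discharge(n, width, depth, slope):.1f} m^3/s")
```

Because Q scales as 1/n, the spread in computed discharge is directly proportional to the spread in n; the paper's point is that the derived environmental flow fraction is far less sensitive.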

  3. Immersed boundary methods for high-resolution simulation of atmospheric boundary-layer flow over complex terrain

    Science.gov (United States)

    Lundquist, Katherine Ann

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. 
Specified diurnal heating in a valley, producing anabatic winds, is used to validate the
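The scalar-advection verification mentioned above can be sketched in miniature: advect a bump with a first-order upwind scheme on a periodic domain (flat terrain, so an exact solution exists) and measure the error against the analytical, translated profile. All parameters are illustrative, not from the dissertation.

```python
# One revolution of a Gaussian bump under first-order upwind advection,
# compared with the exact periodic solution.
import math

nx, L, u, cfl = 200, 1.0, 1.0, 0.5      # grid points, domain length, speed, CFL number
dx = L / nx
dt = cfl * dx / u
steps = int(round(L / (u * dt)))        # steps for one full revolution

def profile(x):
    """Initial scalar field: a narrow Gaussian bump centred at x = 0.5."""
    return math.exp(-200.0 * (x - 0.5) ** 2)

q = [profile(i * dx) for i in range(nx)]

for _ in range(steps):
    # First-order upwind update; q[i - 1] wraps at i = 0, giving periodic boundaries
    q = [q[i] - cfl * (q[i] - q[i - 1]) for i in range(nx)]

exact = [profile(i * dx) for i in range(nx)]    # exact solution after one period
err = max(abs(a - b) for a, b in zip(q, exact))
print(f"max error after one revolution: {err:.3f}")
```

The upwind scheme is stable but diffusive, so the peak decays noticeably; comparing such numerical solutions against the analytical one, across schemes, resolutions and terrain slopes, is the kind of error quantification the dissertation performs for terrain-following versus immersed-boundary grids.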

  4. Immersed Boundary Methods for High-Resolution Simulation of Atmospheric Boundary-Layer Flow Over Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, K A [Univ. of California, Berkeley, CA (United States)

    2010-05-12

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. 
Specified diurnal heating in a valley, producing anabatic winds, is used to validate the

  5. Purification of 2-oxo acid dehydrogenase multienzyme complexes from ox heart by a new method.

    OpenAIRE

    Stanley, C J; Perham, R N

    1980-01-01

    A new method is described that allows the parallel purification of the pyruvate dehydrogenase and 2-oxoglutarate dehydrogenase multienzyme complexes from ox heart without the need for prior isolation of mitochondria. All the assayable activity of the 2-oxo acid dehydrogenase complexes in the disrupted tissue is made soluble by the inclusion of non-ionic detergents such as Triton X-100 or Tween-80 in the buffer used for the initial extraction of the enzyme complexes. The yields of the pyruvate...

  6. An argumentation-based method for managing complex issues in design of infrastructural systems

    International Nuclear Information System (INIS)

    Marashi, Emad; Davis, John P.

    2006-01-01

    The many interacting and conflicting requirements of a wide range of stakeholders are the main sources of complexity in infrastructure and utility systems. We propose a systemic methodology based on negotiation and argumentation to help resolve complex issues and to facilitate options appraisal during the design of such systems. A process-based approach is used to assemble and propagate evidence on the performance and reliability of the system and its components, providing a success measure for different scenarios or design alternatives. The reliability of information sources and expert opinions is dealt with through an extension of the mathematical theory of evidence. This framework helps not only in capturing the reasoning behind design decisions, but also enables decision-makers to assess and compare the evidential support for each design option.
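The "mathematical theory of evidence" referenced above is the Dempster-Shafer framework. As a hedged sketch of its core combination step (not the paper's extension of it), the snippet below combines two hypothetical expert mass assignments over the outcomes {"reliable", "unreliable"} using Dempster's rule.

```python
# Dempster's rule: multiply masses over all focal-set pairs, keep mass on
# non-empty intersections, and renormalize by the non-conflicting mass.
def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass falling on contradiction
    k = 1.0 - conflict                       # normalization factor
    return {s: m / k for s, m in combined.items()}

R, U = frozenset({"reliable"}), frozenset({"unreliable"})
either = R | U                               # ignorance: could be either outcome
expert1 = {R: 0.6, either: 0.4}              # hypothetical expert opinions
expert2 = {R: 0.7, U: 0.2, either: 0.1}
result = dempster_combine(expert1, expert2)
print(result)
```

Concordant evidence reinforces belief in "reliable" while the explicit ignorance mass shrinks, which is how such frameworks propagate evidential support across design options.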

  7. A COMPARISON OF A SPECTROPHOTOMETRIC (QUERCETIN) METHOD AND AN ATOMIC-ABSORPTION METHOD FOR DETERMINATION OF TIN IN FOOD

    DEFF Research Database (Denmark)

    Engberg, Å

    1973-01-01

    Procedures for the determination of tin in food, which involve a spectrophotometric method (with the quercetin-tin complex) and an atomic-absorption method, are described. The precision of the complete methods and of the individual analytical steps required is evaluated, and the parameters...

  8. Systems and context modeling approach to requirements analysis

    Science.gov (United States)

    Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick

    2014-08-01

    Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.

  9. KDM2B recruitment of the Polycomb group complex, PRC1.1, requires cooperation between PCGF1 and BCORL1

    OpenAIRE

    Wong, Sarah J.; Gearhart, Micah D.; Taylor, Alexander B.; Nanyes, David R.; Ha, Daniel J.; Robinson, Angela K.; Artigas, Jason A.; Lee, Oliver J.; Demeler, Borries; Hart, P. John; Bardwell, Vivian J.; Kim, Chongwoo A.

    2016-01-01

    KDM2B recruits H2A-ubiquitinating activity of a non-canonical Polycomb Repression Complex 1 (PRC1.1) to CpG islands, facilitating gene repression. We investigated the molecular basis of recruitment using in vitro assembly assays to identify minimal components, subcomplexes and domains required for recruitment. A minimal four-component PRC1.1 complex can be assembled by combining two separately isolated subcomplexes: the DNA binding KDM2B/SKP1 heterodimer and the heterodimer of BCORL1 and the ...

  10. Proposed New Method of Interpretation of Infrared Ship Signature Requirements

    NARCIS (Netherlands)

    Neele, F.P.; Wilson, M.T.; Youern, K.

    2005-01-01

    A new method of deriving and defining requirements for the infrared signature of new ships is presented. The current approach is to specify the maximum allowed temperature or radiance contrast of the ship with respect to its background. At present, in most NATO countries, it is the contractor’s

  11. Recruitment of a SAP18-HDAC1 complex into HIV-1 virions and its requirement for viral replication.

    Directory of Open Access Journals (Sweden)

    Masha Sorin

    2009-06-01

    Full Text Available HIV-1 integrase (IN) is a virally encoded protein required for integration of viral cDNA into host chromosomes. INI1/hSNF5 is a component of the SWI/SNF complex that interacts with HIV-1 IN, is selectively incorporated into HIV-1 (but not other retroviral) virions, and modulates multiple steps, including particle production and infectivity. To gain further insight into the role of INI1 in HIV-1 replication, we screened for INI1-interacting proteins using the yeast two-hybrid system. We found that SAP18 (Sin3a-associated protein, 18 kD), a component of the Sin3a-HDAC1 complex, directly binds to INI1 in yeast, in vitro and in vivo. Interestingly, we found that IN also binds to SAP18 in vitro and in vivo. SAP18 and components of a Sin3A-HDAC1 complex were specifically incorporated into HIV-1 (but not SIV or HTLV-1) virions in an HIV-1 IN-dependent manner. Using a fluorescence-based assay, we found that HIV-1 (but not SIV) virion preparations harbour significant deacetylase activity, indicating the specific recruitment of catalytically active HDAC into the virions. To determine the requirement of virion-associated HDAC1 for HIV-1 replication, an inactive, transdominant negative mutant of HDAC1 (HDAC1(H141A)) was utilized. Incorporation of HDAC1(H141A) decreased the virion-associated histone deacetylase activity. Furthermore, incorporation of HDAC1(H141A) decreased the infectivity of HIV-1 (but not SIV) virions. The block in infectivity due to virion-associated HDAC1(H141A) occurred specifically at the early reverse transcription stage, while entry of the virions was unaffected. RNA-interference-mediated knock-down of HDAC1 in producer cells resulted in decreased virion-associated HDAC1 activity and a reduction in the infectivity of these virions. These studies indicate that HIV-1 IN and INI1/hSNF5 bind SAP18 and selectively recruit components of the Sin3a-HDAC1 complex into HIV-1 virions.
Furthermore, HIV-1 virion-associated HDAC1 is required for efficient early post

  12. Technique of Substantiating Requirements for the Vision Systems of Industrial Robotic Complexes

    Directory of Open Access Journals (Sweden)

    V. Ya. Kolyuchkin

    2015-01-01

    Full Text Available In the literature, there is a lack of approaches for substantiating technical requirements for the vision systems (VS) of industrial robotic complexes (IRC). Therefore, the objective of this work is to develop a technique for substantiating requirements for the main quality indicators of a VS functioning as part of an IRC. The proposed technique uses a model representation of the VS, which, as part of the IRC information system, sorts the objects in the work area and measures their linear and angular coordinates. To state the problem, the target function of a designed IRC is defined as the dependence of the IRC efficiency indicator on the VS quality indicators. The paper proposes using the probability of producing no defective products as the indicator of IRC efficiency. Based on the functions the VS performs as part of the IRC information system, the accepted VS quality indicators are the probability of proper recognition of objects in the IRC working area and the confidence probabilities of measuring the linear and angular orientation coordinates of objects within specified permissible errors. The specific values of these errors depend on the orientation errors of the working bodies of the manipulators that are part of the IRC. The paper presents mathematical expressions for the functional dependence of the probability of producing no defective products on the VS quality indicators and on the probability of failures of IRC technological equipment. The offered technique for substantiating engineering requirements for the VS of an IRC is novel. The results obtained in this work can be useful for professionals involved in IRC VS development and, in particular, in the development of VS algorithms and software.

  13. Complex problems require complex solutions: the utility of social quality theory for addressing the Social Determinants of Health

    Directory of Open Access Journals (Sweden)

    Ward Paul R

    2011-08-01

    Full Text Available Abstract Background In order to improve the health of the most vulnerable groups in society, the WHO Commission on Social Determinants of Health (CSDH) called for multi-sectoral action, which requires research and policy on the multiple and inter-linking factors shaping health outcomes. Most conceptual tools available to researchers tend to focus on singular and specific social determinants of health (SDH), e.g. social capital, empowerment or social inclusion. However, a new and innovative conceptual framework, known as social quality theory, facilitates a more complex and complete understanding of the SDH, with its focus on four domains: social cohesion, social inclusion, social empowerment and socioeconomic security, all within the same conceptual framework. This paper provides both an overview of social quality theory and findings from a national survey of social quality in Australia, as a means of demonstrating the operationalisation of the theory. Methods Data were collected using a national random postal survey of 1044 respondents in September 2009. Multivariate logistic regression analysis was conducted. Results Statistical analysis revealed that people on lower incomes (less than $45,000) experience worse social quality across all four domains: lower socio-economic security, lower levels of membership of organisations (lower social cohesion), higher levels of discrimination and less political action (lower social inclusion and lower social empowerment). The findings were mixed in terms of age, with people over 65 years experiencing lower socio-economic security, but having higher levels of social cohesion, experiencing lower levels of discrimination (higher social inclusion) and engaging in more political action (higher social empowerment). In terms of gender, women had higher social cohesion than men, although they also experienced more discrimination (lower social inclusion). Conclusions Applying social quality theory allows

  14. Method for calculating required shielding in medical x-ray rooms

    International Nuclear Information System (INIS)

    Karppinen, J.

    1997-10-01

    The new annual radiation dose limits - 20 mSv (previously 50 mSv) for radiation workers and 1 mSv (previously 5 mSv) for other persons - imply that the adequacy of existing radiation shielding must be re-evaluated. In principle, one could assume that the thicknesses of old radiation shields should be increased by about one or two half-value layers in order to comply with the new dose limits. However, the assumptions made in the earlier shielding calculations are highly conservative; the required shielding was often determined by applying the maximum high voltage of the x-ray tube to the whole workload. A more realistic calculation shows that increased shielding is typically not necessary if more practical x-ray tube voltages are used in the evaluation. We have developed a PC-based method for calculating x-ray shielding that is more realistic than the highly conservative method formerly used. The method may be used to evaluate an existing shield for compliance with the new regulations. As examples of these calculations, typical x-ray rooms are considered. The lead and concrete thickness requirements as a function of x-ray tube voltage and workload are also given in tables. (author)
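The half-value-layer reasoning above can be made concrete. A hedged sketch, with made-up numbers rather than the report's data: the barrier thickness needed to bring the unshielded annual dose at an occupied point down to a design limit, expressed in half-value layers (HVL).

```python
# Thickness so that dose * 0.5**(x / hvl) <= limit, i.e. x = hvl * log2(dose / limit).
import math

def required_thickness(unshielded_dose_mSv, limit_mSv, hvl_mm):
    """Barrier thickness (mm) reducing the unshielded dose to the limit."""
    if unshielded_dose_mSv <= limit_mSv:
        return 0.0                                        # no barrier needed
    n_hvl = math.log2(unshielded_dose_mSv / limit_mSv)    # number of half-value layers
    return n_hvl * hvl_mm

# Hypothetical case: 40 mSv/year unshielded at the occupied point, 1 mSv/year
# limit, and a lead HVL of 0.25 mm at the working tube voltage.
x = required_thickness(40.0, 1.0, 0.25)
print(f"required lead: {x:.2f} mm")
```

Note that tightening a dose limit by a factor f adds only log2(f) half-value layers, which is why the naive estimate quoted in the abstract is "about one or two half-value layers" for factor 2.5 to 5 reductions.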

  15. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and to provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role the visionics device plays in this task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  16. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    Science.gov (United States)

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving changes in methods are discussed, and a decision process is proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  17. A complex guided spectral transform Lanczos method for studying quantum resonance states

    International Nuclear Information System (INIS)

    Yu, Hua-Gen

    2014-01-01

    A complex guided spectral transform Lanczos (cGSTL) algorithm is proposed to compute both bound and resonance states, including energies, widths and wavefunctions. The algorithm comprises two layers of complex-symmetric Lanczos iterations. A short inner-layer iteration produces a set of complex formally orthogonal Lanczos (cFOL) polynomials, which are used to span the guided spectral transform function determined by a retarded Green operator. An outer-layer iteration is then carried out with the transform function to compute the eigen-pairs of the system. The guided spectral transform function is designed to have the same wavefunctions as the eigenstates of the original Hamiltonian in the spectral range of interest. Therefore the energies and/or widths of bound or resonance states can be easily computed with their wavefunctions or by using a root-searching method from the guided spectral transform surface. The new cGSTL algorithm is applied to bound and resonance states of HO, and compared to previous calculations.
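One ingredient of the algorithm above can be sketched in isolation: a complex-symmetric Lanczos recurrence built on the formally orthogonal bilinear form x^T y (no complex conjugation), which is what distinguishes it from Hermitian Lanczos. The 3×3 complex-symmetric matrix below is a made-up stand-in for a complex-scaled Hamiltonian; real resonance calculations add the spectral transform and much larger dimensions.

```python
# Complex-symmetric Lanczos: build tridiagonal coefficients using the
# bilinear (formally orthogonal) inner product instead of the Hermitian one.
def bilinear(x, y):
    return sum(a * b for a, b in zip(x, y))      # x^T y, no conjugation

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def cs_lanczos(A, v0, m):
    """Return tridiagonal coefficients (alphas, betas) after m Lanczos steps."""
    norm = bilinear(v0, v0) ** 0.5               # formal (complex) norm
    v = [c / norm for c in v0]
    v_prev, beta = [0.0] * len(v0), 0.0
    alphas, betas = [], []
    for _ in range(m):
        w = matvec(A, v)
        alpha = bilinear(v, w)                   # diagonal entry
        w = [wi - alpha * vi - beta * pi for wi, vi, pi in zip(w, v, v_prev)]
        alphas.append(alpha)
        beta = bilinear(w, w) ** 0.5             # off-diagonal entry (complex in general)
        betas.append(beta)
        v_prev, v = v, [wi / beta for wi in w]
    return alphas, betas

A = [[2 + 0.1j, 0.5, 0.0],                       # hypothetical complex-symmetric
     [0.5, 1 + 0.2j, 0.3],                       # (not Hermitian) matrix
     [0.0, 0.3, 3 + 0.05j]]
alphas, betas = cs_lanczos(A, [1.0, 1.0, 1.0], 2)
print(alphas, betas)
```

Eigenvalues of the resulting complex-symmetric tridiagonal matrix approximate complex eigenvalues of A, whose real and imaginary parts give energies and widths.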

  18. Synthetic Multiple-Imputation Procedure for Multistage Complex Samples

    Directory of Open Access Journals (Sweden)

    Zhou Hanzhi

    2016-03-01

    Full Text Available Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES III) data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well-handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.
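Although the procedure above is synthetic rather than standard MI, it borrows the usual MI combining (Rubin's) rules for its inferences. A minimal sketch of those rules, with invented point estimates and variances:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine m point estimates and their within-imputation variances
    using Rubin's rules, the inferential machinery that MI-based
    procedures build on."""
    m = len(estimates)
    q_bar = np.mean(estimates)                  # combined point estimate
    w_bar = np.mean(variances)                  # within-imputation variance
    b = np.var(estimates, ddof=1)               # between-imputation variance
    t = w_bar + (1 + 1 / m) * b                 # total variance
    df = (m - 1) * (1 + w_bar / ((1 + 1 / m) * b)) ** 2  # Rubin's degrees of freedom
    return q_bar, t, df

# Five hypothetical completed-data analyses of the same quantity
q, t, df = rubin_combine([2.1, 1.9, 2.3, 2.0, 2.2],
                         [0.04, 0.05, 0.04, 0.06, 0.05])
```

The combined estimate q and total variance t then feed a t-based confidence interval with df degrees of freedom.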

  19. Laser beam complex amplitude measurement by phase diversity.

    Science.gov (United States)

    Védrenne, Nicolas; Mugnier, Laurent M; Michau, Vincent; Velluet, Marie-Thérèse; Bierent, Rudolph

    2014-02-24

    The control of the optical quality of a laser beam requires a complex amplitude measurement able to deal with strong modulus variations and potentially highly perturbed wavefronts. The method proposed here consists of an extension of phase diversity to complex amplitude measurements that is effective for highly perturbed beams. Named camelot, for Complex Amplitude MEasurement by a Likelihood Optimization Tool, it relies on the acquisition and processing of a few images of the beam section taken along the optical path. The complex amplitude of the beam is retrieved from the images by the minimization of a maximum a posteriori error metric between the images and a model of the beam propagation. The analytical formalism of the method and its experimental validation are presented. The modulus of the beam is compared to a measurement of the beam profile; the phase of the beam is compared to a conventional phase diversity estimate. The precision of the experimental measurements is investigated by numerical simulations.

  20. Evaluation of methods to estimate the essential amino acids requirements of fish from the muscle amino acid profile

    Directory of Open Access Journals (Sweden)

    Álvaro José de Almeida Bicudo

    2014-03-01

    Full Text Available Many methods to estimate amino acid requirements based on the amino acid profile of fish have been proposed. This study evaluates the methodologies proposed by Meyer & Fracalossi (2005) and by Tacon (1989) to estimate the amino acid requirements of fish, which do not require prior knowledge of the nutritional requirement of a reference amino acid. Data on the amino acid requirements of pacu, Piaractus mesopotamicus, were used to validate the accuracy of those methods. Meyer & Fracalossi's and Tacon's methodologies estimated the lysine requirement of pacu, respectively, at 13 and 23% above the requirement determined using the dose-response method. The values estimated by both methods lie within the range of requirements determined for other omnivorous fish species, the Meyer & Fracalossi (2005) method showing better accuracy.

  1. Determination of rhenium in ores of complex composition by the kinetic method

    Energy Technology Data Exchange (ETDEWEB)

    Pavlova, L G; Gurkina, T V [Kazakhskij Gosudarstvennyj Univ., Alma-Ata (USSR); Tsentral'naya Lab. Yuzhno-Kazakhstanskogo Geologicheskogo Upravleniya, Alma-Ata (USSR)]

    1979-09-01

    A kinetic method for rhenium determination is proposed, based on the catalytic effect of rhenium in the reaction of malachite green with thiourea. The accompanying elements, excluding molybdenum, do not interfere with the rhenium determination at concentrations of up to 0.1 M. The interfering influence of molybdenum can be eliminated by addition of tartaric acid to the solution up to a concentration of 0.1 M. This makes it possible to determine rhenium in the presence of a 1000-fold quantity of molybdenum. The method is applicable to the analysis of complex copper-zinc sulphide ores.
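The principle of such catalytic kinetic determinations is that the observed rate of the indicator reaction grows linearly with the catalyst concentration, so a rate-based calibration line can be fitted and inverted. A hypothetical sketch (all rate constants and concentrations are invented, not taken from the paper):

```python
import numpy as np

# Toy kinetic (catalytic) calibration: the observed pseudo-first-order rate
# constant of the indicator reaction is assumed proportional to the rhenium
# concentration, on top of the uncatalysed background rate.
re_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])      # standards (arbitrary units)
k_obs = 0.002 + 0.0015 * re_conc                   # synthetic noise-free rates

slope, intercept = np.polyfit(re_conc, k_obs, 1)   # calibration line

def rhenium_from_rate(k):
    """Read an unknown concentration off the kinetic calibration line."""
    return (k - intercept) / slope

# An unknown whose measured rate corresponds to 3.0 concentration units
unknown = rhenium_from_rate(0.002 + 0.0015 * 3.0)
```

In practice the rate would be extracted from absorbance-versus-time traces of the malachite green reaction before entering the calibration.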

  2. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    Science.gov (United States)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalance forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
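For reference, the implicit Newmark scheme mentioned above can be sketched for a linear SDOF oscillator, where the implicit solve is a single division per step. This is a toy stand-in for the hybrid-simulation integrators studied in the paper; the oscillator and step parameters are invented.

```python
import numpy as np

def newmark_sdof(m, c, k, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) for a linear SDOF oscillator
    m*u'' + c*u' + k*u = 0. Returns the displacement history."""
    u, v = u0, v0
    a = -(c * v + k * u) / m            # consistent initial acceleration
    hist = [u]
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for _ in range(nsteps):
        # effective load from the known state (standard Newmark predictors)
        f = (m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
        u_new = f / keff
        v_new = ((gamma / (beta * dt)) * (u_new - u) + (1 - gamma / beta) * v
                 + dt * (1 - gamma / (2 * beta)) * a)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return np.array(hist)

# Undamped unit oscillator: the exact solution is cos(t)
dt, n = 0.01, 628                       # roughly one period of cos(t)
u = newmark_sdof(m=1.0, c=0.0, k=1.0, u0=1.0, v0=0.0, dt=dt, nsteps=n)
```

In a nonlinear hybrid test the single division per step becomes a fixed number of Newton-type iterations, which is exactly the design choice the case studies examine.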

  3. KidReporter : a method for engaging children in making a newspaper to gather user requirements

    NARCIS (Netherlands)

    Bekker, M.M.; Beusmans, J.; Keyson, D.V.; Lloyd, P.A.; Bekker, M.M.; Markopoulos, P.; Tsikalkina, M.

    2002-01-01

    We describe a design method, called the KidReporter method, for gathering user requirements from children. Two school classes participated in making a newspaper about a zoo, to gather requirements for the design process of an interactive educational game. The educational game was developed to

  4. A parallel FE-FV scheme to solve fluid flow in complex geologic media

    NARCIS (Netherlands)

    Coumou, Dim; Matthäi, Stephan; Geiger, Sebastian; Driesner, Thomas

    2008-01-01

    Field data-based simulations of geologic systems require much computational time because of their mathematical complexity and the often desired large scales in space and time. To conduct accurate simulations in an acceptable time period, methods to reduce runtime are required. A parallelization

  5. Solar control: A general method for modelling of solar gains through complex facades in building simulation programs

    Energy Technology Data Exchange (ETDEWEB)

    Kuhn, Tilmann E.; Herkel, Sebastian [Fraunhofer Institute for Solar Energy Systems ISE, Heidenhofstr. 2, 79110 Freiburg (Germany); Frontini, Francesco [Fraunhofer Institute for Solar Energy Systems ISE, Heidenhofstr. 2, 79110 Freiburg (Germany); Politecnico di Milano, Dipartimento BEST, Via Bonardi 9, 20133 Milano (Italy); Strachan, Paul; Kokogiannakis, Georgios [ESRU, Dept. of Mechanical Eng., University of Strathclyde, Glasgow G1 1XJ (United Kingdom)

    2011-01-15

    This paper describes a new general method for building simulation programs which is intended to be used for the modelling of complex facades. The term 'complex facades' is used to designate facades with venetian blinds, prismatic layers, light-redirecting surfaces, etc. In all these cases, the facade properties have a complex angular dependence. In addition to this, such facades very often have non-airtight layers and/or imperfect components (e.g. non-ideal sharp edges, non-flat surfaces, etc.). Therefore building planners often had to neglect some of the innovative features and to use 'work-arounds' in order to approximate the properties of complex facades in building simulation programs. A well-defined methodology for these cases was missing. This paper presents such a general methodology. The main advantage of the new method is that it only uses measurable quantities of the transparent or translucent part of the facade as a whole. This is the main difference in comparison with state-of-the-art modelling based on the characteristics of the individual subcomponents, which is often impossible due to non-existing heat- and/or light-transfer models within the complex facade. It is shown that the new method can significantly increase the accuracy of heating/cooling loads and room temperatures. (author)

  6. NetMHCcons: a consensus method for the major histocompatibility complex class I predictions

    DEFF Research Database (Denmark)

    Karosiene, Edita; Lundegaard, Claus; Lund, Ole

    2012-01-01

    A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depe...... at www.cbs.dtu.dk/services/NetMHCcons, and allows the user in an automatic manner to obtain the most accurate predictions for any given MHC molecule....

  7. Methods Dealing with Complexity in Selecting Joint Venture Contractors for Large-Scale Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-01-01

    Full Text Available The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to 'go it alone'. As a consequence, joint venture contractors with diverse strengths and weaknesses cooperatively bid for projects. Understanding project complexity and deciding on the optimal joint venture contractor is challenging. This paper studies how to select joint venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one based on ideal points and the other based on balanced ideal advantages. Both methods consider individual differences in expert judgment and contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China is used to demonstrate how to apply these two methods and their advantages.
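An ideal-point ranking of candidate joint ventures can be sketched along TOPSIS lines: score each candidate's distance to an ideal and an anti-ideal alternative in weighted, normalised attribute space. The attribute scores and weights below are invented, and the paper's balanced-ideal-advantage variant is not reproduced.

```python
import numpy as np

# Rows: candidate joint ventures; columns: benefit-type attributes
# (e.g. technical capacity, financial strength, experience). All invented.
scores = np.array([[0.8, 0.6, 0.9],    # candidate A
                   [0.7, 0.9, 0.6],    # candidate B
                   [0.9, 0.5, 0.7]])   # candidate C
weights = np.array([0.5, 0.3, 0.2])

v = scores / np.linalg.norm(scores, axis=0) * weights   # weighted normalised matrix
ideal, anti = v.max(axis=0), v.min(axis=0)              # ideal / anti-ideal points
d_pos = np.linalg.norm(v - ideal, axis=1)               # distance to ideal
d_neg = np.linalg.norm(v - anti, axis=1)                # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)                     # higher is better
ranking = np.argsort(-closeness)                        # best candidate first
```

Aggregating several experts' score matrices before this step is one way to reflect the individual differences in judgment the paper emphasises.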

  8. Fault tree construction of hybrid system requirements using qualitative formal method

    International Nuclear Information System (INIS)

    Lee, Jang-Soo; Cha, Sung-Deok

    2005-01-01

    When specifying requirements for software controlling hybrid systems and conducting safety analysis, engineers experience that requirements are often known only in qualitative terms and that existing fault tree analysis techniques provide little guidance on formulating and evaluating potential failure modes. In this paper, we propose Causal Requirements Safety Analysis (CRSA) as a technique to qualitatively evaluate the causal relationship between software faults and physical hazards. This technique, extending the qualitative formal method process and utilizing information captured in the state trajectory, provides specific guidelines on how to identify failure modes and the relationships among them. Using a simplified electrical power system as an example, we describe step-by-step procedures for conducting CRSA. Our experience of applying CRSA to perform fault tree analysis on requirements for the Wolsong nuclear power plant shutdown system indicates that CRSA is an effective technique in assisting safety engineers.

  9. Developments based on stochastic and determinist methods for studying complex nuclear systems; Developpements utilisant des methodes stochastiques et deterministes pour l'analyse de systemes nucleaires complexes

    Energy Technology Data Exchange (ETDEWEB)

    Giffard, F X

    2000-05-19

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)

  10. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    Science.gov (United States)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As the sample sizes available to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used and too complex to make inferences about the underlying process.
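The core idea, measuring the apparent skill a model achieves on artificially generated spectra that carry no signal, can be sketched with ordinary least squares standing in for the seven regression techniques (sample size and band counts are invented):

```python
import numpy as np

# Toy version of the NOIS idea: refit the model on artificially generated
# (pure-noise) spectra; any training skill it shows there is overfitting.
rng = np.random.default_rng(42)
n, waveband_counts = 30, [2, 10, 25]

y = rng.standard_normal(n)                      # response with no real signal
r2_noise = []
for k in waveband_counts:
    X = rng.standard_normal((n, k))             # artificial noise spectra
    design = np.column_stack([np.ones(n), X])   # intercept + k noise bands
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    r2_noise.append(r2)                         # training R^2 on pure noise
# r2_noise grows with the number of bands: complexity the data cannot support
```

An overfitting index of this kind lets one pick the largest complexity whose noise-data skill stays negligible, instead of relying on cross-validation alone.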

  11. A New Method to Develop Human Dental Pulp Cells and Platelet-rich Fibrin Complex.

    Science.gov (United States)

    He, Xuan; Chen, Wen-Xia; Ban, Guifei; Wei, Wei; Zhou, Jun; Chen, Wen-Jin; Li, Xian-Yu

    2016-11-01

    Platelet-rich fibrin (PRF) has been used as a scaffold material in various tissue regeneration studies. In the previous methods to combine seed cells with PRF, the structure of PRF was damaged, and the manipulation time in vitro was also increased. The objective of this in vitro study was to explore an appropriate method to develop a PRF-human dental pulp cell (hDPC) complex to maintain PRF structure integrity and to find out the most efficient part of PRF. The PRF-hDPC complex was developed at 3 different time points during PRF preparation, together with a control: (1) the before centrifugation (BC) group, in which the hDPC suspension was added to the venous blood before blood centrifugation; (2) the immediately after centrifugation (IAC) group, in which the hDPC suspension was added immediately after blood centrifugation; (3) the after centrifugation (AC) group, in which the hDPC suspension was added 10 minutes after blood centrifugation; and (4) the control group, PRF without hDPC suspension. The prepared PRF-hDPC complexes were cultured for 7 days. The samples were fixed for histologic, immunohistochemistry, and scanning electron microscopic evaluation. Real-time polymerase chain reaction was performed to evaluate messenger RNA expression of alkaline phosphatase and dentin sialophosphoprotein. Enzyme-linked immunosorbent assay quantification for growth factors was performed within the different parts of the PRF. Histologic, immunohistochemistry, and scanning electron microscopic results revealed that hDPCs were only found in the BC group and exhibited favorable proliferation. Real-time polymerase chain reaction revealed that alkaline phosphatase and dentin sialophosphoprotein expression increased in the cultured PRF-hDPC complex. The lower part of the PRF released the maximum quantity of growth factors. Our new method to develop a PRF-hDPC complex maintained PRF structure integrity. The hDPCs were distributed in the buffy coat, which might be the most efficient part of PRF. Copyright © 2016 American

  12. Synchronization in node of complex networks consist of complex chaotic system

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qiang, E-mail: qiangweibeihua@163.com [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024 (China); Xie, Cheng-jun [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Liu, Hong-jun [School of Information Engineering, Weifang Vocational College, Weifang, 261041 (China); Li, Yan-hui [The Library, Weifang Vocational College, Weifang, 261041 (China)

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When complex networks realize synchronization, different components of the complex state variable synchronize up to different complex scaling functions under a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.
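The synchronization-up-to-a-complex-scaling idea can be sketched with a single drive-response pair: the controller is chosen so that the error e = z2 - phi*z1 obeys e' = -k*e and decays. The oscillator, gain and scaling factor below are invented, and the full network coupling and delays are omitted.

```python
# Minimal sketch: the response state z2 is driven towards phi * z1,
# where phi is a complex scaling factor (a constant stands in for the
# paper's scaling function).
phi = 0.5 + 0.5j                      # complex scaling factor (invented)
kgain, dt, nsteps = 5.0, 1e-3, 20000  # controller gain and Euler stepping

z1, z2 = 1.0 + 0.0j, -2.0 + 1.0j      # drive and response initial states
for _ in range(nsteps):
    dz1 = 1j * z1                     # drive node: simple complex oscillator
    e = z2 - phi * z1                 # synchronization error
    u = phi * dz1 - 1j * z2 - kgain * e   # feedback controller gives e' = -k*e
    z1 += dt * dz1                    # forward-Euler update of the drive
    z2 += dt * (1j * z2 + u)          # response node under control

err = abs(z2 - phi * z1)              # should have decayed essentially to zero
```

With a chaotic drive node the controller has the same structure; only dz1 changes, which is why the scaling-function viewpoint carries over to whole networks.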

  13. Combining biophysical methods for the analysis of protein complex stoichiometry and affinity in SEDPHAT

    International Nuclear Information System (INIS)

    Zhao, Huaying; Schuck, Peter

    2015-01-01

    Global multi-method analysis for protein interactions (GMMA) can increase the precision and complexity of binding studies for the determination of the stoichiometry, affinity and cooperativity of multi-site interactions. The principles and recent developments of biophysical solution methods implemented for GMMA in the software SEDPHAT are reviewed, their complementarity in GMMA is described and a new GMMA simulation tool set in SEDPHAT is presented. Reversible macromolecular interactions are ubiquitous in signal transduction pathways, often forming dynamic multi-protein complexes with three or more components. Multivalent binding and cooperativity in these complexes are often key motifs of their biological mechanisms. Traditional solution biophysical techniques for characterizing the binding and cooperativity are very limited in the number of states that can be resolved. A global multi-method analysis (GMMA) approach has recently been introduced that can leverage the strengths and the different observables of different techniques to improve the accuracy of the resulting binding parameters and to facilitate the study of multi-component systems and multi-site interactions. Here, GMMA is described in the software SEDPHAT for the analysis of data from isothermal titration calorimetry, surface plasmon resonance or other biosensing, analytical ultracentrifugation, fluorescence anisotropy and various other spectroscopic and thermodynamic techniques. The basic principles of these techniques are reviewed and recent advances in view of their particular strengths in the context of GMMA are described. Furthermore, a new feature in SEDPHAT is introduced for the simulation of multi-method data. In combination with specific statistical tools for GMMA in SEDPHAT, simulations can be a valuable step in the experimental design.

  14. Cork-resin ablative insulation for complex surfaces and method for applying the same

    Science.gov (United States)

    Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)

    1980-01-01

    A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.

  15. Optimization of a method for preparing solid complexes of essential clove oil with β-cyclodextrins.

    Science.gov (United States)

    Hernández-Sánchez, Pilar; López-Miranda, Santiago; Guardiola, Lucía; Serrano-Martínez, Ana; Gabaldón, José Antonio; Nuñez-Delicado, Estrella

    2017-01-01

    Clove oil (CO) is an aromatic oily liquid used in the food, cosmetics and pharmaceutical industries for its functional properties. However, its disadvantages of pungent taste, volatility, light sensitivity and poor water solubility can be solved by applying microencapsulation or complexation techniques. Essential CO was successfully solubilized in aqueous solution by forming inclusion complexes with β-cyclodextrins (β-CDs). Moreover, phase solubility studies demonstrated that essential CO also forms insoluble complexes with β-CDs. Based on these results, essential CO-β-CD solid complexes were prepared by the novel approach of microwave irradiation (MWI), followed by three different drying methods: vacuum oven drying (VO), freeze-drying (FD) or spray-drying (SD). FD was the best option for drying the CO-β-CD solid complexes, followed by VO and SD. MWI can be used efficiently to prepare essential CO-β-CD complexes with good yield on an industrial scale. © 2016 Society of Chemical Industry.

  16. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that is, determining that the model accurately captures the customer's high-level requirements, has received little attention and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software

  17. Large branched self-assembled DNA complexes

    International Nuclear Information System (INIS)

    Tosch, Paul; Waelti, Christoph; Middelberg, Anton P J; Davies, A Giles

    2007-01-01

    Many biological molecules have been demonstrated to self-assemble into complex structures and networks by using their very efficient and selective molecular recognition processes. The use of biological molecules as scaffolds for the construction of functional devices by self-assembling nanoscale complexes onto the scaffolds has recently attracted significant attention, and many different applications in this field have emerged. In particular DNA, owing to its inherent sophisticated self-organization and molecular recognition properties, has served widely as a scaffold for various nanotechnological self-assembly applications, with metallic and semiconducting nanoparticles, proteins, macromolecular complexes, inter alia, being assembled onto designed DNA scaffolds. Such scaffolds may typically contain multiple branch-points and comprise a number of DNA molecules self-assembled into the desired configuration. Previously, several studies have used synthetic methods to produce the constituent DNA of the scaffolds, but this typically constrains the size of the complexes. For applications that require larger self-assembling DNA complexes, several tens of nanometers or more, other techniques need to be employed. In this article, we discuss a generic technique to generate large branched DNA macromolecular complexes.

  18. Methods for ensuring compliance with regulatory requirements: regulators and operators

    International Nuclear Information System (INIS)

    Fleischmann, A.W.

    1989-01-01

    Some of the methods of ensuring compliance with regulatory requirements contained in various radiation protection documents such as Regulations, ICRP Recommendations etc. are considered. These include radiation safety officers and radiation safety committees, personnel monitoring services, dissemination of information, inspection services and legislative power of enforcement. Difficulties in ensuring compliance include outmoded legislation, financial and personnel constraints

  19. Rapid Quadrupole-Time-of-Flight Mass Spectrometry Method Quantifies Oxygen-Rich Lignin Compound in Complex Mixtures

    Science.gov (United States)

    Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.

    2018-03-01

    Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
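The internal-standard calibration underlying the reported linearity and accuracy figures can be sketched as follows. The peak areas below are invented; only the concentration range matches the abstract.

```python
import numpy as np

# Internal-standard calibration: the analyte response is ratioed to the
# internal-standard response before the calibration line is fitted.
conc = np.array([300., 600., 1200., 1800., 2500.])      # ng/mL standards
analyte = np.array([1.45, 2.95, 6.1, 8.9, 12.4])        # invented peak areas
istd = np.full_like(analyte, 5.0)                       # internal-standard areas

ratio = analyte / istd                                  # response ratio
slope, intercept = np.polyfit(conc, ratio, 1)           # calibration line
pred = (ratio - intercept) / slope                      # back-calculated conc

ss_res = np.sum((ratio - (slope * conc + intercept)) ** 2)
ss_tot = np.sum((ratio - ratio.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                                # linearity
pct_error = 100 * (pred - conc) / conc                  # accuracy per level
```

Ratioing to the internal standard is what compensates for matrix effects and ionization variability in a dopant-assisted method like the one described.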

  20. First derivative emission spectrofluorimetric method for the determination of LCZ696, a newly approved FDA supramolecular complex of valsartan and sacubitril in tablets.

    Science.gov (United States)

    Ragab, Marwa A A; Galal, Shereen M; Korany, Mohamed A; Ahmed, Aya R

    2017-12-01

    LCZ696 (sacubitril/valsartan, Entresto™) is a therapy recently approved by the United States Food and Drug Administration (US FDA) for the treatment of heart failure. It is claimed to decrease the mortality rate and hospitalization for patients with chronic heart failure. This study is the first report to investigate the fluorimetric behavior of sacubitril and to examine the various conditions that may affect its fluorescence, such as the effects of organized media, solvents, and pH. For the simultaneous determination of the newly approved supramolecular complex of valsartan (VAL) and sacubitril (SAC) in their tablets, a sensitive and simple first derivative spectrofluorimetric method was developed. The method involved the measurement of native fluorescence at 416 nm and 314 nm (λex 249 nm) for VAL and SAC, respectively. The first (D1) derivative technique was applied to the emission data to resolve a partial overlap in their emission spectra. The proposed method was successfully applied for the assay of the two drugs in their supramolecular complex LCZ696 with no interference from common pharmaceutical additives. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines were followed in order to validate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
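The zero-crossing idea behind first-derivative (D1) resolution of overlapping emission bands can be sketched with synthetic Gaussians. At the wavelength where one component's derivative crosses zero (its own peak), the mixture's D1 amplitude depends only on the other component. Peak positions follow the abstract (314 and 416 nm); widths and amplitudes are made up for illustration:

```python
import numpy as np

# Synthetic emission bands: SAC-like at 314 nm, VAL-like at 416 nm.
wl = np.linspace(250, 500, 2501)  # 0.1 nm grid

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def d1(spectrum):
    return np.gradient(spectrum, wl)  # first derivative dI/d(lambda)

sac, val = band(314, 30), band(416, 30)

# At 416 nm the VAL band peaks, so its derivative crosses zero there;
# the mixture's D1 amplitude at that wavelength tracks SAC alone.
i416 = np.argmin(np.abs(wl - 416))

mix_a = 1.0 * val + 0.5 * sac
mix_b = 3.0 * val + 0.5 * sac   # triple the VAL content, same SAC

da, db = d1(mix_a)[i416], d1(mix_b)[i416]
```

Because `da` and `db` coincide despite very different VAL levels, reading the derivative at the rival component's zero-crossing gives an interference-free calibration signal.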

  1. Stop: a fast procedure for the exact computation of the performance of complex probabilistic systems

    International Nuclear Information System (INIS)

    Corynen, G.C.

    1982-01-01

    A new set-theoretic method for the exact and efficient computation of the probabilistic performance of complex systems has been developed. The core of the method is a fast algorithm for disjointing a collection of product sets, intended for systems with more than 1000 components and 100,000 cut sets. The method is based on a divide-and-conquer approach, in which a multidimensional problem is progressively decomposed into lower-dimensional subproblems along its dimensions. The method also uses a particular pointer system that eliminates the need to store the subproblems, requiring only the storage of pointers to those problems. Examples of the algorithm and the divide-and-conquer strategy are provided, and comparisons with other significant methods are made. Statistical complexity studies show that the expected time and space complexity of other methods is O(m·e^n), whereas this method is O(n·m^3·log(m)). Problems which would require days of Cray-1 computer time with present methods can now be solved in seconds. Large-scale systems that can only be approximated with other techniques can now also be evaluated exactly.
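What "exact computation from cut sets" means can be seen in a brute-force baseline. This is not the paper's disjointing algorithm; it is the O(2^n) state enumeration that such methods are designed to avoid, workable only for tiny systems:

```python
from itertools import product

# Exact system failure probability from minimal cut sets by enumerating
# all component states (1 = failed). Components fail independently.
def exact_failure_probability(n_components, cut_sets, p_fail):
    total = 0.0
    for state in product([0, 1], repeat=n_components):
        prob = 1.0
        for i, s in enumerate(state):
            prob *= p_fail[i] if s else (1 - p_fail[i])
        # the system fails if every component of some cut set has failed
        if any(all(state[i] for i in cs) for cs in cut_sets):
            total += prob
    return total

# two overlapping cut sets over three components, each failing with p = 0.1
p = exact_failure_probability(3, [{0, 1}, {1, 2}], [0.1, 0.1, 0.1])
# inclusion-exclusion check: 0.01 + 0.01 - 0.001 = 0.019
```

Disjointing the cut-set events, as the paper's algorithm does, turns the union probability into a plain sum and avoids both this enumeration and the exponential blow-up of inclusion-exclusion.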

  2. 42 CFR 84.146 - Method of measuring the power and torque required to operate blowers.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Method of measuring the power and torque required... RESPIRATORY PROTECTIVE DEVICES Supplied-Air Respirators § 84.146 Method of measuring the power and torque.... These are used to facilitate timing. To determine the torque or horsepower required to operate the...

  3. Microscopic methods for the interactions between complex nuclei

    International Nuclear Information System (INIS)

    Ikeda, Kiyomi; Tamagaki, Ryozo; Saito, Sakae; Horiuchi, Hisashi; Tohsaki-Suzuki, Akihiro.

    1978-01-01

    Microscopic study on composite-particle interaction performed in Japan is described in this paper. In chapter 1, a brief historical description of the study is presented. In chapter 2, the theory of the resonating group method (RGM) for describing microscopically the interaction between nuclei (clusters) is reviewed, and a formulation of the description is presented. It is shown that the generator coordinate method (GCM) is useful for the description of the interaction between shell model clusters, and that the kernels in the RGM are easily obtained from those of the GCM. The inter-cluster interaction can be well described by the orthogonality condition model (OCM). In chapter 3, the calculational procedures for the kernels of the GCM, RGM and OCM and some properties related to their calculation are discussed. The GCM kernels for various types of systems are treated. The RGM kernels are evaluated by the integral transformation of GCM kernels. The problems related to the RGM norm kernel (RGM-NK) are discussed. The projection operator onto the Pauli-allowed state in the OCM is obtained directly from the solution of the eigenvalue problem of RGM-NK. In chapter 4, the exchange kernels due to antisymmetrization are derived analytically, with the symbolic use of computer memory, taking the α + ¹⁶O system as a typical example. New algorithms for deriving the generator coordinate kernel (GCM kernel) analytically are presented. In chapter 5, a precise generalization of the Kohn-Hulthen-Kato variational method for the scattering matrix is made for the purpose of microscopic study of reactions between complex nuclei with many coupled channels. (Kato, T.)

  4. A path method for finding energy barriers and minimum energy paths in complex micromagnetic systems

    International Nuclear Information System (INIS)

    Dittrich, R.; Schrefl, T.; Suess, D.; Scholz, W.; Forster, H.; Fidler, J.

    2002-01-01

    Minimum energy paths and energy barriers are calculated for complex micromagnetic systems. The method is based on the nudged elastic band method and uses finite-element techniques to represent granular structures. The method was found to be robust and fast both for simple test problems and for large systems such as patterned granular media. It is used to estimate the energy barriers in CoCr-based perpendicular recording media
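The nudged elastic band (NEB) idea can be sketched on a two-dimensional toy energy surface rather than a micromagnetic model: a chain of images between two minima is relaxed under the perpendicular component of the true force plus a spring force along the path tangent, so the chain settles onto the minimum energy path and its highest image approximates the saddle. The surface below has minima at (±1, 0), a saddle at (0, 0.5), and a barrier of exactly 1; all parameters are illustrative:

```python
import numpy as np

# Toy energy surface: two minima at (-1, 0) and (1, 0), saddle at (0, 0.5).
def V(p):
    x, y = p
    return (x**2 - 1)**2 + 5 * (y - 0.5 * (1 - x**2))**2

def gradV(p):
    x, y = p
    u = y - 0.5 * (1 - x**2)
    return np.array([4 * x * (x**2 - 1) + 10 * u * x, 10 * u])

def neb(a, b, n_images=11, k=1.0, step=0.01, iters=3000):
    path = np.linspace(a, b, n_images)   # straight-line initial guess
    for _ in range(iters):
        new = path.copy()
        for i in range(1, n_images - 1):
            t = path[i + 1] - path[i - 1]          # tangent estimate
            t /= np.linalg.norm(t)
            g = gradV(path[i])
            f_perp = -(g - np.dot(g, t) * t)       # true force, perpendicular part
            # spring force along the tangent keeps the images evenly spread
            f_spring = k * (np.linalg.norm(path[i + 1] - path[i])
                            - np.linalg.norm(path[i] - path[i - 1])) * t
            new[i] = path[i] + step * (f_perp + f_spring)
        path = new
    return path

path = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
barrier = max(V(p) for p in path) - V(path[0])
```

The straight initial path has a midpoint energy of 2.25; relaxation bends it through the saddle, so the recovered barrier approaches the true value of 1.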

  5. Managing bioengineering complexity with AI techniques.

    Science.gov (United States)

    Beal, Jacob; Adler, Aaron; Yaman, Fusun

    2016-10-01

    Our capabilities for systematic design and engineering of biological systems are rapidly increasing. Effectively engineering such systems, however, requires the synthesis of a rapidly expanding and changing complex body of knowledge, protocols, and methodologies. Many of the problems in managing this complexity, however, appear susceptible to being addressed by artificial intelligence (AI) techniques, i.e., methods enabling computers to represent, acquire, and employ knowledge. Such methods can be employed to automate physical and informational "routine" work and thus better allow humans to focus their attention on the deeper scientific and engineering issues. This paper examines the potential impact of AI on the engineering of biological organisms through the lens of a typical organism engineering workflow. We identify a number of key opportunities for significant impact, as well as challenges that must be overcome. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Investigation into complexing of pentavalent actinide forms with some anions of organic acids by the coprecipitation method

    International Nuclear Information System (INIS)

    Moskvin, A.I.; Poznyakov, A.N. (AN SSSR, Moscow. Inst. Geokhimii i Analiticheskoj Khimii)

    1979-01-01

    Complexing of pentavalent forms of the actinides Np, Pu, and Am with anions of acetic acid, oxalic acid, and EDTA is studied using the method of coprecipitation with iron hydroxide. The composition and stability constants of the actinide complexes formed are determined. In order of decreasing complexing tendency, the acid anions are: EDTA anion > C₂O₄²⁻ > CH₃COO⁻

  7. Comprehension of complex biological processes by analytical methods: how far can we go using mass spectrometry?

    International Nuclear Information System (INIS)

    Gerner, C.

    2013-01-01

    Comprehensive understanding of complex biological processes is the basis for many biomedical issues of great relevance for modern society, including risk assessment, drug development, quality control of industrial products, and many more. Screening methods provide means for investigating biological samples without a research hypothesis. However, the first boom of analytical screening efforts has passed, and we again need to ask whether and how to apply screening methods. Mass spectrometry is a modern tool with unrivalled analytical capacities. This applies to all relevant characteristics of analytical methods, such as specificity, sensitivity, accuracy, multiplicity and diversity of applications. Indeed, mass spectrometry is well suited to dealing with complexity. Chronic inflammation is a common feature of almost all relevant diseases challenging our modern society; these diseases are apparently highly diverse and include arteriosclerosis, cancer, back pain, neurodegenerative diseases, depression, and others. The complexity of the mechanisms regulating chronic inflammation is the reason it is a practical challenge to deal with. The presentation gives an overview of the capabilities and limitations of applying this analytical tool to solve critical questions of great relevance for our society. (author)

  8. Development of indirect spectrophotometric method for quantification of cephalexin in pure form and commercial formulation using complexation reaction

    International Nuclear Information System (INIS)

    Khan, M.N.; Hussain, R.; Kalsoom, S.; Saadiq, M.

    2016-01-01

    A simple, accurate and indirect spectrophotometric method was developed for the quantification of cephalexin in pure form and pharmaceutical products using a complexation reaction. The developed method is based on the oxidation of cephalexin with Fe³⁺ in acidic medium; 1,10-phenanthroline then reacts with the resulting Fe²⁺ to form a red-colored complex. The absorbance of the complex was measured at 510 nm by spectrophotometer. Different experimental parameters affecting the complexation reaction were studied and optimized. Beer's law was obeyed in the concentration range 0.4-10 µg mL⁻¹ with a good correlation of 0.992. The limit of detection and limit of quantification were found to be 0.065 µg mL⁻¹ and 0.218 µg mL⁻¹, respectively. The method has good reproducibility, with a relative standard deviation of 6.26 percent (n = 6). The method was successfully applied for the determination of cephalexin in bulk powder and commercial formulations. Percent recoveries were found to range from 95.47 to 103.87 percent for the pure form and 98.62 to 103.35 percent for commercial formulations. (author)
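The reported LOD and LOQ figures are typically derived from the calibration line itself: LOD = 3.3·σ/slope and LOQ = 10·σ/slope, where σ is the residual standard deviation of the regression. A sketch with hypothetical absorbance values in the method's 0.4-10 µg/mL working range (not the paper's data):

```python
# Calibration-based LOD/LOQ sketch (illustrative absorbances only).

def calibration_stats(conc, absorbance):
    n = len(conc)
    mx = sum(conc) / n
    my = sum(absorbance) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, absorbance)) / sxx
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(conc, absorbance)]
    # residual standard deviation of the regression (n - 2 degrees of freedom)
    sigma = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
    return slope, intercept, sigma

conc = [0.4, 1.0, 2.5, 5.0, 7.5, 10.0]        # ug/mL standards
absorb = [0.052, 0.118, 0.282, 0.561, 0.838, 1.122]

slope, intercept, sigma = calibration_stats(conc, absorb)
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
```

By construction LOQ/LOD = 10/3.3, which is why reported LOQ values are roughly three times the LOD, as in this abstract (0.218 vs. 0.065 µg mL⁻¹).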

  9. Fast methods for long-range interactions in complex systems. Lecture notes

    Energy Technology Data Exchange (ETDEWEB)

    Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas (eds.)

    2011-10-13

    Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. in astrophysics, statistical physics, plasma physics, material sciences, physical chemistry, and biophysics. The present summer school, funded by the German Heraeus-Foundation, took place at the Juelich Supercomputing Centre from 6 - 10 September 2010. The focus was on providing an introduction to and an overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The Lecture Notes contain an introduction to particle simulation, as well as five different fast methods, i.e. the Fast Multipole Method, the Barnes-Hut Tree Method, Multigrid, FFT-based methods, and Fast Summation using the non-equidistant FFT. In addition to introducing the methods, efficient parallelization of the methods is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC), which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)

  10. Fast methods for long-range interactions in complex systems. Lecture notes

    International Nuclear Information System (INIS)

    Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas

    2011-01-01

    Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. in astrophysics, statistical physics, plasma physics, material sciences, physical chemistry, and biophysics. The present summer school, funded by the German Heraeus-Foundation, took place at the Juelich Supercomputing Centre from 6 - 10 September 2010. The focus was on providing an introduction to and an overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The Lecture Notes contain an introduction to particle simulation, as well as five different fast methods, i.e. the Fast Multipole Method, the Barnes-Hut Tree Method, Multigrid, FFT-based methods, and Fast Summation using the non-equidistant FFT. In addition to introducing the methods, efficient parallelization of the methods is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC), which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)

  11. An Improved Conceptually-Based Method for Analysis of Communication Network Structure of Large Complex Organizations.

    Science.gov (United States)

    Richards, William D., Jr.

    Previous methods for determining the communication structure of organizations work well for small or simple organizations, but are either inadequate or unwieldy for use with large complex organizations. An improved method uses a number of different measures and a series of successive approximations to order the communication matrix such that…

  12. Trajectory Optimization of Spray Painting Robot for Complex Curved Surface Based on Exponential Mean Bézier Method

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2017-01-01

    Full Text Available Automated tool trajectory planning for spray painting robots is still a challenging problem, especially for large complex curved surfaces. This paper presents a new method of trajectory optimization for spray painting robots based on the exponential mean Bézier method. The definition and three theorems of exponential mean Bézier curves are discussed. Then a spatial painting path generation method based on exponential mean Bézier curves is developed, and a new simple algorithm for trajectory optimization on complex curved surfaces is introduced. A golden section method is adopted to calculate the optimal values. The experimental results illustrate that the exponential mean Bézier curves enhanced the flexibility of path planning, and the trajectory optimization algorithm achieved satisfactory performance. This method can also be extended to other applications.
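The golden section method the paper adopts is a standard derivative-free search that narrows a bracketing interval by the inverse golden ratio at each step, assuming the objective is unimodal on the interval. A minimal sketch on a stand-in objective (not the spray-painting cost function):

```python
import math

# Golden-section search: minimize a unimodal f on [a, b] without derivatives.
def golden_section(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            # minimum lies in [a, d]; reuse c as the new upper probe
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # minimum lies in [c, b]; reuse d as the new lower probe
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

x_min = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

Each iteration shrinks the interval by a constant factor of about 0.618 while reusing one of the two interior evaluations, which is why the method needs only one new function evaluation per step.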

  13. Complexity methods applied to turbulence in plasma astrophysics

    Science.gov (United States)

    Vlahos, L.; Isliker, H.

    2016-09-01

    In this review many of the well-known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well documented observations are not easy to interpret with the use of Magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organized characteristics of the explosive magnetic energy release and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We review briefly the work published over the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Nonlinear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the redistribution of the magnetic field locally, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. 
Using a Cellular Automaton and following the simple rules of Self Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the

  14. Crystallization of protein–ligand complexes

    International Nuclear Information System (INIS)

    Hassell, Anne M.; An, Gang; Bledsoe, Randy K.; Bynum, Jane M.; Carter, H. Luke III; Deng, Su-Jun J.; Gampe, Robert T.; Grisard, Tamara E.; Madauss, Kevin P.; Nolte, Robert T.; Rocque, Warren J.; Wang, Liping; Weaver, Kurt L.; Williams, Shawn P.; Wisely, G. Bruce; Xu, Robert; Shewchuk, Lisa M.

    2007-01-01

    Methods presented for growing protein–ligand complexes fall into the categories of co-expression of the protein with the ligands of interest, use of the ligands during protein purification, cocrystallization and soaking the ligands into existing crystals. Obtaining diffraction-quality crystals has long been a bottleneck in solving the three-dimensional structures of proteins. Often proteins may be stabilized when they are complexed with a substrate, nucleic acid, cofactor or small molecule. These ligands, on the other hand, have the potential to induce significant conformational changes to the protein and ab initio screening may be required to find a new crystal form. This paper presents an overview of strategies in the following areas for obtaining crystals of protein–ligand complexes: (i) co-expression of the protein with the ligands of interest, (ii) use of the ligands during protein purification, (iii) cocrystallization and (iv) soaks

  15. An Evaluation of Fractal Surface Measurement Methods for Characterizing Landscape Complexity from Remote-Sensing Imagery

    Science.gov (United States)

    Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The rapid increase in digital data volumes from new and existing sensors necessitates efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates the three fractal dimension measurement methods: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, that have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful to measure complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
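Moran's I, one of the spatial autocorrelation measures the paper evaluates, can be sketched on a small raster with rook (4-neighbour) adjacency. This is a generic textbook formulation, not the ICAMS implementation; a smooth gradient surface yields strongly positive I, while a checkerboard yields I = -1:

```python
# Moran's I on a 2-D grid with rook adjacency and binary weights.
def morans_i(grid):
    n_rows, n_cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    mean = sum(vals) / n
    num, w_sum = 0.0, 0
    for r in range(n_rows):
        for c in range(n_cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n_rows and 0 <= cc < n_cols:
                    num += (grid[r][c] - mean) * (grid[rr][cc] - mean)
                    w_sum += 1                     # directed neighbour pair
    denom = sum((v - mean) ** 2 for v in vals)
    return (n / w_sum) * (num / denom)

gradient = [[r + c for c in range(6)] for r in range(6)]   # smooth surface
checker = [[(r + c) % 2 for c in range(6)] for r in range(6)]  # rough surface

i_smooth, i_rough = morans_i(gradient), morans_i(checker)
```

The contrast between the two surfaces is the point the paper makes numerically: autocorrelation-style indices separate spatially complex imagery from spatially smooth imagery.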

  16. A complex method of equipment replacement planning. An advanced plan for the replacement of medical equipment.

    Science.gov (United States)

    Dondelinger, Robert M

    2004-01-01

    This complex method of equipment replacement planning is a methodology; it is a means to an end, a process that focuses on equipment most in need of replacement, rather than the end itself. It uses data available from the maintenance management database, and attempts to quantify those subjective items important in making equipment replacement decisions. Like the simple method of the last issue, it is a starting point--albeit an advanced starting point--which the user can modify to fit their particular organization, but the complex method leaves room for expansion. It is based on sound logic and documented facts, is fully defensible during the decision-making process, and will serve your organization well by providing a structure for your equipment replacement planning decisions.

  17. Decision paths in complex tasks

    Science.gov (United States)

    Galanter, Eugene

    1991-01-01

    Complex real-world action and its prediction and control have escaped analysis by the classical methods of psychological research. The reason is that psychologists have no procedures to parse complex tasks into their constituents. Where such a division can be made, based say on expert judgment, there is no natural scale to measure the positive or negative values of the components. Even if we could assign numbers to task parts, we lack rules, i.e., a theory, to combine them into a total task representation. We compare here two plausible theories for the amalgamation of the value of task components. Both of these theories require a numerical representation of motivation, for motivation is the primary variable that guides choice and action in well-learned tasks. We address this problem of motivational quantification and performance prediction by developing psychophysical scales of the desirability or aversiveness of task components based on utility scaling methods (Galanter 1990). We modify methods used originally to scale sensory magnitudes (Stevens and Galanter 1957), and that have been applied recently to the measure of task 'workload' by Gopher and Braune (1984). Our modification uses utility comparison scaling techniques which avoid the unnecessary assumptions made by Gopher and Braune. Formulas for the utility of complex tasks based on the theoretical models are used to predict decision and choice of alternate paths to the same goal.

  18. Uranium complex recycling method of purifying uranium liquors

    International Nuclear Information System (INIS)

    Elikan, L.; Lyon, W.L.; Sundar, P.S.

    1976-01-01

    Uranium is separated from contaminating cations in an aqueous liquor containing uranyl ions. The liquor is mixed with sufficient recycled uranium complex to raise the weight ratio of uranium to said cations, preferably to at least about three. The liquor is then extracted with at least enough non-interfering, water-immiscible, organic solvent to theoretically extract about all of the uranium in the liquor. The organic solvent contains a reagent which reacts with the uranyl ions to form a complex soluble in the solvent. If the aqueous liquor is acidic, the organic solvent is then scrubbed with water. The organic solvent is stripped with a solution containing at least enough ammonium carbonate to precipitate the uranium complex. A portion of the uranium complex is recycled and the remainder can be collected and calcined to produce U₃O₈ or UO₂.

  19. Cognitive Task Complexity Effects on L2 Writing Performance: An Application of Mixed-Methods Approaches

    Science.gov (United States)

    Abdi Tabari, Mahmoud; Ivey, Toni A.

    2015-01-01

    This paper provides a methodological review of previous research on cognitive task complexity since the term emerged in 1995 and investigates why much of the research has been quantitative rather than qualitative. Moreover, it sheds light on the studies that used the mixed-methods approach and determines which version of the mixed-methods designs…

  20. Requirements and testing methods for surfaces of metallic bipolar plates for low-temperature PEM fuel cells

    Science.gov (United States)

    Jendras, P.; Lötsch, K.; von Unwerth, T.

    2017-03-01

    To reduce emissions and replace combustion engines, automotive manufacturers, legislators, and early adopters are pursuing hydrogen fuel cell vehicles. Up to now, the focus of research has been on ensuring the functionality and increasing the durability of fuel cell components, and expensive materials were used for this purpose. Contemporary research and development tries to substitute these substances with more cost-effective material combinations. The bipolar plate is a key component with the greatest influence on the volume and mass of a fuel cell stack, and it has to meet complex requirements. Bipolar plates support the bending-sensitive components of the stack, spread reactants over the active cell area, and form the electrical contact to the next cell. Furthermore, bipolar plates dissipate the heat of reaction and separate one cell gas-tight from the other. Consequently, they need a low interfacial contact resistance (ICR) to the gas diffusion layer, high flexural strength, good thermal conductivity, and high durability. To reduce costs, stainless steel is a favoured material for bipolar plates in automotive applications. Steel is characterized by good electrical and thermal conductivity, but the acidic environment also requires high chemical durability against corrosion. On the one hand, the formation of a passivating oxide layer, which increases the ICR, should be inhibited; on the other hand, pitting corrosion, which leads to an increased permeation rate, may not occur. A suitable substrate-lamination combination is therefore sought. In this study, material testing methods for bipolar plates are considered.

  1. Using Project Complexity Determinations to Establish Required Levels of Project Rigor

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Thomas D.

    2015-10-01

    This presentation discusses the project complexity determination process that was developed by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office for implementation at the Nevada National Security Site (NNSS). The complexity determination process was developed to address the diversity of NNSS project types, sizes, and complexities; to fill the need for a single procedure with provision for tailoring the level of rigor to the project type, size, and complexity; to provide consistent, repeatable, effective application of project management processes across the enterprise; and to achieve higher levels of efficiency in project delivery. These needs are illustrated by the wide diversity of NNSS projects: defense experimentation, global security, weapons tests, military training areas, sensor development and testing, training in realistic environments, intelligence community support, environmental restoration/waste management, and disposal of radioactive waste, among others.

  2. A Mathematical Model of the Hotel Service Oligopoly Market and a Conflict-Optimal Management Method of the Hotel Complex Competitiveness

    Directory of Open Access Journals (Sweden)

    M. Soro

    2015-01-01

    Full Text Available The aim of this work is to enhance the competitiveness of hotel complex management in the hotel service market of the state of Côte d'Ivoire. The objectives of this study are: (1) producing a model of the oligopoly market of hotel services based on marketing research; (2) synthesis of a conflict-optimum management method for a hotel complex using the model of the oligopoly market of hotel services; (3) study of issues on enhancing the competitiveness of a running hotel complex in an oligopoly market under the counter-actions of other hotel complexes. To obtain the result, the paper uses techniques of classical and modern management theory, namely methods of the theory of optimal management of multi-object multicriteria systems, methods of game theory, methods of system analysis, operations research, and decision-making. The paper proposes a model of the oligopoly market for the interaction of hotels in hospitality services in Stackelberg form, in which a hotel complex has the accommodation cost in the selected category and the advertising cost as its "governing" parameters. The monthly income of the hotel complex is proposed as an indicator of efficiency, and success in the market is evaluated with an indicator of competitiveness. To solve the problem of multi-criteria optimization, the paper offers a method of conflict-optimum competitiveness management of hotel complexes and considers a model example of the interaction between two complexes in a duopoly market. It is concluded that in the case of saturated demand for the services of hotel complexes there is a guaranteed Nash solution, which ensures a certain profit regardless of the actions of the other hotel complex. In the case of unsaturated demand (a lack of customers in the market) there is no guaranteed solution, which is a prerequisite for an agreement between the market players. The model obtained can be useful for observing trends in the market of hotel services.
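The "guaranteed Nash solution" in a duopoly can be illustrated with a generic symmetric Cournot game rather than the paper's hotel model: each firm repeatedly plays a best response to the rival's last action, and the iteration converges to the Nash equilibrium. Inverse demand P = a - b(q1 + q2) and unit cost c are hypothetical:

```python
# Best-response dynamics for a symmetric Cournot duopoly (illustrative
# stand-in for the paper's hotel duopoly, not its actual model).
# Best response to rival output q: argmax_q1 (a - b*(q1 + q)) * q1 - c * q1.

def best_response(q_rival, a, b, c):
    return max(0.0, (a - c - b * q_rival) / (2 * b))

def nash_by_iteration(a=100.0, b=1.0, c=10.0, iters=200):
    q1 = q2 = 0.0
    for _ in range(iters):
        # simultaneous update: both respond to the rival's previous output
        q1, q2 = best_response(q2, a, b, c), best_response(q1, a, b, c)
    return q1, q2

q1, q2 = nash_by_iteration()
analytic = (100.0 - 10.0) / 3.0   # closed-form Nash output q* = (a - c)/(3b)
```

At the fixed point neither player can gain by deviating unilaterally, which is exactly the "certain profit regardless of the actions of the other" property the abstract describes for the saturated-demand case.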

  3. Integrating complex functions: coordination of nuclear pore complex assembly and membrane expansion of the nuclear envelope requires a family of integral membrane proteins.

    Science.gov (United States)

    Schneiter, Roger; Cole, Charles N

    2010-01-01

    The nuclear envelope harbors numerous large proteinaceous channels, the nuclear pore complexes (NPCs), through which macromolecular exchange between the cytosol and the nucleoplasm occurs. This double-membrane nuclear envelope is continuous with the endoplasmic reticulum and thus functionally connected to such diverse processes as vesicular transport, protein maturation and lipid synthesis. Recent results obtained from studies in Saccharomyces cerevisiae indicate that assembly of the nuclear pore complex is functionally dependent upon maintenance of lipid homeostasis of the ER membrane. Previous work from one of our laboratories has revealed that an integral membrane protein Apq12 is important for the assembly of functional nuclear pores. Cells lacking APQ12 are viable but cannot grow at low temperatures, have aberrant NPCs and a defect in mRNA export. Remarkably, these defects in NPC assembly can be overcome by supplementing cells with a membrane fluidizing agent, benzyl alcohol, suggesting that Apq12 impacts the flexibility of the nuclear membrane, possibly by adjusting its lipid composition when cells are shifted to a reduced temperature. Our new study now expands these findings and reveals that an essential membrane protein, Brr6, shares at least partially overlapping functions with Apq12 and is also required for assembly of functional NPCs. A third nuclear envelope membrane protein, Brl1, is related to Brr6, and is also required for NPC assembly. Because maintenance of membrane homeostasis is essential for cellular survival, the fact that these three proteins are conserved in fungi that undergo closed mitoses, but are not found in metazoans or plants, may indicate that their functions are performed by proteins unrelated at the primary sequence level to Brr6, Brl1 and Apq12 in cells that disassemble their nuclear envelopes during mitosis.

  4. MODELS AND METHODS OF SAFETY-ORIENTED PROJECT MANAGEMENT OF DEVELOPMENT OF COMPLEX SYSTEMS: METHODOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Олег Богданович ЗАЧКО

    2016-03-01

    Full Text Available Methods and models of safety-oriented project management for the development of complex systems are proposed, resulting from the convergence of existing approaches in project management, in contrast to the mechanism of value-oriented management. A cognitive model of safety-oriented project management of the development of complex systems is developed, which provides a synergistic effect: moving the system from its original (pre-project) state to the state that is optimal from the viewpoint of life safety (the post-project state). An approach to assessing project complexity is proposed, which takes into account the seasonal component of the time characteristic of the life cycles of complex organizational and technical systems with occupancy. This makes it possible to include the seasonal component in simulation models of the operational life cycle of the product in a complex organizational and technical system and to model the critical points of operation of systems with occupancy, forming a new methodology for safety-oriented management of projects, programs and portfolios of projects with formalization of the elements of complexity.

  5. Developments based on stochastic and determinist methods for studying complex nuclear systems; Developpements utilisant des methodes stochastiques et deterministes pour l'analyse de systemes nucleaires complexes

    Energy Technology Data Exchange (ETDEWEB)

    Giffard, F.X

    2000-05-19

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
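
    The combined deterministic/Monte Carlo idea can be illustrated with a toy example. The sketch below is a minimal, hypothetical illustration (not the TRIPOLI-4/ERANOS scheme): it estimates the probability that a particle's free path exceeds a slab depth, once by analog sampling and once with an importance-sampling bias toward deep penetration corrected by the likelihood ratio, which is the principle an importance map drives.

```python
import math
import random

def analog_estimate(sigma, depth, n, rng):
    # Analog Monte Carlo: score 1 whenever the sampled free path
    # (exponential with total cross-section sigma) exceeds the slab depth.
    hits = sum(1 for _ in range(n) if rng.expovariate(sigma) > depth)
    return hits / n

def biased_estimate(sigma, sigma_b, depth, n, rng):
    # Importance-sampled version: draw path lengths from a stretched
    # exponential (sigma_b < sigma), which pushes particles deep into the
    # slab, and correct each score with the likelihood ratio -- the same
    # principle a deterministic importance map is used to drive.
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(sigma_b)
        if x > depth:
            total += (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * x)
    return total / n

rng = random.Random(1)
exact = math.exp(-5.0)                      # P(free path > 5) for sigma = 1
a = analog_estimate(1.0, 5.0, 20000, rng)
b = biased_estimate(1.0, 0.2, 5.0, 20000, rng)
```

    Both estimators are unbiased, but the biased one concentrates samples in the rare deep-penetration region, so its variance per history is far smaller, the effect behind the reported speed-up factors.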

  6. Is there still a role for traditional methods in the management of fractures of the zygomatic complex?

    LENUS (Irish Health Repository)

    O'Sullivan, S T

    2012-02-03

    With the introduction of low-profile mini-plating systems, a trend has developed towards open reduction and rigid internal fixation (ORIF) of fractures of the cranio-facial skeleton. The current policy for management of zygomatic fractures in our unit is to attempt primary reduction by traditional methods, and proceed to ORIF in the event of unsatisfactory fracture stability or alignment. Over a one-year period, 109 patients underwent surgical correction of fractures of the zygomatic complex. Standard Gillies' elevation was performed in 71 cases, percutaneous elevation in three cases, and ORIF in 35 cases. Mean follow-up was 190 days. One case of a persistent infraorbital step and three cases of residual malar flattening were documented in patients who underwent Gillies' or percutaneous elevation. Morbidity associated with ORIF was minimal. We conclude that while ORIF of zygomatic fractures may offer better results than traditional methods in the management of complex fractures, traditional methods still have a role to play in less complex fractures.

  7. Complex Correlation Kohn-T Method of Calculating Total and Elastic Cross Sections. Part 1; Electron-Hydrogen Elastic Scattering

    Science.gov (United States)

    Bhatia, A. K.; Temkin, A.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    We report on the first part of a study of electron-hydrogen scattering, using a method which allows for the ab initio calculation of total and elastic cross sections at higher energies. In its general form the method uses complex 'radial' correlation functions in a (Kohn) T-matrix formalism. The titled method, abbreviated the Complex Correlation Kohn T (CCKT) method, is reviewed in the context of electron-hydrogen scattering, including the derivation of the equation for the (complex) scattering function and the extraction of the scattering information from the latter. The calculation reported here is restricted to S-waves in the elastic region, where the correlation functions can be taken, without loss of generality, to be real. Phase shifts are calculated using Hylleraas-type correlation functions with up to 95 terms. Results are rigorous lower bounds; they are in general agreement with those of Schwartz, but they are more accurate and lie outside his error bounds at a couple of energies.

  8. Complex-valued derivative propagation method with approximate Bohmian trajectories: Application to electronic nonadiabatic dynamics

    Science.gov (United States)

    Wang, Yu; Chou, Chia-Chun

    2018-05-01

    The coupled complex quantum Hamilton-Jacobi equations for electronic nonadiabatic transitions are approximately solved by propagating individual quantum trajectories in real space. Equations of motion are derived through use of the derivative propagation method for the complex actions and their spatial derivatives for wave packets moving on each of the coupled electronic potential surfaces. These equations for two surfaces are converted into the moving frame with the same grid point velocities. Excellent wave functions can be obtained by making use of the superposition principle even when nodes develop in wave packet scattering.

  9. High-pressure synthesis of CuBa2Ca3Cu4O10+δ superconductor from precursors prepared by a polymerized complex method

    International Nuclear Information System (INIS)

    Aoba, Tomoya; Bizen, Takeshi; Suzuki, Tsuneo; Nakayama, Tadachika; Suematsu, Hisayuki; Niihara, Koichi; Katsumata, Tetsuhiro; Inaguma, Yoshiyuki

    2011-01-01

    Samples of a CuBa2Ca3Cu4O10+δ superconductor were synthesized under a high pressure of 5 GPa at 1100-1200 °C for 30 min using precursors produced by solid-state reaction and polymerized complex methods. Compared with the precursors prepared by the solid-state reaction method, the precursors produced by the polymerized complex method have smaller grain sizes. The superconductive transition temperature of the samples prepared using precursors made by the polymerized complex method was found to be 113 K. The volume fractions of the superconducting phase in the samples prepared using precursors made by the solid-state reaction and polymerized complex methods were 49 and 36%, respectively. From these results, precursors made by the polymerized complex method can be used in the high-pressure synthesis of superconductors similarly to those made by the solid-state reaction method. (author)

  10. Justification of computational methods to ensure information management systems

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available Summary. Due to the diversity and complexity of the organizational management tasks of a large enterprise, the construction of an information management system requires the establishment of interconnected complexes of means that implement, in the most efficient way, the collection, transfer, accumulation and processing of the information needed by decision makers of different ranks in the governance process. The main trends in the construction of integrated logistics management information systems can be considered to be: the creation of integrated data processing systems by centralizing the storage and processing of data arrays; the organization of computer systems that realize time-sharing; the aggregate-block principle of integrated logistics; and the use of a wide range of peripheral devices with unification of information and hardware communication. Main attention is paid to the application of systematic research to complex technical support, in particular the definition of quality criteria for the operation of the technical complex, the development of analysis methods for the information base of management information systems and the definition of requirements for technical means, as well as methods for the structural synthesis of the major subsystems of integrated logistics. Thus, the aim is to study, on the basis of a systematic approach, the integrated logistics management information system and to develop a number of methods for the analysis and synthesis of complex logistics that are suitable for use in the practice of engineering systems design. The objective function of complex logistics management information systems is to gather, transmit and process specified amounts of information in the regulated time intervals with the required degree of accuracy, while minimizing the reduced costs for the establishment and operation of the technical complex. Achieving this objective function requires a certain organization of information interaction.

  11. Model-based human reliability analysis: prospects and requirements

    International Nuclear Information System (INIS)

    Mosleh, A.; Chang, Y.H.

    2004-01-01

    Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for the need, and a basis for developing requirements, for the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; due to its complexity and input requirements, it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and the computer implementation.

  12. Reduction in requirements for allogeneic blood products: nonpharmacologic methods.

    Science.gov (United States)

    Hardy, J F; Bélisle, S; Janvier, G; Samama, M

    1996-12-01

    Various strategies have been proposed to decrease bleeding and allogeneic transfusion requirements during and after cardiac operations. This article attempts to document the usefulness, or lack thereof, of the nonpharmacologic methods available in clinical practice. Blood conservation methods were reviewed in chronologic order, as they become available to patients during the perisurgical period. The literature in support of or against each strategy was reexamined critically. Avoidance of preoperative anemia and adherence to published guidelines for the practice of transfusion are of paramount importance. Intraoperatively, tolerance of low hemoglobin concentrations and use of autologous blood (predonated or harvested before bypass) will reduce allogeneic transfusions. The usefulness of plateletpheresis and retransfusion of shed mediastinal fluid remains controversial. Intraoperatively and postoperatively, maintenance of normothermia contributes to improved hemostasis. Several approaches have been shown to be effective. An efficient combination of methods can reduce, and sometimes abolish, the need for allogeneic blood products after cardiac operations, provided that all those involved in the care of cardiac surgical patients adhere thoughtfully to existing transfusion guidelines.

  13. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum.

    Science.gov (United States)

    Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph

    2012-06-22

    Filamentous fungi are versatile cell factories and are widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. Industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, the considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for the characterization of filamentous fungal strains on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase in the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media, based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods made it possible to maximize the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.
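
    For illustration, specific rates of the kind used in such strain characterization can be estimated from sampled concentration profiles. The sketch below is a generic finite-difference estimate of a specific substrate uptake rate, not the redundant mass balancing technique of the study; the data are hypothetical.

```python
def specific_rate(times, substrate, biomass):
    """Specific substrate uptake rate q_S(t) ~ -(1/X) * dS/dt,
    estimated with central finite differences on sampled data."""
    qs = []
    for i in range(1, len(times) - 1):
        dSdt = (substrate[i + 1] - substrate[i - 1]) / (times[i + 1] - times[i - 1])
        qs.append(-dSdt / biomass[i])
    return qs

# Hypothetical batch data: time (h), substrate (g/L), biomass (g/L).
t = [0.0, 1.0, 2.0, 3.0, 4.0]
S = [10.0, 8.0, 6.0, 4.0, 2.0]
X = [1.0, 2.0, 4.0, 8.0, 16.0]
rates = specific_rate(t, S, X)
```

    With constant substrate consumption and growing biomass, the specific rate per gram of cells falls over time, which is the kind of kinetic signature used to compare candidate strains.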

  14. An approach of requirements tracing in formal refinement

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Leuschel, Michael

    2010-01-01

    Formal modeling of computing systems yields models that are intended to be correct with respect to the requirements that have been formalized. The complexity of typical computing systems can be addressed by formal refinement introducing all the necessary details piecemeal. We report on preliminar...... changes, making use of corresponding techniques already built into the Event-B method....

  15. Use of the Delphi method in resolving complex water resources issues

    Science.gov (United States)

    Taylor, J.G.; Ryder, S.D.

    2003-01-01

    The tri-state river basins, shared by Georgia, Alabama, and Florida, are being modeled by the U.S. Fish and Wildlife Service and the U.S. Army Corps of Engineers to help facilitate agreement in an acrimonious water dispute among these different state governments. Modeling of such basin reservoir operations requires parallel understanding of several river system components: hydropower production, flood control, municipal and industrial water use, navigation, and reservoir fisheries requirements. The Delphi method, using repetitive surveying of experts, was applied to determine fisheries' water and lake-level requirements on 25 reservoirs in these interstate basins. The Delphi technique allowed the needs and requirements of fish populations to be brought into the modeling effort on equal footing with other water supply and demand components. When the subject matter is concisely defined and limited, this technique can rapidly assess expert opinion on any natural resource issue, and even move expert opinion toward greater agreement.
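
    As an illustration of the repetitive-survey idea, the following sketch simulates Delphi rounds under the simplifying (hypothetical) assumption that after each round every expert revises their estimate partway toward the reported group median; iteration stops once the spread of opinion falls below a tolerance.

```python
import statistics

def delphi_rounds(estimates, pull=0.5, tol=1.0, max_rounds=10):
    """Simulate anonymous Delphi feedback: each round, every expert moves
    their estimate a fraction `pull` toward the group median, until the
    spread (max - min) falls below `tol` or max_rounds is reached."""
    history = [list(estimates)]
    for _ in range(max_rounds):
        med = statistics.median(history[-1])
        revised = [e + pull * (med - e) for e in history[-1]]
        history.append(revised)
        if max(revised) - min(revised) < tol:
            break
    return history

# Hypothetical expert estimates of a minimum lake level requirement (in cm
# above a datum) for one reservoir fishery.
rounds = delphi_rounds([120.0, 150.0, 90.0, 200.0, 140.0])
```

    Because the pull toward the median is order-preserving, the group consensus here converges on the initial median while the spread halves each round; real Delphi panels, of course, revise judgments rather than follow a fixed rule.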

  16. Evidence that Mediator is essential for Pol II transcription, but is not a required component of the preinitiation complex in vivo.

    Science.gov (United States)

    Petrenko, Natalia; Jin, Yi; Wong, Koon Ho; Struhl, Kevin

    2017-07-12

    The Mediator complex has been described as a general transcription factor, but it is unclear if it is essential for Pol II transcription and/or is a required component of the preinitiation complex (PIC) in vivo. Here, we show that depletion of individual subunits, even those essential for cell growth, causes a general but only modest decrease in transcription. In contrast, simultaneous depletion of all Mediator modules causes a drastic decrease in transcription. Depletion of head or middle subunits, but not tail subunits, causes a downstream shift in the Pol II occupancy profile, suggesting that Mediator at the core promoter inhibits promoter escape. Interestingly, a functional PIC and Pol II transcription can occur when Mediator is not detected at core promoters. These results provide strong evidence that Mediator is essential for Pol II transcription and stimulates PIC formation, but it is not a required component of the PIC in vivo.

  17. Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity

    Science.gov (United States)

    Zhang, Jihui; Xu, Junqin

    Supply chains are a special kind of complex network. Their complexity and uncertainty make them very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain’s complexity and its efficiency, supply chain complexity management becomes a major challenge of today’s business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a ‘Supply Chain Network Analysis’ (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and successively investigated by network analysis. The results of this study show that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.
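
    A minimal sketch of an entropy-based complexity index for a network of flows is given below. It uses plain Shannon entropy over normalised flow magnitudes (the paper's fuzzy-entropy formulation is more elaborate); the flows and sector names are hypothetical.

```python
import math

def flow_entropy(flows):
    """Shannon entropy (bits) of a set of inter-sector flows, plus a
    normalised 0..1 index (entropy divided by the maximum possible
    entropy for the same number of active flows)."""
    total = sum(flows.values())
    probs = [f / total for f in flows.values() if f > 0]
    h = -sum(p * math.log2(p) for p in probs)
    h_max = math.log2(len(probs)) if len(probs) > 1 else 1.0
    return h, h / h_max

# Hypothetical flows of goods between sectors of a supply network.
flows = {("supplier", "plant"): 40.0, ("plant", "dc"): 40.0,
         ("dc", "retail"): 10.0, ("plant", "retail"): 10.0}
h, idx = flow_entropy(flows)
```

    An index near 1 means flows are spread evenly across many links (high structural complexity); an index near 0 means a few links dominate, which is easier to control and manage.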

  18. Engineering Complex Embedded Systems with State Analysis and the Mission Data System

    Science.gov (United States)

    Ingham, Michel D.; Rasmussen, Robert D.; Bennett, Matthew B.; Moncada, Alex C.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering methodology called State Analysis, which provides a process for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using State Analysis and how these requirements inform the design of the system software, using representative spacecraft examples.

  19. [TVM (transvaginal mesh) surgical method for complex resolution of pelvic floor defects].

    Science.gov (United States)

    Adamík, Z

    2006-01-01

    Assessment of the effects of a new surgical method for complex resolution of pelvic floor defects. Case study. Department of Obstetrics and Gynaecology, Bata Hospital, Zlín. We evaluated the procedures and results of the new TVM (transvaginal mesh) surgical method, which we used in a group of 12 patients. Ten patients had vaginal prolapse following vaginal hysterectomy, and in two cases there was uterine prolapse together with vaginal prolapse. In only one case was there a small protrusion of 0.5 cm, which we resolved by removing the penetrated section. The resulting anatomic effect was very good in all cases.

  20. Estimation of very low concentrations of Ruthenium by spectrophotometric method using barbituric acid as complexing agent

    International Nuclear Information System (INIS)

    Ramakrishna Reddy, S.; Srinivasan, R.; Mallika, C.; Kamachi Mudali, U.; Natarajan, R.

    2012-01-01

    Spectrophotometric methods employing numerous chromogenic reagents such as thiourea, 1,10-phenanthroline, thiocyanate and tropolone are reported in the literature for the estimation of very low concentrations of Ru. In the present work, a sensitive spectrophotometric method has been developed for the determination of ruthenium in the concentration range 1.5 to 6.5 ppm. This method is based on the reaction of ruthenium with barbituric acid to produce ruthenium(II) tris-violurate, (Ru(H2Va)3)⁻¹, a complex which gives a stable deep-red coloured solution. The maximum absorption of the complex is at 491 nm, due to the inverted t2g → Π(L-L ligand) electron-transfer transition. The molar absorptivity of the coloured species is 9,851 dm³ mol⁻¹ cm⁻¹.
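
    Given the reported molar absorptivity, concentrations in the stated working range follow from the Beer-Lambert law. The sketch below is a simple illustration; the absorbance reading and 1 cm path length are hypothetical, and 101.07 g/mol is the standard molar mass of ruthenium.

```python
def concentration_ppm(absorbance, molar_absorptivity, path_cm, molar_mass):
    """Beer-Lambert law: A = eps * c * l  =>  c = A / (eps * l) in mol/dm3,
    then convert to ppm (mg/dm3) via the molar mass."""
    c_mol = absorbance / (molar_absorptivity * path_cm)
    return c_mol * molar_mass * 1000.0   # g/dm3 -> mg/dm3 ~ ppm

# Molar absorptivity from the abstract (9,851 dm3 mol-1 cm-1); a hypothetical
# absorbance of 0.30 in a 1 cm cell.
ppm = concentration_ppm(0.30, 9851.0, 1.0, 101.07)
```

    The resulting concentration (about 3 ppm) falls inside the 1.5-6.5 ppm range for which the method was developed.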

  1. Loss tangent and complex modulus estimated by acoustic radiation force creep and shear wave dispersion.

    Science.gov (United States)

    Amador, Carolina; Urban, Matthew W; Chen, Shigao; Greenleaf, James F

    2012-03-07

    Elasticity imaging methods have been used to study tissue mechanical properties and have demonstrated that tissue elasticity changes with disease state. In current shear wave elasticity imaging methods, typically only the shear wave speed is measured, and rheological models, e.g. Kelvin-Voigt, Maxwell and Standard Linear Solid, are used to solve for tissue mechanical properties such as the shear viscoelastic complex modulus. This paper presents a method to quantify viscoelastic material properties in a model-independent way by estimating the complex shear elastic modulus over a wide frequency range using the time-dependent creep response induced by acoustic radiation force. This radiation force induced creep method uses a conversion formula that is the analytic solution of a constitutive equation. The proposed method, in combination with shear wave dispersion ultrasound vibrometry, is used to measure the complex modulus so that knowledge of the applied radiation force magnitude is not necessary. The conversion formula is shown to be sensitive to the sampling frequency and to the first reliable measurement time, according to numerical simulations using the Kelvin-Voigt model creep strain and compliance. Representative model-free shear complex moduli from homogeneous tissue-mimicking phantoms and one excised swine kidney were obtained. This work proposes a novel model-free ultrasound-based elasticity method that does not require a rheological model with its associated fitting requirements.
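
    For context, the Kelvin-Voigt model referenced above has a closed-form creep compliance and complex modulus. The sketch below evaluates both for assumed (hypothetical) shear elasticity and viscosity values; it illustrates the rheological model that the paper's model-free method avoids having to fit.

```python
import math

def kelvin_voigt_complex_modulus(mu, eta, omega):
    """Kelvin-Voigt complex shear modulus: G*(w) = mu + i*w*eta
    (storage modulus + i * loss modulus)."""
    return complex(mu, omega * eta)

def creep_compliance(mu, eta, t):
    """Kelvin-Voigt creep response to a step stress:
    J(t) = (1/mu) * (1 - exp(-t/tau)), with tau = eta/mu."""
    tau = eta / mu
    return (1.0 - math.exp(-t / tau)) / mu

mu, eta = 3000.0, 2.0          # assumed shear elasticity (Pa), viscosity (Pa*s)
g = kelvin_voigt_complex_modulus(mu, eta, 2 * math.pi * 100.0)
loss_tangent = g.imag / g.real
```

    At long times the creep compliance saturates at 1/mu, and the loss tangent grows linearly with frequency, two features specific to this model that a model-free estimate does not presuppose.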

  2. The complex variable boundary element method: Applications in determining approximative boundaries

    Science.gov (United States)

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
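
    The idea of building harmonic approximators from analytic functions of a complex variable can be illustrated on the unit disk. The sketch below is a spectral toy version, not the CVBEM itself (which works from boundary nodes): it expands boundary data in a complex power series and evaluates the resulting harmonic solution at an interior point.

```python
import cmath
import math

def harmonic_interior(boundary, z, n_terms=8):
    """Approximate the solution of the Laplace equation inside the unit
    disk from boundary samples, as the real part of a complex power series
    (the analytic-function idea underlying the CVBEM). `boundary` maps an
    angle theta to the prescribed boundary value; `z` is an interior point."""
    n = 256
    samples = [boundary(2 * math.pi * k / n) for k in range(n)]
    value = sum(samples) / n                     # constant (a0) term
    for m in range(1, n_terms + 1):
        # Discrete Fourier coefficient of the boundary trace -> z^m term.
        cm = sum(s * cmath.exp(-1j * m * 2 * math.pi * k / n)
                 for k, s in enumerate(samples)) / n
        value += 2 * (cm * z ** m).real
    return value

# Boundary condition u = cos(2*theta) on the unit circle; the exact interior
# solution is Re(z^2), so u(0.3 + 0.4i) should be -0.07.
u = harmonic_interior(lambda th: math.cos(2 * th), 0.3 + 0.4j)
```

    Because the approximator is the real part of an analytic function, it satisfies the Laplace equation exactly; as in the CVBEM, all of the approximation error lives in how well the boundary trace is matched.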

  3. Comparison of different methods to extract the required coefficient of friction for level walking.

    Science.gov (United States)

    Chang, Wen-Ruey; Chang, Chien-Chi; Matz, Simon

    2012-01-01

    The required coefficient of friction (RCOF) is an important predictor for slip incidents. Despite the wide use of the RCOF there is no standardised method for identifying the RCOF from ground reaction forces. This article presents a comparison of the outcomes from seven different methods, derived from those reported in the literature, for identifying the RCOF from the same data. While commonly used methods are based on a normal force threshold, percentage of stance phase or time from heel contact, a newly introduced hybrid method is based on a combination of normal force, time and direction of increase in coefficient of friction. Although no major differences were found with these methods in more than half the strikes, significant differences were found in a significant portion of strikes. Potential problems with some of these methods were identified and discussed and they appear to be overcome by the hybrid method. No standard method exists for determining the required coefficient of friction (RCOF), an important predictor for slipping. In this study, RCOF values from a single data set, using various methods from the literature, differed considerably for a significant portion of strikes. A hybrid method may yield improved results.
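
    Of the methods compared, the normal-force-threshold approach is the simplest to sketch: compute the shear-to-normal force ratio wherever the normal force exceeds a threshold, and take the peak. The code below is a minimal illustration with hypothetical force samples; the article's hybrid method additionally applies time and COF-trend criteria.

```python
import math

def rcof_normal_force_threshold(fx, fy, fz, threshold_n=100.0):
    """Normal-force-threshold extraction of the required coefficient of
    friction: peak ratio of resultant shear force to normal force over the
    samples where the normal ground-reaction force exceeds a threshold."""
    ratios = [math.hypot(x, y) / z
              for x, y, z in zip(fx, fy, fz) if z > threshold_n]
    return max(ratios)

# Hypothetical ground-reaction-force samples (N) during one heel strike.
fx = [5.0, 20.0, 30.0, 10.0]
fy = [0.0, 5.0, 0.0, 0.0]
fz = [50.0, 400.0, 500.0, 600.0]
rcof = rcof_normal_force_threshold(fx, fy, fz)
```

    The threshold discards early-contact samples where the normal force is small and the ratio is numerically unstable, one of the issues that makes the extraction methods disagree on a portion of strikes.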

  4. Efficient nuclear export of p65-IkappaBalpha complexes requires 14-3-3 proteins.

    Science.gov (United States)

    Aguilera, Cristina; Fernández-Majada, Vanessa; Inglés-Esteve, Julia; Rodilla, Verónica; Bigas, Anna; Espinosa, Lluís

    2006-09-01

    IkappaB are responsible for maintaining p65 in the cytoplasm under non-stimulating conditions and promoting the active export of p65 from the nucleus following NFkappaB activation to terminate the signal. We now show that 14-3-3 proteins regulate the NFkappaB signaling pathway by physically interacting with p65 and IkappaBalpha proteins. We identify two functional 14-3-3 binding domains in the p65 protein involving residues 38-44 and 278-283, and map the interaction region of IkappaBalpha in residues 60-65. Mutation of these 14-3-3 binding domains in p65 or IkappaBalpha results in a predominantly nuclear distribution of both proteins. TNFalpha treatment promotes recruitment of 14-3-3 and IkappaBalpha to NFkappaB-dependent promoters and enhances the binding of 14-3-3 to p65. Disrupting 14-3-3 activity by transfection with a dominant-negative 14-3-3 leads to the accumulation of nuclear p65-IkappaBalpha complexes and the constitutive association of p65 with the chromatin. In this situation, NFkappaB-dependent genes become unresponsive to TNFalpha stimulation. Together our results indicate that 14-3-3 proteins facilitate the nuclear export of IkappaBalpha-p65 complexes and are required for the appropriate regulation of NFkappaB signaling.

  5. Application of Semiempirical Methods to Transition Metal Complexes: Fast Results but Hard-to-Predict Accuracy.

    KAUST Repository

    Minenkov, Yury

    2018-05-22

    A series of semiempirical PM6* and PM7 methods has been tested in reproducing the relative conformational energies of 27 realistic-size complexes of 16 different transition metals (TMs). An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between semiempirical and DFT methods, indicating a fundamental difference in the potential energy surfaces (PES). To identify the origin of the deviation, we compared fully optimized PM7 and respective DFT conformers. For many complexes, differences in PM7 and DFT conformational energies were confirmed, often manifesting themselves in false coordination of some atoms (H, O) to TMs and in chemical transformations/distortion of the coordination center geometry in PM7 structures. Although geometry optimization with a fixed coordination center geometry leads to some improvement in conformational energies, the resulting accuracy is still too low to recommend the explored semiempirical methods for out-of-the-box conformational search/sampling: careful testing is always needed.
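
    Benchmarks of this kind typically reduce to relative conformational energies and their deviation statistics. The sketch below computes a mean absolute deviation between two hypothetical sets of conformer energies standing in for DFT references and semiempirical results (illustrative values only):

```python
def relative_energies(energies):
    """Conformer energies relative to the most stable conformer."""
    e0 = min(energies)
    return [e - e0 for e in energies]

def mean_absolute_deviation(ref, test):
    """MAD between two sets of relative conformational energies -- the kind
    of comparison used to judge a semiempirical method against a DFT
    reference."""
    r, t = relative_energies(ref), relative_energies(test)
    return sum(abs(a - b) for a, b in zip(r, t)) / len(r)

dft = [0.0, 1.2, 2.5, 0.8]    # hypothetical DFT conformer energies (kcal/mol)
pm7 = [0.3, 0.1, 3.4, 1.0]    # hypothetical semiempirical energies (kcal/mol)
mad = mean_absolute_deviation(dft, pm7)
```

    Working with relative energies removes the arbitrary absolute-energy offset between methods; note that in this toy example the two methods even disagree on which conformer is most stable, the kind of ranking error the study reports.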

  6. Validation of Experimental whole-body SAR Assessment Method in a Complex Indoor Environment

    DEFF Research Database (Denmark)

    Bamba, Aliou; Joseph, Wout; Vermeeren, Gunter

    2012-01-01

    Assessing the whole-body specific absorption rate (SARwb) experimentally in a complex indoor environment is very challenging. An experimental method based on room electromagnetics theory (accounting for only the Line-Of-Sight component as a specular path) to assess the whole-body SAR is validated by numerical...... of the proposed method is that it avoids the computational burden, because it does not use any discretizations. Results show good agreement between measurement and computation at 2.8 GHz, as long as the plane wave assumption is valid, i.e., for large distances from the transmitter. Relative deviations 0...

  7. An analytical approach to customer requirement information processing

    Science.gov (United States)

    Zhou, Zude; Xiao, Zheng; Liu, Quan; Ai, Qingsong

    2013-11-01

    'Customer requirements' (CRs) management is a key component of customer relationship management (CRM). By processing customer-focused information, CRs management plays an important role in enterprise systems (ESs). Although two main CRs analysis methods, quality function deployment (QFD) and Kano model, have been applied to many fields by many enterprises in the past several decades, the limitations such as complex processes and operations make them unsuitable for online businesses among small- and medium-sized enterprises (SMEs). Currently, most SMEs do not have the resources to implement QFD or Kano model. In this article, we propose a method named customer requirement information (CRI), which provides a simpler and easier way for SMEs to run CRs analysis. The proposed method analyses CRs from the perspective of information and applies mathematical methods to the analysis process. A detailed description of CRI's acquisition, classification and processing is provided.

  8. Antioxidant study of quercetin and their metal complex and determination of stability constant by spectrophotometry method.

    Science.gov (United States)

    Ravichandran, R; Rajendran, M; Devapiriam, D

    2014-03-01

    Quercetin is found to chelate cadmium ions and to scavenge free radicals produced by cadmium. Hence, a new complex of quercetin with cadmium was synthesised, and the structure of the synthesised complex was determined by UV-vis spectrophotometry, infrared spectroscopy, thermogravimetry and differential thermal analysis techniques (UV-vis, IR, TGA and DTA). The equilibrium stability constants of the quercetin-cadmium complex were determined by Job's method. The determined stability constant of the quercetin-cadmium complex is 2.27×10⁶ at pH 4.4 and 7.80×10⁶ at pH 7.4. It was found that quercetin and the cadmium ion form a 1:1 complex at both pH 4.4 and pH 7.4. The structure of the compounds was elucidated on the basis of the obtained results. Furthermore, the antioxidant activities of free quercetin and the quercetin-cadmium complex were determined by DPPH and ABTS assays. Copyright © 2013 Elsevier Ltd. All rights reserved.
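
    Job's method (continuous variation) reads the stoichiometry off the mole fraction at maximum absorbance. The sketch below uses idealised, hypothetical absorbance data for strong 1:1 binding, the case found for the quercetin-cadmium complex:

```python
def jobs_plot_stoichiometry(mole_fractions, absorbances):
    """Job's method of continuous variation: the ligand-to-metal ratio n
    follows from the mole fraction x_max at maximum absorbance via
    n = x_max / (1 - x_max)."""
    x_max = max(zip(absorbances, mole_fractions))[1]
    return x_max / (1.0 - x_max)

# Idealised strong 1:1 binding at constant total concentration: the amount
# of complex formed (and hence absorbance) is proportional to min(x, 1 - x).
xs = [i / 10 for i in range(1, 10)]
a = [min(x, 1 - x) for x in xs]
ratio = jobs_plot_stoichiometry(xs, a)
```

    The maximum falls at a mole fraction of 0.5, giving a 1:1 ratio; a 1:2 complex would instead peak near a ligand mole fraction of 2/3.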

  9. The Reliasep method used for the functional modeling of complex systems

    International Nuclear Information System (INIS)

    Dubiez, P.; Gaufreteau, P.; Pitton, J.P.

    1997-07-01

    The RELIASEP® method and its support tool have been recommended for carrying out the functional analysis of large systems within the framework of the design of new power units. We first recall the principles of the method, based on the breakdown of functions into tree(s); these functions are characterised by their performance and constraints. Then the main modifications made to meet EDF requirements, in particular the 'viewpoints' analyses, are presented. The knowledge gained from the first studies carried out is discussed. (author)

  10. The Reliasep method used for the functional modeling of complex systems

    Energy Technology Data Exchange (ETDEWEB)

    Dubiez, P.; Gaufreteau, P.; Pitton, J.P

    1997-07-01

    The RELIASEP® method and its support tool have been recommended for carrying out the functional analysis of large systems within the framework of the design of new power units. We first recall the principles of the method, based on the breakdown of functions into tree(s); these functions are characterised by their performance and constraints. Then the main modifications made to meet EDF requirements, in particular the 'viewpoints' analyses, are presented. The knowledge gained from the first studies carried out is discussed. (author)

  11. Complex quantum group, dual algebra and bicovariant differential calculus

    International Nuclear Information System (INIS)

    Carow-Watamura, U.; Watamura, Satoshi

    1993-01-01

    The method used to construct the bicovariant bimodule in ref. [CSWW] is applied to examine the structure of the dual algebra and the bicovariant differential calculus of the complex quantum group. The complex quantum group Fun_q(SL(N,C)) is defined by requiring that it contain Fun_q(SU(N)) as a subalgebra, analogously to the quantum Lorentz group. Analyzing the properties of the fundamental bimodule, we show that the dual algebra has the structure of the twisted product Fun_q(SU(N)) ×̃ Fun_q(SU(N))_reg^*. Then the bicovariant differential calculi on the complex quantum group are constructed. (orig.)

  12. Complexity explained

    CERN Document Server

    Erdi, Peter

    2008-01-01

    This book explains why complex systems research is important in understanding the structure, function and dynamics of complex natural and social phenomena. Readers will learn the basic concepts and methods of complex system research.

  13. Physical approach to complex systems

    Science.gov (United States)

    Kwapień, Jarosław; Drożdż, Stanisław

    2012-06-01

    Typically, complex systems are natural or social systems which consist of a large number of nonlinearly interacting elements. These systems are open; they exchange information or mass with the environment and constantly modify their internal structure and patterns of activity in the process of self-organization. As a result, they are flexible and easily adapt to variable external conditions. However, the most striking property of such systems is the existence of emergent phenomena which cannot be simply derived or predicted solely from knowledge of the systems' structure and the interactions among their individual elements. This property points to holistic approaches which require giving parallel descriptions of the same system on different levels of its organization. There is strong evidence, consolidated also in the present review, that different, even apparently disparate complex systems can have astonishingly similar characteristics both in their structure and in their behaviour. One can thus expect the existence of some common, universal laws that govern their properties. Physics methodology proves helpful in addressing many of the related issues. In this review, we advocate some of the computational methods which in our opinion are especially fruitful in extracting information on selected, but at the same time most representative, complex systems like the human brain, financial markets and natural language, from the time series representing the observables associated with these systems. The properties we focus on comprise the collective effects and their coexistence with noise, long-range interactions, the interplay between determinism and flexibility in evolution, scale invariance, criticality, multifractality and hierarchical structure. The methods described either originate from “hard” physics, like random matrix theory, and were then transmitted to other fields of science via the field of complex systems research, or they originated elsewhere but
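One of the "hard physics" tools the review names is random matrix theory. A minimal, self-contained illustration (not taken from the review) uses the fact that the eigenvalue spacing of a 2×2 real symmetric Gaussian (GOE) matrix follows the Wigner surmise exactly, exhibiting the level repulsion seen in many empirical complex-system spectra:

```python
import math
import random

random.seed(7)

# For a 2x2 GOE matrix [[a, b], [b, d]], the eigenvalue spacing is
# s = sqrt((a - d)^2 + 4 b^2); after normalizing the mean spacing to 1, its
# distribution is exactly the Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4).
def goe_spacing():
    a, d = random.gauss(0, 1), random.gauss(0, 1)
    b = random.gauss(0, 1 / math.sqrt(2))  # GOE: off-diagonal variance is halved
    return math.sqrt((a - d) ** 2 + 4 * b ** 2)

spacings = [goe_spacing() for _ in range(100_000)]
mean = sum(spacings) / len(spacings)
normalized = [s / mean for s in spacings]

# Level repulsion: P(s) vanishes linearly as s -> 0, so very small spacings
# are rare (about 0.8% of normalized spacings fall below 0.1).
frac_small = sum(1 for s in normalized if s < 0.1) / len(normalized)
print(frac_small < 0.02)  # True
```

The same repulsion statistic, computed for correlation-matrix spectra of financial or neural time series, is one way such data are compared against the random-matrix baseline.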

  14. COST Training School on New Economic Complex Geography

    CERN Document Server

    Panchuk, Anastasiia; Radi, Davide

    2016-01-01

    The book presents the lectures delivered during a short course held at Urbino University in summer 2015 on qualitative theory of dynamical systems, included in the activities of the COST Action IS1104 “The EU in the new economic complex geography: models, tools and policy evaluation”. It provides a basic introduction to dynamical systems and optimal control both in continuous and discrete time, as well as some numerical methods and applications in economic modelling. Economic and social systems are intrinsically dynamic, characterized by interdependence, nonlinearity and complexity, and these features can only be approached using a qualitative analysis based on the study of invariant sets (equilibrium points, limit cycles and more complex attractors, together with the boundaries of their basins of attraction), which requires a trade-off between analytical, geometrical and numerical methods. Even though the early steps of the qualitative theory of dynamical systems have been in continuous time models, in e...

  15. Y-12 National Security Complex Emergency Management Hazards Assessment (EMHA) Process; FINAL

    International Nuclear Information System (INIS)

    Bailiff, E.F.; Bolling, J.D.

    2001-01-01

    This document establishes requirements and standard methods for the development and maintenance of the Emergency Management Hazards Assessment (EMHA) process used by the lead contractor and all event contractors at the Y-12 Complex for emergency planning and preparedness. The EMHA process provides the technical basis for the Y-12 emergency management program. The instructions provided in this document include methods and requirements for performing the following emergency management activities at Y-12: (1) hazards identification; (2) hazards survey; and (3) hazards assessment.

  16. Complex Correspondence Principle

    International Nuclear Information System (INIS)

    Bender, Carl M.; Meisinger, Peter N.; Hook, Daniel W.; Wang Qinghai

    2010-01-01

    Quantum mechanics and classical mechanics are distinctly different theories, but the correspondence principle states that quantum particles behave classically in the limit of high quantum number. In recent years much research has been done on extending both quantum and classical mechanics into the complex domain. These complex extensions continue to exhibit a correspondence, and this correspondence becomes more pronounced in the complex domain. The association between complex quantum mechanics and complex classical mechanics is subtle and demonstrating this relationship requires the use of asymptotics beyond all orders.

  17. Fitting methods to paradigms: are ergonomics methods fit for systems thinking?

    Science.gov (United States)

    Salmon, Paul M; Walker, Guy H; M Read, Gemma J; Goode, Natassia; Stanton, Neville A

    2017-02-01

    The issues being tackled within ergonomics problem spaces are shifting. Although existing paradigms appear relevant for modern-day systems, it is worth questioning whether our methods are. This paper asks whether the complexities of systems thinking, a currently ubiquitous ergonomics paradigm, are outpacing the capabilities of our methodological toolkit. This is achieved by examining the contemporary ergonomics problem space and the extent to which ergonomics methods can meet the challenges posed. Specifically, five key areas within the ergonomics paradigm of systems thinking are focused on: normal performance as a cause of accidents, accident prediction, system migration, systems concepts and ergonomics in design. The methods available for pursuing each line of inquiry are discussed, along with their ability to respond to key requirements. In doing so, a series of new methodological requirements and capabilities are identified. It is argued that further methodological development is required to provide researchers and practitioners with appropriate tools to explore both contemporary and future problems. Practitioner Summary: Ergonomics methods are the cornerstone of our discipline. This paper examines whether our current methodological toolkit is fit for purpose given the changing nature of ergonomics problems. The findings provide key research and practice requirements for methodological development.

  18. Methods of determining information needs for control

    Energy Technology Data Exchange (ETDEWEB)

    Borkowski, Z.

    1980-01-01

    Work has begun at the Main Data Center in the field of mining (Poland) on evaluating and improving methods of determining the information requirements necessary for control. Existing methods are briefly surveyed and their shortcomings are shown. The complex character of this problem is pointed out.

  19. Mixed-method research protocol: defining and operationalizing patient-related complexity of nursing care in acute care hospitals.

    Science.gov (United States)

    Huber, Evelyn; Kleinknecht-Dolf, Michael; Müller, Marianne; Kugler, Christiane; Spirig, Rebecca

    2017-06-01

    To define the concept of patient-related complexity of nursing care in acute care hospitals and to operationalize it in a questionnaire. The concept of patient-related complexity of nursing care in acute care hospitals has not been conclusively defined in the literature. Its operationalization in a corresponding questionnaire is necessary, given the increased significance of the topic due to shortened lengths of stay and increased patient morbidity. Hybrid model of concept development and embedded mixed-methods design. The theoretical phase of the hybrid model involved a literature review and the development of a working definition. In the fieldwork phase of 2015 and 2016, an embedded mixed-methods design was applied, with complexity assessments of all patients at five Swiss hospitals over 1 month using our newly operationalized questionnaire 'Complexity of Nursing Care'. These data will be analysed with structural equation modelling. Twelve qualitative case studies will be embedded. They will be analysed using a structured process of constructing case studies and content analysis. In the final analytic phase, the quantitative and qualitative data will be merged and added to the results of the theoretical phase for a common interpretation. The Cantonal Ethics Committee Zurich judged the research programme as unproblematic in December 2014 and May 2015. Following the phases of the hybrid model and using an embedded mixed-methods design can yield an in-depth understanding of patient-related complexity of nursing care in acute care hospitals, a final version of the questionnaire and an acknowledged definition of the concept. © 2016 John Wiley & Sons Ltd.

  20. The geosystems of complex geographical atlases

    Directory of Open Access Journals (Sweden)

    Jovanović Jasmina

    2012-01-01

    Complex geographical atlases represent geosystems of different hierarchical rank, complexity and diversity, scale and connection. They bring together a large number of different pieces of information about geospace. Also, they contain systematized, correlated pieces of information about space, presented in an explicit form. The degree of information revealed in the atlas is determined by its content structure and form of presentation. The quality of an atlas depends on the method of data visualization and the quality of the geodata. Cartographic visualization is a cognitive process: the analysis converts geospatial data into knowledge. A complex geographical atlas represents an information complex, a spatially and temporally coordinated database on geosystems of different complexity and territorial scope. Each geographical atlas defines a concrete geosystem. Systemic organization (structural and contextual) determines its complexity and concreteness. In complex atlases, the attributes of geosystems are modeled and pieces of information are given in a systematized, graphically unified form. The atlas can be considered a database. In composing a database, semantic analysis of the data is important. The result of semantic modeling is expressed in the structuring of data, in emphasizing logical connections between phenomena and processes, and in defining their classes according to degree of similarity. Accordingly, needed pieces of information can be retrieved efficiently when the database is used. An atlas map has a special power to integrate sets of geodata and present information in a user-friendly, understandable visual and tactile way.
Composing an atlas by systemic cartography requires pieces of information on concretely defined geosystems of different hierarchical levels, the application of scientific methods and the making of an adequate number of analytical, synthetic

  1. PNN NGC 246: A Complex Photometric Behaviour That Requires Wet

    Directory of Open Access Journals (Sweden)

    Pérez J. M. González

    2003-03-01

    We present a study based on three single-site campaigns investigating the photometric behaviour of the PNN NGC 246. We observed this object in 2000 and 2001. The analysis of the light curves indicates complex and variable temporal spectra. Using wavelet analysis we have found evidence of changes on time scales of hours in the 2000 dataset. The temporal spectra obtained during 2001 are quite different from the results of the previous year. The modulations in the light curve are more noticeable and the temporal spectra present a higher number of modulation frequencies. One peculiar characteristic is the presence of a variable harmonic structure related to one of these modulation frequencies. This complex photometric behaviour may be explained by a more complicated unresolved combination of modulation frequencies, but more likely by a combination of pulsations of the star plus modulations related to interaction with a close companion, maybe indicating a disc. However, these characteristics cannot be confirmed from single-site observations. The complex and variable behaviour of NGC 246 requires WET co-operation in order to completely resolve its light curve.

  2. Complex multidisciplinary system composition for aerospace vehicle conceptual design

    Science.gov (United States)

    Gonzalez, Lex

    Although there exists a vast amount of work concerning the analysis, design and integration of aerospace vehicle systems, there is no standard for how this data and knowledge should be combined to create a synthesis system. Each institution creating a synthesis system has in-house vehicle and hardware components it attempts to model, and proprietary methods with which to model them. Synthesis systems therefore begin as one-off creations meant to answer a specific problem. As the scope of a synthesis system grows to encompass more and more problems, so do its size and complexity; for a single synthesis system to answer multiple questions, the number of methods and method interfaces must increase. As a means to curtail the tendency for increases in an aircraft synthesis system's capability to increase its size and complexity, this research effort focuses on the idea that each problem in aerospace requires its own analysis framework. By centering the methodology on matching an analysis framework to the problem being solved, the complexity of the analysis framework is decoupled from the complexity of the system that creates it. The derived methodology allows for the composition of complex multidisciplinary systems (CMDS) through the automatic creation and implementation of system and disciplinary method interfaces. The CMDS Composition process follows a four-step methodology meant to take a problem definition and progress towards the creation of an analysis framework meant to answer said problem. The implementation of the CMDS Composition process takes user-selected disciplinary analysis methods and automatically integrates them to create a syntactically composable analysis framework. As a means of assessing the validity of the CMDS Composition process, a prototype system (AVD DBMS) has been developed. AVD DBMS has been used to model the

  3. KDM2B Recruitment of the Polycomb Group Complex, PRC1.1, Requires Cooperation between PCGF1 and BCORL1.

    Science.gov (United States)

    Wong, Sarah J; Gearhart, Micah D; Taylor, Alexander B; Nanyes, David R; Ha, Daniel J; Robinson, Angela K; Artigas, Jason A; Lee, Oliver J; Demeler, Borries; Hart, P John; Bardwell, Vivian J; Kim, Chongwoo A

    2016-10-04

    KDM2B recruits H2A-ubiquitinating activity of a non-canonical Polycomb Repression Complex 1 (PRC1.1) to CpG islands, facilitating gene repression. We investigated the molecular basis of recruitment using in vitro assembly assays to identify minimal components, subcomplexes, and domains required for recruitment. A minimal four-component PRC1.1 complex can be assembled by combining two separately isolated subcomplexes: the DNA-binding KDM2B/SKP1 heterodimer and the heterodimer of BCORL1 and PCGF1, a core component of PRC1.1. The crystal structure of the KDM2B/SKP1/BCORL1/PCGF1 complex illustrates the crucial role played by the PCGF1/BCORL1 heterodimer. The BCORL1 PUFD domain positions residues preceding the RAWUL domain of PCGF1 to create an extended interface for interaction with KDM2B, which is unique to the PCGF1-containing PRC1.1 complex. The structure also suggests how KDM2B might simultaneously function in PRC1.1 and an SCF ubiquitin ligase complex and the possible molecular consequences of BCOR PUFD internal tandem duplications found in pediatric kidney and brain tumors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Atg6/UVRAG/Vps34-Containing Lipid Kinase Complex Is Required for Receptor Downregulation through Endolysosomal Degradation and Epithelial Polarity during Drosophila Wing Development

    Directory of Open Access Journals (Sweden)

    Péter Lőrincz

    2014-01-01

    Atg6 (Beclin 1 in mammals) is a core component of the Vps34 PI3K (III) complex, which promotes multiple vesicle trafficking pathways. Atg6 and Vps34 form two distinct PI3K (III) complexes in yeast and mammalian cells, either with Atg14 or with UVRAG. The functions of these two complexes are not entirely clear, as both Atg14 and UVRAG have been suggested to regulate both endocytosis and autophagy. In this study, we performed a microscopic analysis of UVRAG, Atg14, or Atg6 loss-of-function cells in the developing Drosophila wing. Both autophagy and endocytosis are seriously impaired and defective endolysosomes accumulate upon loss of Atg6. We show that Atg6 is required for the downregulation of Notch and Wingless signaling pathways; thus it is essential for normal wing development. Moreover, the loss of Atg6 impairs cell polarity. Atg14 depletion results in autophagy defects with no effect on endocytosis or cell polarity, while the silencing of UVRAG phenocopies all but the autophagy defect of Atg6 depleted cells. Thus, our results indicate that the UVRAG-containing PI3K (III) complex is required for receptor downregulation through endolysosomal degradation and for the establishment of proper cell polarity in the developing wing, while the Atg14-containing complex is involved in autophagosome formation.

  5. Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas

    2013-01-01

    sparse signals, but requires computationally expensive reconstruction algorithms. This can be an obstacle for real-time applications. The reduction of complexity is achieved by applying a multi-coset sampling procedure. This proposed method reduces the size of the dictionary matrix, the size...
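The multi-coset idea referred to in the abstract can be sketched generically: from a length-L Nyquist grid, only samples whose index falls in a chosen set of residues (cosets) modulo L are kept, lowering the average sampling rate and shrinking the reconstruction dictionary accordingly. The period and coset pattern below are illustrative, not taken from the paper:

```python
# Multi-coset sampling sketch: keep only Nyquist-grid samples whose index
# modulo the period L lies in the chosen coset set. Illustrative pattern only.
L = 10
COSETS = {0, 3, 7}

def multicoset_indices(n_samples, period=L, kept=COSETS):
    """Indices of the Nyquist grid retained by the multi-coset pattern."""
    return [n for n in range(n_samples) if n % period in kept]

idx = multicoset_indices(30)
print(len(idx), len(idx) / 30)  # 9 0.3 -> average rate is 30% of Nyquist
```

With |C| = 3 cosets out of L = 10, the average rate drops to 3/10 of the Nyquist rate while the periodic structure of the pattern is what a reconstruction algorithm exploits.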

  6. Identifying influential spreaders in complex networks based on kshell hybrid method

    Science.gov (United States)

    Namtirtha, Amrita; Dutta, Animesh; Dutta, Biswanath

    2018-06-01

    Influential spreaders are the key players in maximizing or controlling spreading in a complex network. Identifying influential spreaders using the kshell decomposition method has become very popular in recent times. In the literature, the core nodes of a network, i.e. those with the largest kshell index, are considered the most influential spreaders. We have studied the kshell method and the spreading dynamics of nodes using the Susceptible-Infected-Recovered (SIR) epidemic model to understand the behavior of influential spreaders in terms of their topological location in the network. From this study, we have found that not every node in the core area is a most influential spreader. Even a strategically placed lower-shell node can be a most influential spreader. Moreover, the core area can also be situated at the periphery of the network. The existing indexing methods are only designed to identify the most influential spreaders among core nodes, not in the lower shells. In this work, we propose a kshell hybrid method to identify highly influential spreaders not only from the core but also from the lower shells. The proposed method comprises parameters such as kshell power, node degree, contact distance, and many levels of neighbors' influence potential. The proposed method is evaluated using nine real-world network datasets. In terms of spreading dynamics, the experimental results show the superiority of the proposed method over other existing indexing methods such as the kshell method, neighborhood coreness centrality, and mixed degree decomposition. Furthermore, the proposed method can also be applied to large-scale networks by considering three levels of neighbors' influence potential.
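The hybrid index itself is not specified in the abstract, but it builds on kshell (k-core) decomposition, which can be sketched as an iterative peeling procedure (a generic implementation, not the authors' code):

```python
# Generic kshell (k-core) decomposition by iterative peeling: repeatedly
# remove all nodes whose remaining degree is <= k, assigning them shell k.
def kshell_indices(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    shell, k = {}, 0
    while adj:
        k = max(k, min(len(nbrs) for nbrs in adj.values()))
        peel = [n for n, nbrs in adj.items() if len(nbrs) <= k]
        while peel:
            n = peel.pop()
            if n not in adj:
                continue  # already peeled via a duplicate entry
            shell[n] = k
            for m in adj.pop(n):
                adj[m].discard(n)
                if len(adj[m]) <= k:
                    peel.append(m)
    return shell

# A triangle (a, b, c) with a pendant node d: the triangle forms the
# 2-shell and the pendant node the 1-shell.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "d")]
print(dict(sorted(kshell_indices(edges).items())))
# {'a': 2, 'b': 2, 'c': 2, 'd': 1}
```

The paper's contribution is then to rank nodes within and across these shells using the additional parameters listed in the abstract, rather than treating the largest-k shell alone as the set of top spreaders.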

  7. Evaluation of Two Statistical Methods Provides Insights into the Complex Patterns of Alternative Polyadenylation Site Switching

    Science.gov (United States)

    Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng

    2015-01-01

    Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
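The distinction the abstract draws can be illustrated with a toy example (hypothetical read counts, not the paper's data): a symmetric shift of usage toward both the proximal and distal APA sites is detected by a chi-square independence test, yet it leaves the average site position (a proxy for mean 3'-UTR length, the quantity a linear trend test tracks) unchanged:

```python
# Hypothetical read counts at three APA sites (proximal, middle, distal)
# for one gene in two samples; NOT data from the paper.
normal = [100, 200, 100]
tumour = [150, 100, 150]

def chi2_independence(table):
    """Pearson chi-square statistic for an R x C contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    return sum((obs - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
               for i, r in enumerate(table) for j, obs in enumerate(r))

stat = chi2_independence([normal, tumour])
print(stat > 5.991)  # True: exceeds the df=2, alpha=0.05 critical value

# The count-weighted average site position is identical in both samples,
# so a test keyed to mean 3'-UTR length would miss this complex switch.
mean_pos = lambda c: sum((i + 1) * n for i, n in enumerate(c)) / sum(c)
print(mean_pos(normal), mean_pos(tumour))  # 2.0 2.0
```

This is exactly the kind of complex switching pattern that, per the abstract, passes the independence test but fails the linear trend test.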

  8. A thermogravimetric analysis (TGA) method developed for estimating the stoichiometric ratio of solid-state α-cyclodextrin-based inclusion complexes

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yuxiang; Wang, Jinpeng; Bashari, Mohanad; Hu, Xiuting [The State Key Laboratory of Food Science and Technology, School of Food Science and Technology, Jiangnan University, Wuxi 214122 (China); Feng, Tao [School of Perfume and Aroma Technology, Shanghai Institute of Technology, Shanghai 201418 (China); Xu, Xueming [The State Key Laboratory of Food Science and Technology, School of Food Science and Technology, Jiangnan University, Wuxi 214122 (China); Jin, Zhengyu, E-mail: jinlab2008@yahoo.com [The State Key Laboratory of Food Science and Technology, School of Food Science and Technology, Jiangnan University, Wuxi 214122 (China); Tian, Yaoqi, E-mail: yqtian@jiangnan.edu.cn [The State Key Laboratory of Food Science and Technology, School of Food Science and Technology, Jiangnan University, Wuxi 214122 (China)

    2012-08-10

    Highlights: ► We develop a TGA method for the measurement of the stoichiometric ratio. ► A series of formulas are deduced to calculate the stoichiometric ratio. ► Four α-CD-based inclusion complexes were successfully prepared. ► The developed method is applicable. - Abstract: An approach mainly based on thermogravimetric analysis (TGA) was developed to evaluate the stoichiometric ratio (SR, guest to host) of the guest–α-cyclodextrin (Guest-α-CD) inclusion complexes (4-cresol-α-CD, benzyl alcohol-α-CD, ferrocene-α-CD and decanoic acid-α-CD). Data obtained from Fourier transform-infrared (FT-IR) spectroscopy showed that all the α-CD-based inclusion complexes were successfully prepared in solid-state form. The stoichiometric ratios of α-CD to the respective guests (4-cresol, benzyl alcohol, ferrocene and decanoic acid) determined by the developed method were 1:1, 1:2, 2:1 and 1:2, respectively. These SR values were well corroborated by the previously reported X-ray diffraction (XRD) method and by NMR confirmatory experiments, except that the SR of decanoic acid, which has a larger size and a longer chain, was not consistent. It is therefore suggested that the TGA-based method is applicable for determining the stoichiometric ratio of polycrystalline α-CD-based inclusion complexes with smaller, shorter-chain guests.

  9. A thermogravimetric analysis (TGA) method developed for estimating the stoichiometric ratio of solid-state α-cyclodextrin-based inclusion complexes

    International Nuclear Information System (INIS)

    Bai, Yuxiang; Wang, Jinpeng; Bashari, Mohanad; Hu, Xiuting; Feng, Tao; Xu, Xueming; Jin, Zhengyu; Tian, Yaoqi

    2012-01-01

    Highlights: ► We develop a TGA method for the measurement of the stoichiometric ratio. ► A series of formulas are deduced to calculate the stoichiometric ratio. ► Four α-CD-based inclusion complexes were successfully prepared. ► The developed method is applicable. - Abstract: An approach mainly based on thermogravimetric analysis (TGA) was developed to evaluate the stoichiometric ratio (SR, guest to host) of the guest–α-cyclodextrin (Guest-α-CD) inclusion complexes (4-cresol-α-CD, benzyl alcohol-α-CD, ferrocene-α-CD and decanoic acid-α-CD). Data obtained from Fourier transform-infrared (FT-IR) spectroscopy showed that all the α-CD-based inclusion complexes were successfully prepared in solid-state form. The stoichiometric ratios of α-CD to the respective guests (4-cresol, benzyl alcohol, ferrocene and decanoic acid) determined by the developed method were 1:1, 1:2, 2:1 and 1:2, respectively. These SR values were well corroborated by the previously reported X-ray diffraction (XRD) method and by NMR confirmatory experiments, except that the SR of decanoic acid, which has a larger size and a longer chain, was not consistent. It is therefore suggested that the TGA-based method is applicable for determining the stoichiometric ratio of polycrystalline α-CD-based inclusion complexes with smaller, shorter-chain guests.
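The paper's own formulas are not reproduced in the abstract, but the underlying idea of converting a TGA mass-loss step into a guest-to-host mole ratio can be sketched generically. The assumptions below are ours, not the paper's: the guest-release step accounts for the entire loss fraction, the remainder is α-CD, and the molar masses are approximate:

```python
# Generic mole-ratio estimate from a TGA guest-release mass-loss step.
# Assumes (our assumption, not the paper's formula) that the loss fraction
# of the dry complex is entirely released guest, the remainder alpha-CD.
M_ALPHA_CD = 972.85   # g/mol, C36H60O30 (approximate)
M_4_CRESOL = 108.14   # g/mol, C7H8O (approximate)

def stoichiometric_ratio(loss, m_guest, m_host=M_ALPHA_CD):
    """Guest-to-host mole ratio implied by a guest-release mass loss."""
    return (loss / m_guest) / ((1.0 - loss) / m_host)

# A hypothetical ~10% mass loss for a 4-cresol complex implies roughly 1:1.
print(round(stoichiometric_ratio(0.10, M_4_CRESOL), 2))  # 1.0
```

Because the guest and host molar masses differ by an order of magnitude, even a modest mass-loss step can correspond to an integer mole ratio, which is what the deduced formulas in the paper exploit.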

  10. Simulating Engineering Flows through Complex Porous Media via the Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    Vesselin Krassimirov Krastev

    2018-03-01

    In this paper, recent achievements in the application of the lattice Boltzmann method (LBM) to complex fluid flows are reported. More specifically, we focus on flows through reactive porous media, such as the flow through the substrate of a selective catalytic reactor (SCR) for the reduction of gaseous pollutants in the automotive field; pulsed-flow analysis through heterogeneous catalyst architectures; and transport and electro-chemical phenomena in microbial fuel cells (MFC) for novel waste-to-energy applications. To the authors' knowledge, this is the first known application of LBM modeling to the study of MFCs, which in itself represents a highly innovative and challenging research area. The results discussed here essentially confirm the capabilities of the LBM approach as a flexible and accurate computational tool for the simulation of complex multi-physics phenomena of scientific and technological interest, across physical scales.
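As a toy illustration of the LBM approach, far simpler than the reactive porous-media and MFC flows reported in the paper, a one-dimensional D1Q3 BGK scheme for pure diffusion shows the characteristic collide-and-stream structure:

```python
# Toy 1-D lattice Boltzmann (D1Q3, BGK) diffusion of a concentration spike
# on a periodic domain; illustrative only, not the paper's solver.
W = [2 / 3, 1 / 6, 1 / 6]      # weights for velocities {0, +1, -1}
N, TAU, STEPS = 50, 1.0, 100

rho = [0.0] * N
rho[N // 2] = 1.0
f = [[w * r for r in rho] for w in W]   # start from local equilibrium

for _ in range(STEPS):
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(N)]
    for i, w in enumerate(W):           # collision: relax toward w_i * rho
        for x in range(N):
            f[i][x] += (w * rho[x] - f[i][x]) / TAU
    f[1] = [f[1][(x - 1) % N] for x in range(N)]  # stream right-movers
    f[2] = [f[2][(x + 1) % N] for x in range(N)]  # stream left-movers

mass = sum(f[0]) + sum(f[1]) + sum(f[2])
print(round(mass, 9))  # 1.0: collide-and-stream conserves total mass
```

The spike spreads diffusively while mass is conserved to machine precision; real LBM applications, as in the paper, extend the same update rule to multiple dimensions, complex boundary geometries and coupled reaction or electro-chemical terms.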

  11. The method of measurement and synchronization control for large-scale complex loading system

    International Nuclear Information System (INIS)

    Liao Min; Li Pengyuan; Hou Binglin; Chi Chengfang; Zhang Bo

    2012-01-01

    With the development of modern industrial technology, measurement and control systems have become widely used in high-precision, complex industrial control equipment and large-tonnage loading devices. A measurement and control system is often used to analyze the distribution of stress and displacement in a complex bearing load or in the complex mechanical structure itself. In the ITER GS mock-up with 5 flexible plates, for each load combination, it is necessary to detect and measure potential slippage between the central flexible plate and the neighboring spacers, as well as potential slippage between each pre-stressing bar and its neighboring plate. The measurement and control system consists of seven sets of EDC controllers and boards, a computer system, a 16-channel quasi-dynamic strain gauge, 25 sets of displacement sensors, and 7 sets of load and displacement sensors in the cylinders. This paper demonstrates the principles and methods by which the EDC220 digital controller achieves synchronization control, and the R and D process of the multi-channel loading control software and measurement software. (authors)

  12. A Comparison of Molecular Typing Methods Applied to Enterobacter cloacae complex: hsp60 Sequencing, Rep-PCR, and MLST

    Directory of Open Access Journals (Sweden)

    Roberto Viau

    2017-02-01

    Molecular typing using repetitive sequence-based PCR (rep-PCR) and hsp60 sequencing were applied to a collection of diverse Enterobacter cloacae complex isolates. To determine the most practical method for reference laboratories, we analyzed 71 E. cloacae complex isolates from sporadic and outbreak occurrences originating from 4 geographic areas. While rep-PCR was more discriminating, hsp60 sequencing provided a broader and more objective geographical tracking method similar to multilocus sequence typing (MLST). In addition, we suggest that MLST may have higher discriminative power than hsp60 sequencing, although rep-PCR remains the most discriminative method for local outbreak investigations. Rep-PCR can also be an effective and inexpensive method for local outbreak investigation.

  13. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission and display/storage, several open problems must be overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution, transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. A method to reduce decoding complexity, suitable for system-on-chip implementation, is then proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and that its inclusion in the VSN infrastructure provides an additional level of complexity control functionality.

  14. Screening tests for hazard classification of complex waste materials – Selection of methods

    International Nuclear Information System (INIS)

    Weltens, R.; Vanermen, G.; Tirez, K.; Robbens, J.; Deprez, K.; Michiels, L.

    2012-01-01

    In this study we describe the development of an alternative methodology for hazard characterization of waste materials. Such an alternative methodology for hazard assessment of complex waste materials is urgently needed, because the lack of a validated instrument leads to arbitrary hazard classification of such materials. False classification can lead to human and environmental health risks and also has important financial consequences for the waste owner. The Hazardous Waste Directive (HWD) describes the methodology for hazard classification of waste materials. For mirror entries the HWD classification is based upon the hazardous properties (H1–15) of the waste, which can be assessed from the hazardous properties of individual identified waste compounds or – if not all compounds are identified – from the results of hazard assessment tests performed on the waste material itself. For the latter the HWD recommends toxicity tests that were initially designed for risk assessment of chemicals in consumer products (pharmaceuticals, cosmetics, biocides, food, etc.). These tests (often using mammals) are neither designed for nor suitable for the hazard characterization of waste materials. With the present study we want to contribute to the development of an alternative and transparent test strategy for hazard assessment of complex wastes that is in line with the HWD principles for waste classification. It is necessary to address this important shortcoming in hazardous waste classification and to demonstrate that alternative methods are available for hazard assessment of waste materials. Next, by describing the pros and cons of the available methods, and by identifying the needs for additional or further development of test methods, we hope to stimulate research efforts and development in this direction. In this paper we describe promising techniques and argue the test selection for the pilot study that we have performed on different

  15. Complexity in practice: understanding primary care as a complex adaptive system

    Directory of Open Access Journals (Sweden)

    Beverley Ellis

    2010-06-01

    Conclusions The results are real-world exemplars of the emergent properties of complex adaptive systems. Improving clinical governance in primary care requires both complex social interactions and underpinning informatics. The socio-technical lessons learned from this research should inform future management approaches.

  16. Number theoretic methods in cryptography complexity lower bounds

    CERN Document Server

    Shparlinski, Igor

    1999-01-01

    The book introduces new techniques which imply rigorous lower bounds on the complexity of some number theoretic and cryptographic problems. These methods and techniques are based on bounds of character sums and on the numbers of solutions of some polynomial equations over finite fields and residue rings. It also contains a number of open problems and proposals for further research. We obtain several lower bounds, exponential in terms of log p, on the degrees and orders of • polynomials; • algebraic functions; • Boolean functions; • linear recurring sequences; coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as p^(1/2+ε)). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case of d = 2 is of special interest since it corresponds to the representation of the rightmost bit of the discrete logarithm and defines whether the argument is a quadratic...
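
The d = 2 case has a concrete elementary side: the rightmost bit of a discrete logarithm modulo p is 0 exactly when the argument is a quadratic residue, which Euler's criterion decides in polynomial time even though the full logarithm is believed hard. A small illustration (the example values are ours, not the book's):

```python
def rightmost_dlog_bit(a, p):
    """Parity of log_g(a) mod p for any primitive root g:
    0 iff a is a quadratic residue mod p (Euler's criterion)."""
    return 0 if pow(a, (p - 1) // 2, p) == 1 else 1

p, g = 11, 2  # 2 is a primitive root modulo 11
for x in range(1, 11):
    a = pow(g, x, p)                      # a = g^x mod p
    assert rightmost_dlog_bit(a, p) == x % 2
print("parity of the discrete log recovered for every residue mod", p)
```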

  17. Adjust the method of the FMEA to the requirements of the aviation industry

    Directory of Open Access Journals (Sweden)

    Andrzej FELLNER

    2015-12-01

    Full Text Available The article presents a summary of current methods used in aviation and rail transport. It also contains a proposal to adjust the FMEA method to the latest requirements of the airline industry. The authors suggest tables of the indicators Zn, Pr and Dt necessary to implement the FMEA method of risk analysis, taking into account current achievements in aerospace and rail safety. Acceptable limits of the RPN number are also proposed, which allow threats to be classified.
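
FMEA risk analysis of the kind adjusted here conventionally multiplies three indicators (severity, occurrence and detectability — the roles played by the Zn, Pr and Dt tables above) into a Risk Priority Number compared against an acceptable limit. A minimal sketch with hypothetical scales, scores and threshold:

```python
def rpn(severity, occurrence, detectability):
    """Risk Priority Number from three 1-10 indicators."""
    for v in (severity, occurrence, detectability):
        assert 1 <= v <= 10
    return severity * occurrence * detectability

RPN_LIMIT = 100  # hypothetical acceptability threshold

failure_modes = {                     # hypothetical failure modes
    "hydraulic leak": (7, 3, 4),
    "sensor drift": (4, 6, 2),
    "software timeout": (8, 2, 8),
}
for name, scores in failure_modes.items():
    n = rpn(*scores)
    print(name, n, "unacceptable" if n > RPN_LIMIT else "acceptable")
```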

  18. A time-minimizing hybrid method for fitting complex Moessbauer spectra

    International Nuclear Information System (INIS)

    Steiner, K.J.

    2000-07-01

    The process of fitting complex Moessbauer spectra is known to be time-consuming. The fitting process involves a mathematical model for the combined hyperfine interaction which can be solved by an iteration method only. The iteration method is very sensitive to its input parameters; with arbitrary input parameters it is most unlikely that the iteration will converge. Up to now a scientist has had to spend her/his time guessing appropriate input parameters for the iteration process. The idea is to replace the guessing phase by a genetic algorithm. The genetic algorithm starts with an initial population of arbitrary input parameter sets, each of which is called an individual. The first step is to evaluate the fitness of all individuals. Afterwards the current population is recombined to form a new population. The process of recombination involves the successive application of the genetic operators selection, crossover, and mutation. These operators mimic the process of natural evolution, i.e. the concept of the survival of the fittest. Even though there is no formal proof that the genetic algorithm will eventually converge, there is an excellent chance that after some generations the population will contain very good individuals. The hybrid method presented here combines a very modern version of a genetic algorithm with a conventional least-squares routine solving the combined interaction Hamiltonian, i.e. providing a physical solution with the original Moessbauer parameters from a minimum of input. (author)
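
The guess-then-refine idea can be sketched on a toy problem: a genetic phase locates the basin of attraction, and a deterministic local search stands in for the least-squares routine. The single-Lorentzian model and every parameter choice below are illustrative, not the combined hyperfine Hamiltonian of the paper:

```python
import random

def lorentzian(x, center, width):
    return 1.0 / (1.0 + ((x - center) / width) ** 2)

def sse(params, xs, ys):
    c, w = params
    return sum((lorentzian(x, c, w) - y) ** 2 for x, y in zip(xs, ys))

random.seed(42)
xs = [i * 0.1 - 5.0 for i in range(101)]
ys = [lorentzian(x, 1.3, 0.4) for x in xs]          # synthetic "spectrum"

# genetic phase: evolve (center, width) pairs toward the right basin
pop = [(random.uniform(-5, 5), random.uniform(0.05, 2)) for _ in range(40)]
for _ in range(60):
    pop.sort(key=lambda p: sse(p, xs, ys))
    survivors = pop[:20]                            # selection
    children = []
    for _ in range(20):
        a, b = random.sample(survivors, 2)          # crossover (blend)
        child = ((a[0] + b[0]) / 2 + random.gauss(0, 0.1),       # + mutation
                 max(0.01, (a[1] + b[1]) / 2 + random.gauss(0, 0.05)))
        children.append(child)
    pop = survivors + children

# deterministic refinement from the best individual (stand-in for the
# conventional least-squares routine of the paper)
best = min(pop, key=lambda p: sse(p, xs, ys))
step = 0.05
while step > 1e-6:
    improved = False
    for axis in (0, 1):
        for d in (step, -step):
            cand = list(best)
            cand[axis] += d
            if sse(tuple(cand), xs, ys) < sse(best, xs, ys):
                best, improved = tuple(cand), True
    if not improved:
        step /= 2
print(best)  # close to the true (center, width) = (1.3, 0.4)
```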

  19. a Range Based Method for Complex Facade Modeling

    Science.gov (United States)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    the complex architecture. From the point cloud we can extract a false colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics: the Displacement modifier makes it possible to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike the similar bump map, the displacement modifier does not only simulate the effect, but really deforms the planar surface. In this way the 3d model can be used not only in a static representation, but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3d Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3d raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large-scale representations. It can also be used to represent particular effects, such as the deformation of walls, in a complete 3d way.

  20. A RANGE BASED METHOD FOR COMPLEX FACADE MODELING

    Directory of Open Access Journals (Sweden)

    A. Adami

    2012-09-01

    homogeneous point cloud of the complex architecture. From the point cloud we can extract a false colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics: the Displacement modifier makes it possible to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike the similar bump map, the displacement modifier does not only simulate the effect, but really deforms the planar surface. In this way the 3d model can be used not only in a static representation, but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3d Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3d raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large-scale representations. It can also be used to represent particular effects, such as the deformation of walls, in a complete 3d way.
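
The height-map extraction described above — signed distance from the average plane, quantized to grayscale at a chosen pixel size — can be sketched as follows; the points and pixel size are hypothetical:

```python
def height_map(points, pixel_size, d_min, d_max):
    """Rasterize (x, y, distance-from-plane) points into a grayscale
    grid (0 = d_min, 255 = d_max), averaging the points in each cell."""
    x0 = min(p[0] for p in points)
    y0 = min(p[1] for p in points)
    w = int((max(p[0] for p in points) - x0) / pixel_size) + 1
    h = int((max(p[1] for p in points) - y0) / pixel_size) + 1
    sums = [[0.0] * w for _ in range(h)]
    counts = [[0] * w for _ in range(h)]
    for x, y, d in points:
        i, j = int((y - y0) / pixel_size), int((x - x0) / pixel_size)
        sums[i][j] += d
        counts[i][j] += 1
    return [[round(255 * (sums[i][j] / counts[i][j] - d_min) / (d_max - d_min))
             if counts[i][j] else 0                   # empty cell: on-plane
             for j in range(w)] for i in range(h)]

# four hypothetical points on a facade patch, 1 m pixels, relief up to 0.10 m
pts = [(0.2, 0.3, 0.0), (1.5, 0.4, 0.10), (0.4, 0.6, 0.05), (1.6, 0.7, 0.10)]
print(height_map(pts, 1.0, 0.0, 0.10))  # [[64, 255]]
```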

  1. Developments based on stochastic and determinist methods for studying complex nuclear systems

    International Nuclear Information System (INIS)

    Giffard, F.X.

    2000-01-01

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous-energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
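
The benefit of biased sampling can be illustrated on a toy slab-transmission problem: analog Monte Carlo almost never scores a rare deep-penetration event, while path-stretching with the corresponding weight correction estimates it accurately from few histories. This is only a one-dimensional caricature, not the TRIPOLI-4/ERANOS scheme:

```python
import math
import random

L = 10.0                       # shield thickness in mean free paths
exact = math.exp(-L)           # analytic transmission probability

def analog(n, rng):
    # score 1 whenever the sampled free path exceeds the slab
    return sum(1 for _ in range(n) if rng.expovariate(1.0) > L) / n

def biased(n, b, rng):
    # stretched free paths sampled at rate b < 1, corrected by weight f/g
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(b)
        if x > L:
            total += math.exp(-(1.0 - b) * x) / b
    return total / n

rng = random.Random(1)
print(analog(10_000, rng))               # almost always 0.0: too rare
print(biased(10_000, 0.1, rng), exact)   # close to exp(-10) ~ 4.54e-05
```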

  2. Method and apparatus for purifying nucleic acids and performing polymerase chain reaction assays using an immiscible fluid

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Chung-Yan; Light, Yooli Kim; Piccini, Matthew Ernest; Singh, Anup K.

    2017-10-31

    Embodiments of the present invention are directed toward devices, systems, and methods for purifying nucleic acids to conduct polymerase chain reaction (PCR) assays. In one example, a method includes generating complexes of silica beads and nucleic acids in a lysis buffer, transporting the complexes through an immiscible fluid to remove interfering compounds from the complexes, further transporting the complexes into a density medium containing components required for PCR where the nucleic acids disassociate from the silica beads, and thermocycling the contents of the density medium to achieve PCR. Signal may be detected from labeling agents in the components required for PCR.

  3. Thermodynamic method for obtaining the solubilities of complex medium-sized chemicals in pure and mixed solvents

    DEFF Research Database (Denmark)

    Abildskov, Jens; O'Connell, J.P.

    2005-01-01

    This paper extends our previous simplified approach to using group contribution methods and limited data to determine differences in solubility of sparingly soluble complex chemicals as the solvent is changed. New applications include estimating temperature dependence and the effect of adding cosolvents. Though we present no new solution theory, the paper shows an especially efficient use of thermodynamic models for solvent and cosolvent selection for product formulations. Examples and discussion of applications are given. (c) 2004 Elsevier B.V. All rights reserved.
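
The working relation behind such solvent screening is that, for a dilute solid solute, the solubility in each solvent is approximately the ideal (melting-property) solubility divided by the solute's activity coefficient in that solvent, so ranking solvents reduces to comparing activity coefficients from a group contribution method. A sketch with hypothetical property values, not the paper's data:

```python
import math

R = 8.314  # J/(mol K)

def ideal_solubility(dH_fus, T_m, T):
    """Ideal solubility (mole fraction) of a solid from its melting data."""
    return math.exp(-dH_fus / R * (1.0 / T - 1.0 / T_m))

# hypothetical solute: dHfus = 25 kJ/mol, Tm = 420 K, evaluated at 298 K
x_id = ideal_solubility(25e3, 420.0, 298.0)

# infinite-dilution activity coefficients, e.g. from a group-contribution
# method such as UNIFAC -- hypothetical values for two candidate solvents
gamma = {"ethanol": 8.0, "ethyl acetate": 3.0}
for solvent, g in gamma.items():
    print(solvent, x_id / g)   # x = x_ideal / gamma (dilute approximation)
```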

  4. Small-volume potentiometric titrations: EPR investigations of Fe-S cluster N2 in mitochondrial complex I.

    Science.gov (United States)

    Wright, John J; Salvadori, Enrico; Bridges, Hannah R; Hirst, Judy; Roessler, Maxie M

    2016-09-01

    EPR-based potentiometric titrations are a well-established method for determining the reduction potentials of cofactors in large and complex proteins with at least one EPR-active state. However, such titrations require large amounts of protein. Here, we report a new method that requires an order of magnitude less protein than previously described methods, and that provides EPR samples suitable for measurements at both X- and Q-band microwave frequencies. We demonstrate our method by determining the reduction potential of the terminal [4Fe-4S] cluster (N2) in the intramolecular electron-transfer relay in mammalian respiratory complex I. The value determined by our method, Em7 = -158 mV, is precise, reproducible, and consistent with previously reported values. Our small-volume potentiometric titration method will facilitate detailed investigations of EPR-active centres in non-abundant and refractory proteins that can only be prepared in small quantities. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
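
In such a titration the midpoint potential is extracted by fitting the EPR signal intensity versus poised potential to the Nernst equation. A minimal sketch with simulated one-electron data; the grid-search fit and the point spacing are illustrative:

```python
import math

F_over_RT = 96485.0 / (8.314 * 298.0)   # 1/V at 25 C

def fraction_reduced(E, Em, n=1):
    """Nernst equation: fraction of centres reduced at poised potential E."""
    return 1.0 / (1.0 + math.exp(n * F_over_RT * (E - Em)))

# simulated EPR signal intensities (proportional to the reduced fraction)
Em_true = -0.158   # volts, the order of the N2 value reported above
potentials = [-0.35, -0.30, -0.25, -0.20, -0.15, -0.10, -0.05, 0.0]
signal = [fraction_reduced(E, Em_true) for E in potentials]

# crude 1 mV grid search for the midpoint potential
best = min((sum((fraction_reduced(E, Em) - s) ** 2
                for E, s in zip(potentials, signal)), Em)
           for Em in [(-400 + i) / 1000 for i in range(400)])[1]
print(best)  # -0.158
```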

  5. Automated design of complex dynamic systems.

    Directory of Open Access Journals (Sweden)

    Michiel Hermans

    Full Text Available Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure and its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of machine learning and optimal control have a largely unexplored overlapping area which envelopes a novel design methodology for smart and highly complex physical systems.
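
The core loop — optimizing the parameters of a differential equation against a target behavior — can be sketched with a toy damped oscillator: simulate, measure the mismatch, and search the parameter. A derivative-free search stands in here for the paper's machine-learning machinery, and all model choices are ours:

```python
def simulate(c, steps=2000, dt=0.01):
    """Euler-integrate x'' = -x - c x' from x=1, v=0; return trajectory."""
    x, v, traj = 1.0, 0.0, []
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-x - c * v)
        traj.append(x)
    return traj

def loss(c, target):
    return sum((a - b) ** 2 for a, b in zip(simulate(c), target))

target = simulate(1.4)          # behaviour produced by the "true" damping

lo, hi = 0.0, 3.0
for _ in range(60):             # ternary search on the unimodal mismatch
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if loss(m1, target) < loss(m2, target):
        hi = m2
    else:
        lo = m1
c_fit = (lo + hi) / 2
print(c_fit)  # ~1.4, the damping that produced the target
```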

  6. Guidance and methods for satisfying low specific activity material and surface contaminated object regulatory requirements

    International Nuclear Information System (INIS)

    Pope, R.B.; Shappert, L.B.; Michelhaugh, R.D.; Boyle, R.W.; Easton, E.P.; Coodk, J.R.

    1998-01-01

    The U.S. Department of Transportation (DOT) and the U.S. Nuclear Regulatory Commission (NRC) have prepared a comprehensive set of draft guidance for shippers and inspectors to use when applying the newly imposed regulatory requirements for low specific activity (LSA) material and surface contaminated objects (SCOs). These requirements represent, in some areas, significant departures from the manner in which these materials and objects were regulated by earlier versions of the regulations. The proper interpretation and application of the regulatory criteria can require a fairly complex set of decisions. To assist those applying these regulatory requirements, a detailed set of logic-flow diagrams representing decisions related to multiple factors was prepared and included in the draft report for comment, Categorizing and Transporting Low Specific Activity Materials and Surface Contaminated Objects (DOT/NRC, 1997). These logic-flow diagrams, as developed, are specific to the U.S. regulations, but were readily adaptable to the IAEA regulations. The diagrams have been modified accordingly and tied directly to specific paragraphs in IAEA Safety Series No. 6. This paper provides the logic-flow diagrams adapted to the IAEA regulations, and demonstrates how these diagrams can be used to assist consignors and inspectors in assessing the compliance of shipments with the LSA material and SCO regulatory requirements. (authors)

  7. Design requirements, criteria and methods for seismic qualification of CANDU power plants

    International Nuclear Information System (INIS)

    Singh, N.; Duff, C.G.

    1979-10-01

    This report describes the requirements and criteria for the seismic design and qualification of systems and equipment in CANDU nuclear power plants. Acceptable methods and techniques for seismic qualification of CANDU nuclear power plants to mitigate the effects or the consequences of earthquakes are also described. (auth)

  8. Expedient Syntheses of Neutral and Cationic Au(I)–NHC Complexes

    KAUST Repository

    Veenboer, Richard M. P.

    2017-09-08

    The synthesis and isolation of gold(I) precatalysts often requires the generation of several isolable intermediates as well as numerous purification steps. New protocols for the expedient synthesis of neutral [Au(OH)(NHC)] and [Au(CH2COCH3)(NHC)] species from [AuCl(NHC)] or [AuCl(DMS)] precursors bearing a variety of N-heterocyclic carbene (NHC) ligands are presented. These methods can be employed in a telescoping manner for the synthesis of catalytically relevant [Au(NTf2)(NHC)] and [Au(NHC)(NCCH3)][BF4] complexes. These attractive methods are straightforward and practical leading to various complexes in high isolated yields and purity.

  9. Expedient Syntheses of Neutral and Cationic Au(I)–NHC Complexes

    KAUST Repository

    Veenboer, Richard M. P.; Gasperini, Danila; Nahra, Fady; Cordes, David B.; Slawin, Alexandra M. Z.; Cazin, Catherine S. J.; Nolan, Steven P.

    2017-01-01

    The synthesis and isolation of gold(I) precatalysts often requires the generation of several isolable intermediates as well as numerous purification steps. New protocols for the expedient synthesis of neutral [Au(OH)(NHC)] and [Au(CH2COCH3)(NHC)] species from [AuCl(NHC)] or [AuCl(DMS)] precursors bearing a variety of N-heterocyclic carbene (NHC) ligands are presented. These methods can be employed in a telescoping manner for the synthesis of catalytically relevant [Au(NTf2)(NHC)] and [Au(NHC)(NCCH3)][BF4] complexes. These attractive methods are straightforward and practical leading to various complexes in high isolated yields and purity.

  10. Interpreting complex data by methods of recognition and classification in an automated system of aerogeophysical material processing

    Energy Technology Data Exchange (ETDEWEB)

    Koval', L.A.; Dolgov, S.V.; Liokumovich, G.B.; Ovcharenko, A.V.; Priyezzhev, I.I.

    1984-01-01

    The ASOM-AGS/YeS system for automated processing of aerogeophysical data is equipped for complex interpretation of multichannel measurements. Algorithms of factor analysis, automatic classification, and an apparatus of a priori specified (selected) decision rules are used. The areas of effect of these procedures can be initially limited by the specified geological information. The possibilities of the method are demonstrated by the results of automated processing of airborne gamma-spectrometric measurements in the region of a known copper-porphyry occurrence in Kazakhstan. After processing by the principal-components method, this ore deposit was clearly marked by a complex aureole of independent factors: U (severe increase), Th (noticeable increase), K (decrease).

  11. Calculation of seismic response of a flexible rotor by complex modal method, 1

    International Nuclear Information System (INIS)

    Azuma, Takao; Saito, Shinobu

    1984-01-01

    In rotary machines, during earthquakes, whether the rotating part and stationary part touch and whether the bearings and seals are damaged are important problems. In order to examine these problems, it is necessary to analyze the seismic response of a rotary shaft, or sometimes of a casing system, but the conventional analysis methods are unsatisfactory. Accordingly, in the case of a general shaft system supported on slide bearings and subject to gyroscopic effects, the complex modal method must be used. This calculation method is explained in detail in Lancaster's book; however, when it is applied to the seismic response of rotary shafts, the calculation time differs considerably according to the method of final integration. In this study, good results were obtained with a method that did not depend on numerical integration. The equation of motion and its solution, the displacement vector of a foundation, the verification of the calculation program and an example of calculating the seismic response of two coupled rotor shafts are reported. (Kako, I.)

  12. Recruitment of Mediator Complex by Cell Type and Stage-Specific Factors Required for Tissue-Specific TAF Dependent Gene Activation in an Adult Stem Cell Lineage.

    Science.gov (United States)

    Lu, Chenggang; Fuller, Margaret T

    2015-12-01

    Onset of terminal differentiation in adult stem cell lineages is commonly marked by robust activation of new transcriptional programs required to make the appropriate differentiated cell type(s). In the Drosophila male germ line stem cell lineage, the switch from proliferating spermatogonia to spermatocytes is accompanied by one of the most dramatic transcriptional changes in the fly, as over 1000 new transcripts turn on in preparation for meiosis and spermatid differentiation. Here we show that function of the coactivator complex Mediator is required for activation of hundreds of new transcripts in the spermatocyte program. Mediator appears to act in a sequential hierarchy, with the testis meiotic arrest complex (tMAC), a cell type specific form of the Mip/dREAM general repressor, required to recruit Mediator subunits to the chromatin, and Mediator function required to recruit the testis TAFs (tTAFs), spermatocyte specific homologs of subunits of TFIID. Mediator, tMAC and the tTAFs co-regulate expression of a major set of spermatid differentiation genes. The Mediator subunit Med22 binds the tMAC component Topi when the two are coexpressed in S2 cells, suggesting direct recruitment. Loss of Med22 function in spermatocytes causes meiosis I maturation arrest and male infertility, similar to loss of function of the tMAC subunits or the tTAFs. Our results illuminate how cell type specific versions of the Mip/dREAM complex and the general transcription machinery cooperate to drive selective gene activation during differentiation in stem cell lineages.

  13. SIMPL enhancement of tumor necrosis factor-α dependent p65-MED1 complex formation is required for mammalian hematopoietic stem and progenitor cell function.

    Directory of Open Access Journals (Sweden)

    Weina Zhao

    Full Text Available Significant insight into the signaling pathways leading to activation of the Rel transcription factor family, collectively termed NF-κB, has been gained. Less well understood is how subsets of NF-κB-dependent genes are regulated in a signal-specific manner. The SIMPL protein (signaling molecule that interacts with mouse pelle-like kinase) is required for full Tumor Necrosis Factor-α (TNFα)-induced NF-κB activity. We show that SIMPL is required for steady-state hematopoiesis and for the expression of a subset of TNFα-induced genes whose products regulate hematopoietic cell activity. To gain insight into the mechanism through which SIMPL modulates gene expression we focused on the Tnf gene, an immune response regulator required for steady-state hematopoiesis. In response to TNFα, SIMPL localizes to the Tnf gene promoter, where it modulates the initiation of Tnf gene transcription. SIMPL binding partners identified by mass spectrometry include proteins involved in transcription, and the interaction between SIMPL and MED1 was characterized in more detail. In response to TNFα, SIMPL is found in p65-MED1 complexes, where SIMPL enhances p65/MED1/SIMPL complex formation. Together our results indicate that SIMPL functions as a TNFα-dependent p65 co-activator by facilitating the recruitment of MED1 to p65-containing transcriptional complexes to control the expression of a subset of TNFα-induced genes.

  14. A general method for computing the total solar radiation force on complex spacecraft structures

    Science.gov (United States)

    Chan, F. K.

    1981-01-01

    The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.
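
A common concrete realization of such facet-sum methods (not necessarily the author's formulation) is the flat-plate model: each facet contributes a force depending on its area, orientation and specular/diffuse reflectivities, and the total is the sum over illuminated facets, ignoring self-shadowing. A sketch with hypothetical facet data:

```python
P0 = 4.56e-6  # N/m^2, solar radiation pressure at 1 AU

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, k): return [x * k for x in a]
def add(a, b): return [x + y for x, y in zip(a, b)]

def srp_force(facets, sun_dir):
    """Sum the flat-plate radiation force over facets given as
    (area, outward normal, specular rho_s, diffuse rho_d)."""
    F = [0.0, 0.0, 0.0]
    for area, n, rho_s, rho_d in facets:
        cos_t = dot(n, sun_dir)
        if cos_t <= 0.0:
            continue  # facet faces away from the Sun
        term = add(scale(sun_dir, 1.0 - rho_s),
                   scale(n, 2.0 * (rho_s * cos_t + rho_d / 3.0)))
        F = add(F, scale(term, -P0 * area * cos_t))
    return F

# hypothetical two-facet model: absorbing bus face + mostly specular panel
facets = [(4.0, [1.0, 0.0, 0.0], 0.0, 0.0),
          (10.0, [1.0, 0.0, 0.0], 0.8, 0.1)]
print(srp_force(facets, [1.0, 0.0, 0.0]))  # pushed away from the Sun
```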

  15. Complex dynamical invariants for two-dimensional complex potentials

    Indian Academy of Sciences (India)

    Abstract. Complex dynamical invariants are searched out for two-dimensional complex potentials using the rationalization method within the framework of an extended complex phase space characterized by x = x1 + ip3, y = x2 + ip4, px = p1 + ix3, py = p2 + ix4. It is found that the cubic oscillator and shifted harmonic oscillator ...

  16. The influence of atomic number on the complex formation constants by visible spectrophotometric method

    International Nuclear Information System (INIS)

    Samin; Kris-Tri-Basuki; Farida-Ernawati

    1996-01-01

    The influence of atomic number on complex formation constants, and its application, has been studied by a visible spectrophotometric method. Complex compounds of Y, Nd, Sm and Gd with alizarin red sulfonic acid were prepared in the mole fraction range of 0.20 - 0.53 and the pH range of 3.5 - 5. The optimum conditions for complex formation were found in the mole fraction range of 0.30 - 0.53 and the pH range of 3.75 - 5, at a total concentration of 0.00030 M. The formation constants (β) of the alizarin red S complexes, obtained by the continued variation and matrix disintegration techniques, were β = (7.00 ± 0.64)x10^9 for 39Y, (4.09 ± 0.34)x10^8 for 60Nd, (7.26 ± 0.42)x10^8 for 62Sm and (8.38 ± 0.70)x10^8 for 64Gd. Among Nd, Sm and Gd the formation constant thus increases with atomic number, while Y, with the smallest atomic number (39), has the largest formation constant. The complex compounds can be used for sample analysis, with detection limits of Y: 2.2x10^-5 M, Nd: 2.9x10^-5 M, Sm: 2.6x10^-5 M and Gd: 2.4x10^-5 M. The sensitivity of analysis is Y>Gd>Sm>Nd. The Y2O3 product obtained from xenotime sand contains 98.96 ± 1.40 % Y2O3, and the filtrate (product of monazite sand) contains Nd: 0.27 ± 0.002 M.
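
The continued (continuous) variation technique used above locates the stoichiometry from the mole fraction at which the complex concentration, and hence the absorbance, peaks. A sketch for a 1:1 complex with a formation constant of the reported order; all numbers are illustrative:

```python
import math

def complex_conc(x, C, K):
    """[ML] at metal mole fraction x, total concentration C, for a 1:1
    complex with formation constant K (physical root of the quadratic)."""
    m, l = x * C, (1 - x) * C
    # K = c / ((m - c)(l - c))  =>  K c^2 - (K(m + l) + 1) c + K m l = 0
    b = K * (m + l) + 1
    return (b - math.sqrt(b * b - 4 * K * K * m * l)) / (2 * K)

C, K = 3.0e-4, 7.0e9            # totals and constant of the order reported
fractions = [i / 20 for i in range(1, 20)]
job = [complex_conc(x, C, K) for x in fractions]  # ~ absorbance (Job plot)
peak = fractions[job.index(max(job))]
print(peak)  # 0.5 -> 1:1 stoichiometry
```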

  17. Unraveling chaotic attractors by complex networks and measurements of stock market complexity

    International Nuclear Information System (INIS)

    Cao, Hongduo; Li, Ying

    2014-01-01

    We present a novel method for measuring the complexity of a time series by unraveling a chaotic attractor modeled on complex networks. The complexity index R, which can potentially be exploited for prediction, has a similar meaning to the Kolmogorov complexity (calculated from the Lempel–Ziv complexity), and is an appropriate measure of a series' complexity. The proposed method is used to research the complexity of the world's major capital markets. None of these markets are completely random, and they have different degrees of complexity, both over the entire length of their time series and at a level of detail. However, developing markets differ significantly from mature markets. Specifically, the complexity of mature stock markets is stronger and more stable over time, whereas developing markets exhibit relatively low and unstable complexity over certain time periods, implying a stronger long-term price memory process

  18. Unraveling chaotic attractors by complex networks and measurements of stock market complexity.

    Science.gov (United States)

    Cao, Hongduo; Li, Ying

    2014-03-01

    We present a novel method for measuring the complexity of a time series by unraveling a chaotic attractor modeled on complex networks. The complexity index R, which can potentially be exploited for prediction, has a similar meaning to the Kolmogorov complexity (calculated from the Lempel-Ziv complexity), and is an appropriate measure of a series' complexity. The proposed method is used to research the complexity of the world's major capital markets. None of these markets are completely random, and they have different degrees of complexity, both over the entire length of their time series and at a level of detail. However, developing markets differ significantly from mature markets. Specifically, the complexity of mature stock markets is stronger and more stable over time, whereas developing markets exhibit relatively low and unstable complexity over certain time periods, implying a stronger long-term price memory process.
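
    The complexity index R above is calibrated against the Lempel-Ziv complexity. As a minimal illustration of that building block (not the authors' index itself), the classic LZ76 phrase count of a median-binarized series can be sketched as follows; the `binarize` rule and function names are illustrative choices:

```python
def binarize(series):
    """Symbolize a numeric series: 1 above the median element, else 0."""
    m = sorted(series)[len(series) // 2]
    return "".join("1" if x > m else "0" for x in series)

def lz_complexity(s):
    """Lempel-Ziv (LZ76) phrase count of a symbol string,
    using the Kaspar-Schuster scanning scheme."""
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    n = len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:            # reproducible to the end: final phrase
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:               # no earlier match: a new phrase starts
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c
```

    A constant series "0"*20 scores 2 and a periodic "01"*10 scores 3, while the classic example "0001101001000101" scores 6, so less regular series score higher.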

  19. Assembling large, complex environmental metagenomes

    Energy Technology Data Exchange (ETDEWEB)

    Howe, A. C. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Jansson, J. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Malfatti, S. A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tringe, S. G. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tiedje, J. M. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Brown, C. T. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Computer Science and Engineering

    2012-12-28

    The large volumes of sequencing data required to sample complex environments deeply pose new challenges to sequence analysis approaches. De novo metagenomic assembly effectively reduces the total amount of data to be analyzed but requires significant computational resources. We apply two pre-assembly filtering approaches, digital normalization and partitioning, to make large metagenome assemblies more computationally tractable. Using a human gut mock community dataset, we demonstrate that these methods result in assemblies nearly identical to assemblies from unprocessed data. We then assemble two large soil metagenomes from matched Iowa corn and native prairie soils. The predicted functional content and phylogenetic origin of the assembled contigs indicate significant taxonomic differences despite similar function. The assembly strategies presented are generic and can be extended to any metagenome; full source code is freely available under a BSD license.
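
    Digital normalization, one of the two pre-assembly filters mentioned, discards reads whose k-mer coverage is already saturated. A toy sketch of the idea follows; an exact dict-based counter stands in for the probabilistic counting used by real implementations, and the read data, `k`, and `cutoff` are illustrative:

```python
from collections import defaultdict
from statistics import median

def digital_normalization(reads, k=4, cutoff=3):
    """Keep a read only if the median abundance of its k-mers,
    counted over the reads kept so far, is still below the coverage
    cutoff; otherwise the read adds little new information."""
    counts = defaultdict(int)
    kept = []
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        if not kmers:
            continue
        if median(counts[km] for km in kmers) < cutoff:
            kept.append(read)
            for km in kmers:
                counts[km] += 1
    return kept
```

    Ten identical reads collapse to the cutoff number of copies, while a read covering novel k-mers is always kept, which is how the total data volume shrinks without losing low-coverage content.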

  20. Complex molecular orbital method: open-shell theory

    International Nuclear Information System (INIS)

    Hendekovic, J.

    1976-01-01

    A single-determinant open-shell formalism for complex molecular orbitals is developed. An iterative algorithm for solving the resulting secular equations is constructed, based on a sequence of similarity transformations and matrix triangularizations

  1. A systematic method for identifying vital areas at complex nuclear facilities.

    Energy Technology Data Exchange (ETDEWEB)

    Beck, David Franklin; Hockert, John

    2005-05-01

    Identifying the areas to be protected is an important part of the development of measures for physical protection against sabotage at complex nuclear facilities. In June 1999, the International Atomic Energy Agency published INFCIRC/225/Rev.4, 'The Physical Protection of Nuclear Material and Nuclear Facilities.' This guidance recommends that 'Safety specialists, in close cooperation with physical protection specialists, should evaluate the consequences of malevolent acts, considered in the context of the State's design basis threat, to identify nuclear material, or the minimum complement of equipment, systems or devices to be protected against sabotage.' This report presents a structured, transparent approach for identifying the areas that contain this minimum complement of equipment, systems, and devices to be protected against sabotage that is applicable to complex nuclear facilities. The method builds upon safety analyses to develop sabotage fault trees that reflect sabotage scenarios that could cause unacceptable radiological consequences. The sabotage actions represented in the fault trees are linked to the areas from which they can be accomplished. The fault tree is then transformed (by negation) into its dual, the protection location tree, which reflects the sabotage actions that must be prevented in order to prevent unacceptable radiological consequences. The minimum path sets of this fault tree dual yield, through the area linkage, sets of areas, each of which contains nuclear material, or a minimum complement of equipment, systems or devices that, if protected, will prevent sabotage. This method also provides guidance for the selection of the minimum path set that permits optimization of the trade-offs among physical protection effectiveness, safety impact, cost and operational impact.
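
    The negation step described above can be illustrated on a toy sabotage fault tree. The tuple encoding, event names, and absorption-based minimization below are illustrative, not the report's implementation:

```python
def dual(node):
    """De Morgan dual of a fault tree: swap AND/OR gates, keep leaves."""
    if isinstance(node, str):
        return node
    op, *kids = node
    return ('OR' if op == 'AND' else 'AND',) + tuple(dual(k) for k in kids)

def min_cut_sets(node):
    """Minimal sets of basic events whose joint occurrence triggers the gate."""
    if isinstance(node, str):
        return [frozenset([node])]
    op, *kids = node
    kid_sets = [min_cut_sets(k) for k in kids]
    if op == 'OR':                       # any child's cut set suffices
        combined = [s for ks in kid_sets for s in ks]
    else:                                # AND: one cut set from every child
        combined = [frozenset()]
        for ks in kid_sets:
            combined = [a | b for a in combined for b in ks]
    # absorption: drop duplicates and supersets of other cut sets
    return list(dict.fromkeys(
        s for s in combined if not any(t < s for t in combined)))

def min_path_sets(fault_tree):
    """Minimal path sets = minimal cut sets of the dual tree."""
    return min_cut_sets(dual(fault_tree))
```

    For the toy top event ('OR', ('AND', 'pumpA', 'pumpB'), 'tank'), where sabotage succeeds by disabling both pumps or the tank alone, the minimal path sets are {pumpA, tank} and {pumpB, tank}: protecting all areas linked to either set prevents the unacceptable consequence.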

  2. A new method of surgical navigation for orthognathic surgery: optical tracking guided free-hand repositioning of the maxillomandibular complex.

    Science.gov (United States)

    Li, Biao; Zhang, Lei; Sun, Hao; Shen, Steve G F; Wang, Xudong

    2014-03-01

    In bimaxillary orthognathic surgery, positioning of the maxilla and the mandible is typically accomplished via the 2-splint technique, which can introduce several types of inaccuracy. To overcome the limitations of the 2-splint technique, we developed a new navigation method that guides the surgeon in free-hand repositioning of the maxillomandibular complex as a whole intraoperatively, without the intermediate splint. This preliminary study demonstrated its feasibility. Five patients with dental maxillofacial deformities were enrolled. Before surgery, 3-dimensional planning was conducted and imported into a navigation system. During the operation, a tracker was connected to the osteotomized maxillomandibular complex via a splint. The navigation system tracked the movement of the complex and displayed it on the screen in real time, guiding the surgeon in repositioning the complex. The postoperative result was compared with the plan by analyzing the measured distances between the maxillary landmarks and reference planes, as determined from computed tomography data. The mean absolute errors of the maxillary position were clinically acceptable (<1.0 mm). Preoperative preparation time was reduced to 100 minutes on average. All patients were satisfied with the aesthetic results. This navigation method, which needs no intraoperative image registration, provided a feasible means of transferring the virtual plan to the actual orthognathic surgery. The real-time position of the maxillomandibular complex was displayed on a monitor to visually guide the surgeon in repositioning the complex. With this method, traditional model surgery and the intermediate splint were discarded, and preoperative preparation was simplified.

  3. Simulation of biological flow and transport in complex geometries using embedded boundary/volume-of-fluid methods

    International Nuclear Information System (INIS)

    Trebotich, David

    2007-01-01

    We have developed a simulation capability to model multiscale flow and transport in complex biological systems based on algorithms and software infrastructure developed under the SciDAC APDEC CET. The foundation of this work is a new hybrid fluid-particle method for modeling polymer fluids in irregular microscale geometries that enables long-time simulation of validation experiments. Both continuum viscoelastic and discrete particle representations have been used to model the constitutive behavior of polymer fluids. Complex flow environment geometries are represented on Cartesian grids using an implicit function. Direct simulation of flow in the irregular geometry is then possible using embedded boundary/volume-of-fluid methods without loss of geometric detail. This capability has been used to simulate biological flows in a variety of application geometries including biomedical microdevices, anatomical structures and porous media
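
    The implicit-function representation of geometry mentioned above can be sketched in a few lines: given a level-set function whose sign marks fluid versus solid, each Cartesian cell's fluid volume fraction can be estimated by subsampling. This is a crude stand-in for the geometric moment calculations of a real embedded boundary/volume-of-fluid code; the disc geometry and sampling resolution are illustrative:

```python
def fluid_fraction(phi, x0, y0, h, nsub=8):
    """Estimate the fluid volume fraction of the grid cell
    [x0, x0+h] x [y0, y0+h] by midpoint subsampling of the
    implicit function phi (phi < 0 marks fluid)."""
    inside = 0
    for i in range(nsub):
        for j in range(nsub):
            x = x0 + (i + 0.5) * h / nsub
            y = y0 + (j + 0.5) * h / nsub
            if phi(x, y) < 0.0:
                inside += 1
    return inside / (nsub * nsub)

def disc(x, y):
    """Implicit function of a solid disc of radius 0.3 at (0.5, 0.5);
    negative outside the disc, i.e. in the fluid region."""
    return 0.3 - ((x - 0.5) ** 2 + (y - 0.5) ** 2) ** 0.5
```

    Cells far from the disc report a fraction of 1.0 (regular fluid cells), cells buried inside it report 0.0 (covered cells), and cells straddling the boundary report an intermediate fraction, which is exactly the information a cut-cell discretization needs, with no loss of geometric detail from meshing.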

  4. Introduction to crystal structure determination methods using x-ray diffraction: application to some rare earth complexes

    International Nuclear Information System (INIS)

    Oliveira, M.A. de.

    1986-01-01

    This work is composed by a theoretical introduction studying crystal concept, interaction between X-ray and crystal medium, and methods for determining small molecular structures applied in solution of crystal structures of praseodymium, neodymium and europium complexes with perrhenate and trans - 1,4 - dithiane - 1,4 - dioxide, (TDTD), which general formula is [ Ln (H sub(2) O) sub(4) (η-TDTD) (η'Re O sub(4)) (μ-η sup(2)-TDTD)] sub(n) (Re O sub(4)) sub(2n). nTDTD, where, Ln = Eu, Pr, Nd and methyl-2,6-anhydrous-3-azido-4-0-benzoyl-3-deoxy-α-D-iodo pyranoside. The structure of C sub(14) H sub(15) N sub(3) O sub(5) organic complex was determined using direct methods. (M.C.K.)

  5. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2016-06-03

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  6. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2016-01-01

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.
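
    The ghost-fluid idea of extrapolating the solution across the embedded boundary can be shown in its simplest 1-D form: a linear extrapolation that enforces a Dirichlet wall value (Aslam's approach is a multidimensional, PDE-based generalization of this). The grid size, coefficient, and boundary values below are illustrative:

```python
def step_heat_1d(u, u_wall, nu=0.25):
    """One explicit diffusion step on a cell-centred 1-D grid.
    A ghost cell on the left imposes u = u_wall at the wall face by
    linear extrapolation; a copied ghost cell on the right imposes a
    zero-gradient outflow. Stability needs nu = D*dt/dx**2 <= 0.5."""
    ghost_left = 2.0 * u_wall - u[0]   # extrapolate across the wall
    ghost_right = u[-1]                # zero normal gradient
    padded = [ghost_left] + u + [ghost_right]
    return [padded[i] + nu * (padded[i + 1] - 2.0 * padded[i] + padded[i - 1])
            for i in range(1, len(padded) - 1)]
```

    Starting from u = [0.0]*20 with u_wall = 1.0, repeated steps relax the field toward the wall value, with the boundary condition imposed entirely through the ghost cell rather than a boundary-fitted mesh.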

  7. Bloom: A Relationship Visualization Tool for Complex Networks

    Directory of Open Access Journals (Sweden)

    Frank Horsfall

    2010-07-01

    Full Text Available Faced with an ever-increasing capacity to collect and store data, organizations must find a way to make sense of it to their advantage. Methods are required to simplify the data so that it can inform strategic decisions and help solve problems. Visualization tools are becoming increasingly popular since they can display complex relationships in a simple, visual format. This article describes Bloom, a project at Carleton University to develop an open source visualization tool for complex networks and business ecosystems. It provides an overview of the visualization technology used in the project and demonstrates its potential impact through a case study using real-world data.

  8. Advanced Kalman Filter for Real-Time Responsiveness in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Welch, Gregory Francis [UNC-Chapel Hill/University of Central Florida; Zhang, Jinghe [UNC-Chapel Hill/Virginia Tech

    2014-06-10

    Complex engineering systems pose fundamental challenges in real-time operations and control because they are highly dynamic systems consisting of a large number of elements with severe nonlinearities and discontinuities. Today’s tools for real-time complex system operations are mostly based on steady state models, unable to capture the dynamic nature and too slow to prevent system failures. We developed advanced Kalman filtering techniques and the formulation of dynamic state estimation using Kalman filtering techniques to capture complex system dynamics in aiding real-time operations and control. In this work, we looked at complex system issues including severe nonlinearity of system equations, discontinuities caused by system controls and network switches, sparse measurements in space and time, and real-time requirements of power grid operations. We sought to bridge the disciplinary boundaries between Computer Science and Power Systems Engineering, by introducing methods that leverage both existing and new techniques. While our methods were developed in the context of electrical power systems, they should generalize to other large-scale scientific and engineering applications.
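
    As a reminder of the machinery being extended, a scalar predict-update Kalman filter for a nearly constant state can be sketched as follows; the model and noise variances are illustrative, and the project's filters of course handle large nonlinear systems rather than this toy:

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-4, r=0.25):
    """Scalar Kalman filter for a nearly constant state:
    x_k = x_{k-1} + w (process variance q), z_k = x_k + v (variance r)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows
        gain = p / (p + r)         # Kalman gain
        x = x + gain * (z - x)     # update with the measurement residual
        p = (1.0 - gain) * p      # posterior variance shrinks
        estimates.append(x)
    return estimates
```

    Fed alternating measurements 5.3, 4.7, 5.3, ... the estimate settles near 5.0, the gain shrinking as confidence in the state grows.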

  9. New concepts, requirements and methods concerning the periodic inspection of the CANDU fuel channels

    International Nuclear Information System (INIS)

    Denis, J.R.

    1995-01-01

    Periodic inspection of fuel channels is essential for a proper assessment of the structural integrity of these vital components of the reactor. The development of wet channel technologies for non-destructive examination (NDE) of pressure tubes and the high technical performance and reliability of the CIGAR equipment have led, in less than 10 years, to the accumulation of a very significant volume of data concerning the flaw mechanisms and structural behaviour of the CANDU fuel channels. On this basis, a new form of the CAN/CSA-N285.4 Standard for Periodic Inspection of CANDU Nuclear Power Plant components was developed, introducing new concepts and requirements in accordance with the powerful NDE methods now available. This paper presents these concepts and requirements, and discusses the NDE methods, presently in use or under development, that satisfy them. Specific features of the fuel channel inspections of Cernavoda NGS Unit 1 are also discussed. (author)

  10. Regression methods for medical research

    CERN Document Server

    Tai, Bee Choo

    2013-01-01

    Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

  11. Complex nuclear geophysical methods and apparatus to increase the efficiency of prospecting extracting and processing nonradioactive minerals as examplified by tin ores

    International Nuclear Information System (INIS)

    Baldin, S.A.; Voloshchuk, S.N.; Egiazarov, B.G.; Zernov, L.V.; Luchin, I.A.; Matveev, V.V.; Pukhal'skij, L.Ch.; Chesnokov, N.I.

    1979-01-01

    Described is a complex of nuclear geophysical methods and apparatus with whose help problems of industrial control are solved at all stages of the ore-concentrating industry. The complex uses γ-resonance and X-ray radiometric methods and apparatus that provide rapid and no less accurate determination of total tin and of tin in the form of cassiterite. Devices developed on the basis of semiconductor spectrometers, used both under industrial conditions and in production regimes, are applied for the first time in the practice of the ore-concentrating industry. The complex has a substantial positive effect on the technical and economic indices of the industry, allowing more effective extraction and processing technologies to be used. Similar complexes may be developed for other kinds of nonradioactive minerals

  12. Mixed-Methods Research in a Complex Multisite VA Health Services Study: Variations in the Implementation and Characteristics of Chiropractic Services in VA

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2013-01-01

    Full Text Available Maximizing the quality and benefits of newly established chiropractic services represents an important policy and practice goal for the US Department of Veterans Affairs’ healthcare system. Understanding the implementation process and characteristics of new chiropractic clinics and the determinants and consequences of these processes and characteristics is a critical first step in guiding quality improvement. This paper reports insights and lessons learned regarding the successful application of mixed methods research approaches—insights derived from a study of chiropractic clinic implementation and characteristics, Variations in the Implementation and Characteristics of Chiropractic Services in VA (VICCS). Challenges and solutions are presented in areas ranging from selection and recruitment of sites and participants to the collection and analysis of varied data sources. The VICCS study illustrates the importance of several factors in successful mixed-methods approaches, including (1) the importance of a formal, fully developed logic model to identify and link data sources, variables, and outcomes of interest to the study’s analysis plan and its data collection instruments and codebook and (2) ensuring that data collection methods, including mixed-methods, match study aims. Overall, successful application of a mixed-methods approach requires careful planning, frequent trade-offs, and complex coding and analysis.

  13. The effects of micro-implant assisted rapid palatal expansion (MARPE) on the nasomaxillary complex--a finite element method (FEM) analysis.

    Science.gov (United States)

    MacGinnis, Matt; Chu, Howard; Youssef, George; Wu, Kimberley W; Machado, Andre Wilson; Moon, Won

    2014-08-29

    Orthodontic palatal expansion appliances have been widely used with satisfactory and, most often, predictable clinical results. Recently, clinicians have successfully utilized micro-implants with palatal expander designs, anchored to the palate, to achieve more efficient skeletal expansion and to decrease undesired dental effects. The purpose of the study was to use the finite element method (FEM) to determine the stress distribution and displacement within the craniofacial complex when simulated conventional and micro-implant-assisted rapid palatal expansion (MARPE) forces are applied to the maxilla. The simulated stress distribution produced within the palate and maxillary buttresses, together with the displacement and rotation of the maxilla, could then be analyzed to determine whether micro-implants aid in skeletal expansion. A three-dimensional (3D) mesh model of the cranium with associated maxillary sutures was developed using computed tomography (CT) images and Mimics modeling software. To compare transverse expansion stresses in rapid palatal expansion (RPE) and MARPE, expansion forces were applied at differing points on the maxilla and evaluated with ANSYS simulation software. The stresses arising from forces applied to the maxillary teeth are distributed mainly along the trajectories of the three maxillary buttresses. In comparison, the MARPE showed tension and compression directed to the palate, with less rotation and tipping of the maxillary complex. In addition, the conventional hyrax displayed a rotation of the maxilla around the teeth, as opposed to around the midpalatal suture in the MARPE. These data suggest that the MARPE causes the maxilla to bend laterally while preventing unwanted rotation of the complex.
    In conclusion, the MARPE may be beneficial for hyperdivergent patients, or those who have already experienced closure of the midpalatal suture, who require palatal expansion and would be adversely affected by buccal tipping of the teeth.

  14. RESEARCH OF PROBLEMS OF DESIGN OF COMPLEX TECHNICAL PROVIDING AND THE GENERALIZED MODEL OF THEIR DECISION

    Directory of Open Access Journals (Sweden)

    A. V. Skrypnikov

    2015-01-01

    Full Text Available In this work, the general ideas of V. I. Skurikhin's method are developed with the specified features taken into account, and the analysis and synthesis of a complex of technical means are considered in more detail, bringing them to a level suitable for use in the engineering practice of designing information management systems. A general system approach to selecting the technical means of an information management system is formulated, and a general technique is developed for the system analysis and synthesis of the complex of technical means and its subsystems, aimed at achieving an extreme value of the criterion of efficiency of functioning of the technical complex of the information management system. The main attention is paid to the applied side of this research: the definition of criteria for the quality of functioning of a technical complex, the development of methods for analyzing the information base of the information management system and for defining requirements for the technical means, and methods for the structural synthesis of the main subsystems of complex technical support. The aim, then, is to study complex technical support for information management systems on the basis of a system approach, and to develop analysis and synthesis methods suitable for use in the engineering practice of system design. A well-known paradox of such development is that the parameters of the system, and consequently the requirements for the hardware complex, cannot be strictly justified before the algorithms and programs are developed, and vice versa.
    A possible way of overcoming these difficulties is to forecast the structure and parameters of the hardware complex for particular management information systems at the early stages of development, with subsequent refinement and

  15. A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays.

    Science.gov (United States)

    Sun, Fenggang; Gao, Bin; Chen, Lizhen; Lan, Peng

    2016-08-25

    The problem of direction-of-arrival (DOA) estimation is investigated for a co-prime array consisting of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, the true DOAs are mapped into several equivalent angles impinging on a traditional uniform linear array with half-wavelength spacing. By applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated and the candidate DOAs are recovered according to the relationship between equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity-performance tradeoff than other existing methods.
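
    The mapping between true and equivalent angles can be illustrated in the sine domain: a subarray with spacing of n half-wavelengths observes sin θ only modulo 2/n, so each subarray yields a set of candidate directions, and intersecting the candidate sets of the two co-prime subarrays recovers the true DOA. A numeric sketch with ideal, noise-free estimates (the co-prime factors 3 and 5 and all function names are illustrative):

```python
import math

def aliased_sine(sin_true, n):
    """What a subarray with spacing n*lambda/2 actually measures: the
    inter-element phase wrapped to [-pi, pi), mapped back to the sine domain."""
    phase = math.pi * n * sin_true
    wrapped = (phase + math.pi) % (2.0 * math.pi) - math.pi
    return wrapped / (math.pi * n)

def candidates(sin_eq, n):
    """All directions (in the sine domain) consistent with one subarray."""
    cands = set()
    for k in range(-n, n + 1):
        s = sin_eq + 2.0 * k / n
        if -1.0 <= s <= 1.0:
            cands.add(round(s, 6))
    return cands

def resolve(sin_true, n1=3, n2=5):
    """Intersect the candidate sets of the two co-prime subarrays."""
    return (candidates(aliased_sine(sin_true, n1), n1)
            & candidates(aliased_sine(sin_true, n2), n2))
```

    For sin θ = 0.37 the 3-spaced subarray alone admits three candidates and the 5-spaced one five, but their intersection is the single true value, which is why co-primality of the spacings matters.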

  16. Quantifying complexity in translational research: an integrated approach.

    Science.gov (United States)

    Munoz, David A; Nembhard, Harriet Black; Kraschnewski, Jennifer L

    2014-01-01

    The purpose of this paper is to quantify complexity in translational research. The impact of major operational steps and technical requirements is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. A three-phase integrated quality function deployment (QFD) and analytic hierarchy process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Generally, the evidence generated was valuable for understanding various components in translational research. In particular, the authors found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. As the method is mainly based on subjective opinion, some may argue that the results are biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Current conceptual models in translational research provide little or no guidance for assessing complexity. The proposed method aims to fill this gap. Additionally, the literature review covers various features that have not previously been explored in translational research.
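
    In AHP, the consistency ratio mentioned above is computed from the principal eigenvalue of the pairwise comparison matrix: CI = (λmax − n)/(n − 1), divided by Saaty's random index RI for that matrix size. A small sketch using power iteration (the matrices are illustrative, not the study's judgements):

```python
SAATY_RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def principal_eigenvalue(A, iters=500):
    """Perron eigenvalue of a positive matrix by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)               # entries stay positive
        v = [x / lam for x in w]
    return lam

def consistency_ratio(A):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    ci = (principal_eigenvalue(A) - n) / (n - 1)
    return ci / SAATY_RI[n]
```

    A perfectly consistent matrix built from a weight vector gives CR ≈ 0, while a cyclic set of judgements gives CR far above the usual 0.1 acceptance threshold, signalling that the expert opinions should be revisited.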

  17. Complex Functions with GeoGebra

    Science.gov (United States)

    Breda, Ana Maria D'azevedo; Dos Santos, José Manuel Dos Santos

    2016-01-01

    Complex functions, generally feature some interesting peculiarities, seen as extensions of real functions. The visualization of complex functions properties usually requires the simultaneous visualization of two-dimensional spaces. The multiple Windows of GeoGebra, combined with its ability of algebraic computation with complex numbers, allow the…

  18. Requirements management at Westinghouse Electric Company

    International Nuclear Information System (INIS)

    Gustavsson, Henrik

    2014-01-01

    Field studies and surveys made in various industry branches support the Westinghouse opinion that qualitative systems engineering and requirements management have a high value in the development of complex systems and products. Two key issues causing overspending and schedule delays in projects are underestimation of complexity and misunderstandings between the different sub-project teams. These issues often arise when a project jumps too early into detail design. Good requirements management practice before detail design helps the project teams avoid such issues. Westinghouse therefore puts great effort into requirements management. The requirements management methodology at Westinghouse rests primarily on four key cornerstones: 1 - Iterative team work when developing requirements specifications, 2 - Id number tags on requirements, 3 - Robust change routine, and 4 - Requirements Traceability Matrix. (authors)
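
    The fourth cornerstone, a Requirements Traceability Matrix, is essentially a check that every id-tagged requirement links forward to design and verification artifacts. A toy sketch of that check (the record layout and ids are invented for illustration, not Westinghouse's format):

```python
# Hypothetical id-tagged requirement records: each requirement
# should trace forward to design items and to verification tests.
requirements = {
    "REQ-001": {"design": ["DES-010"], "tests": ["TST-100"]},
    "REQ-002": {"design": ["DES-011"], "tests": []},
    "REQ-003": {"design": [], "tests": ["TST-101"]},
}

def untraced(reqs):
    """Requirement ids missing a design link or a verification link."""
    return sorted(rid for rid, links in reqs.items()
                  if not links["design"] or not links["tests"])
```

    Here untraced(requirements) flags REQ-002 (no test) and REQ-003 (no design), the kind of gap a traceability review surfaces before a project jumps into detail design.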

  19. Recruitment of Mediator Complex by Cell Type and Stage-Specific Factors Required for Tissue-Specific TAF Dependent Gene Activation in an Adult Stem Cell Lineage.

    Directory of Open Access Journals (Sweden)

    Chenggang Lu

    2015-12-01

    Full Text Available Onset of terminal differentiation in adult stem cell lineages is commonly marked by robust activation of new transcriptional programs required to make the appropriate differentiated cell type(s). In the Drosophila male germ line stem cell lineage, the switch from proliferating spermatogonia to spermatocytes is accompanied by one of the most dramatic transcriptional changes in the fly, as over 1000 new transcripts turn on in preparation for meiosis and spermatid differentiation. Here we show that function of the coactivator complex Mediator is required for activation of hundreds of new transcripts in the spermatocyte program. Mediator appears to act in a sequential hierarchy, with the testis meiotic arrest complex (tMAC), a cell type specific form of the Mip/dREAM general repressor, required to recruit Mediator subunits to the chromatin, and Mediator function required to recruit the testis TAFs (tTAFs), spermatocyte specific homologs of subunits of TFIID. Mediator, tMAC and the tTAFs co-regulate expression of a major set of spermatid differentiation genes. The Mediator subunit Med22 binds the tMAC component Topi when the two are coexpressed in S2 cells, suggesting direct recruitment. Loss of Med22 function in spermatocytes causes meiosis I maturation arrest and male infertility, similar to loss of function of the tMAC subunits or the tTAFs. Our results illuminate how cell type specific versions of the Mip/dREAM complex and the general transcription machinery cooperate to drive selective gene activation during differentiation in stem cell lineages.

  20. A theoretical study of the complexes of N2O with H+, Li+, and HF using various correlation methods

    International Nuclear Information System (INIS)

    Del Bene, J.E.; Stahlberg, E.A.; Shavitt, I.

    1990-01-01

    Binding energies for complexes of N₂O with the acids H⁺, Li⁺, and HF have been computed using the following correlation methods: many-body (Møller-Plesset) perturbation theory at second (MP2), third (MP3), and fourth (MP4) order; the quadratic CI method with single and double excitations (QCISD) and with noniterative inclusion of triple excitations (QCISD(T)); the linearized coupled-cluster method (LCCM); the averaged coupled-pair functional (ACPF); configuration interaction with all single and double excitations (CISD); and CISD with the Davidson and Pople corrections. The convergence of the Møller-Plesset expansion is erratic, predicting that the terminal nitrogen is the preferred binding site for the complexes at the MP2 and MP4 levels, in disagreement with Hartree-Fock, MP3 and all other models (including the infinite-order QCI). The effect of triple excitations at MP4 and QCI is to destabilize complexes bound at O and stabilize those bound at N, but this effect is greatly overestimated at MP4 relative to QCI. Except for the LCCM result for N-protonated N₂O, ACPF and LCCM binding energies are similar to the QCISD values. The size-consistency error in the ACPF binding energies of the complexes of N₂O with HF is about 0.5 kcal/mol. The CISD size-consistency error for these complexes is 2-3 kcal/mol, leading to negative binding energies when computed relative to isolated N₂O and HF

  1. Typical Complexity Numbers

    Indian Academy of Sciences (India)

    Typical complexity numbers: say 1000 tones, 100 users, and a transmission every 10 msec. Full crosstalk cancellation requires a matrix multiplication of order 100*100 for all the tones: 1000*100*100*100 operations every second for the ...
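
    The operation count above can be checked with a couple of lines:

```python
tones, users = 1000, 100
transmissions_per_second = 100          # one transmission every 10 msec

ops_per_tone = users * users            # applying a 100x100 matrix per tone
ops_per_second = tones * ops_per_tone * transmissions_per_second

assert ops_per_second == 10**9          # i.e. 1000*100*100*100
```

    So full cancellation costs on the order of a billion operations per second, which is why the full crosstalk matrix is rarely applied naively.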

  2. Reduced complexity and latency for a massive MIMO system using a parallel detection algorithm

    Directory of Open Access Journals (Sweden)

    Shoichi Higuchi

    2017-09-01

    Full Text Available In recent years, massive MIMO systems have been widely researched to realize high-speed data transmission. Since massive MIMO systems use a large number of antennas, they require huge complexity to detect the signal. In this paper, we propose a novel detection method for massive MIMO using parallel detection with maximum likelihood detection with QR decomposition and the M-algorithm (QRM-MLD) to reduce the complexity and latency. The proposed scheme obtains an R matrix after permutation of the H matrix and QR decomposition. The R matrix is then reduced using Gauss–Jordan elimination. By using the modified R matrix, the proposed method can detect the transmitted signal using parallel detection. Simulation results show that the proposed scheme achieves reduced complexity and latency with only slight degradation of the bit error rate (BER) performance compared with the conventional method.
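
    The paper's QRM-MLD pipeline (permutation, QR decomposition, Gauss–Jordan reduction, parallel M-algorithm search) is not reproduced here; the following is only a minimal QR-plus-back-substitution detection sketch for a small noiseless BPSK MIMO system, to illustrate the role the R matrix plays in detection:

```python
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4                                   # transmit / receive antennas
H = rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))
x = rng.choice([-1.0, 1.0], size=nt)          # BPSK symbols
y = H @ x                                     # noiseless received vector

# QR decomposition turns joint detection into layer-by-layer
# back-substitution on the upper-triangular R matrix.
Q, R = np.linalg.qr(H)
z = Q.conj().T @ y                            # z = R @ x

x_hat = np.zeros(nt)
for i in reversed(range(nt)):                 # detect from the last layer up
    s = z[i] - R[i, i + 1:] @ x_hat[i + 1:]
    x_hat[i] = 1.0 if (s / R[i, i]).real >= 0 else -1.0

print(np.array_equal(x_hat, x))               # True in the noiseless case
```

    With noise, each layer's hard decision can propagate errors downstream, which is why the paper keeps M candidates per layer (the M-algorithm) instead of a single hard decision.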

  3. Solid phase excitation-emission fluorescence method for the classification of complex substances: Cortex Phellodendri and other traditional Chinese medicines as examples.

    Science.gov (United States)

    Gu, Yao; Ni, Yongnian; Kokot, Serge

    2012-09-13

    A novel, simple and direct fluorescence method for analysis of complex substances and their potential substitutes has been researched and developed. Measurements involved excitation and emission (EEM) fluorescence spectra of powdered, complex, medicinal herbs, Cortex Phellodendri Chinensis (CPC) and the similar Cortex Phellodendri Amurensis (CPA); these substances were compared and discriminated from each other and the potentially adulterated samples (Caulis mahoniae (CM) and David poplar bark (DPB)). Different chemometrics methods were applied for resolution of the complex spectra, and the excitation spectra were found to be the most informative; only the rank-ordering PROMETHEE method was able to classify the samples with single ingredients (CPA, CPC, CM) or those with binary mixtures (CPA/CPC, CPA/CM, CPC/CM). Interestingly, it was essential to use the geometrical analysis for interactive aid (GAIA) display for a full understanding of the classification results. However, these two methods, like the other chemometrics models, were unable to classify composite spectral matrices consisting of data from samples of single ingredients and binary mixtures; this suggested that the excitation spectra of the different samples were very similar. However, the method is useful for classification of single-ingredient samples and, separately, their binary mixtures; it may also be applied for similar classification work with other complex substances.

  4. Automated Planning Enables Complex Protocols on Liquid-Handling Robots.

    Science.gov (United States)

    Whitehead, Ellis; Rudolf, Fabian; Kaltenbach, Hans-Michael; Stelling, Jörg

    2018-03-16

    Robotic automation in synthetic biology is especially relevant for liquid handling to facilitate complex experiments. However, research tasks that are not highly standardized are still rarely automated in practice. Two main reasons for this are the substantial investments required to translate molecular biological protocols into robot programs, and the fact that the resulting programs are often too specific to be easily reused and shared. Recent developments of standardized protocols and dedicated programming languages for liquid-handling operations addressed some aspects of ease-of-use and portability of protocols. However, either they focus on simplicity, at the expense of enabling complex protocols, or they entail detailed programming, with corresponding skills and efforts required from the users. To reconcile these trade-offs, we developed Roboliq, a software system that uses artificial intelligence (AI) methods to integrate (i) generic formal, yet intuitive, protocol descriptions, (ii) complete, but usually hidden, programming capabilities, and (iii) user-system interactions to automatically generate executable, optimized robot programs. Roboliq also enables high-level specifications of complex tasks with conditional execution. To demonstrate the system's benefits for experiments that are difficult to perform manually because of their complexity, duration, or time-critical nature, we present three proof-of-principle applications for the reproducible, quantitative characterization of GFP variants.

  5. So how do you know you have a macromolecular complex?

    International Nuclear Information System (INIS)

    Dafforn, Timothy R.

    2007-01-01

    Structures of protein complexes offer some of the most interesting insights into biological processes. In this article, the methods required to show that the complex observed is the physiological one are investigated. Protein in crystal form is at an extremely high concentration and yet retains the complex secondary structure that defines an active protein. The protein crystal itself is made up of a repeating lattice of protein–protein and protein–solvent interactions. The problem that confronts any crystallographer is to identify those interactions that represent physiological interactions and those that do not. This review explores the tools that are available to provide such information using the original crystal liquor as a sample. The review is aimed at postgraduate and postdoctoral researchers who may well be coming up against this problem for the first time. Techniques are discussed that will provide information on the stoichiometry of complexes as well as low-resolution information on complex structure. Together, these data will help to identify the physiological complex

  6. On conjugate gradient type methods and polynomial preconditioners for a class of complex non-Hermitian matrices

    Science.gov (United States)

    Freund, Roland

    1988-01-01

    Conjugate gradient type methods are considered for the solution of large linear systems Ax = b with complex coefficient matrices of the type A = T + iσI, where T is Hermitian and σ a real scalar. Three different conjugate gradient type approaches, with iterates defined by a minimal residual property, a Galerkin type condition, and a Euclidean error minimization, respectively, are investigated. In particular, numerically stable implementations based on the ideas behind Paige and Saunders' SYMMLQ and MINRES for real symmetric matrices are proposed. Error bounds for all three methods are derived. It is shown how the special shift structure of A can be preserved by using polynomial preconditioning, and results on the optimal choice of the polynomial preconditioner are given. Some numerical experiments for matrices arising from finite difference approximations to the complex Helmholtz equation are also reported.
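
    The shift structure A = T + iσI means the Krylov subspaces of A and of the Hermitian part T coincide, so a single Lanczos recursion on T suffices and the shift only enters the small projected system. A minimal Galerkin-type sketch of this idea (not the paper's SYMMLQ/MINRES-style implementations, and without their numerical-stability safeguards):

```python
import numpy as np

def shifted_lanczos_solve(T, sigma, b, m):
    """Galerkin solve of (T + i*sigma*I) x = b via a Lanczos recursion
    on the Hermitian matrix T; the shift i*sigma is applied only to the
    small projected tridiagonal system."""
    n = len(b)
    V = np.zeros((n, m), dtype=complex)    # Lanczos basis vectors
    Tm = np.zeros((m, m), dtype=complex)   # projected tridiagonal matrix
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    w = T @ V[:, 0]
    Tm[0, 0] = np.vdot(V[:, 0], w).real
    w = w - Tm[0, 0] * V[:, 0]
    for j in range(1, m):
        beta = np.linalg.norm(w)
        Tm[j - 1, j] = Tm[j, j - 1] = beta
        V[:, j] = w / beta
        w = T @ V[:, j]
        Tm[j, j] = np.vdot(V[:, j], w).real
        w = w - Tm[j, j] * V[:, j] - beta * V[:, j - 1]
    # Solve the shifted projected system and lift back to the full space.
    e1 = np.zeros(m, dtype=complex)
    e1[0] = beta0
    y = np.linalg.solve(Tm + 1j * sigma * np.eye(m), e1)
    return V @ y

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6))
T = (M + M.T) / 2                          # small real symmetric T
b = rng.normal(size=6).astype(complex)
x = shifted_lanczos_solve(T, 0.7, b, m=6)  # m = n: full Krylov space
residual = np.linalg.norm((T + 0.7j * np.eye(6)) @ x - b)
print(residual)                            # tiny for the full-space solve
```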

  7. Analysis and application of classification methods of complex carbonate reservoirs

    Science.gov (United States)

    Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei

    2018-06-01

    There are abundant carbonate reservoirs from the Cenozoic to the Mesozoic era in the Middle East. Due to variations in the sedimentary environment and diagenetic processes of carbonate reservoirs, several porosity types coexist in them. Because of the complex lithologies and pore types, as well as the impact of microfractures, the pore structure is very complicated, and it is therefore difficult to calculate reservoir parameters accurately. In order to evaluate carbonate reservoirs accurately, classification methods based on capillary pressure curves and on flow units are analyzed, building on pore-structure evaluation of the reservoirs. Although carbonate reservoirs can be classified based on capillary pressure curves, the relationship between porosity and permeability after classification is not ideal. On the basis of flow units, by contrast, a high-precision functional relationship between porosity and permeability can be established after classification, so carbonate reservoirs can be quantitatively evaluated based on the classification of flow units. In the dolomite reservoirs, the average absolute error of the calculated permeability decreases from 15.13 to 7.44 mD; similarly, the average absolute error for the limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately; characterizing pore structures and classifying reservoir types are therefore very important for the accurate evaluation of complex carbonate reservoirs in the Middle East.
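
    The abstract does not spell out its flow-unit formulation; one widely used scheme (not necessarily the authors' exact one) is the flow zone indicator of Amaefule et al., sketched below with illustrative sample values:

```python
import math

def flow_zone_indicator(k_md, phi):
    """Flow zone indicator (FZI) from permeability in mD and
    fractional porosity, per the classic Amaefule et al. scheme."""
    rqi = 0.0314 * math.sqrt(k_md / phi)   # Reservoir Quality Index (um)
    phi_z = phi / (1.0 - phi)              # normalized (pore/grain) porosity
    return rqi / phi_z

# Two samples with the same porosity but very different permeability
# fall into different flow units (different FZI bands).
print(flow_zone_indicator(100.0, 0.20))
print(flow_zone_indicator(1.0, 0.20))
```

    Samples are then grouped into flow units by clustering their FZI values, and a separate porosity-permeability fit is made per unit.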

  8. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  9. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    Science.gov (United States)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order (in space and time) methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15th order) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  10. A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things

    OpenAIRE

    Wang, Yongheng; Cao, Kening

    2014-01-01

    The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision process model is designed ...

  11. Combination of Complex-Based and Magnitude-Based Multiecho Water-Fat Separation for Accurate Quantification of Fat-Fraction

    Science.gov (United States)

    Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

    Multipoint water–fat separation techniques rely on the different water–fat phase shifts generated at multiple echo times to decompose water and fat. These methods require complex source images and allow unambiguous separation of the water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantifying fat is through "magnitude-based" methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors but cannot estimate fat-fractions greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
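
    The 50% ceiling of magnitude-based methods follows from a swap symmetry: |W + F*e^(i*phi)|^2 = W^2 + F^2 + 2*W*F*cos(phi) is unchanged when W and F are exchanged, so fat fractions FF and 1-FF produce identical magnitude data. A small numerical check (simplified single-peak fat signal model; the field strength and echo times are illustrative, not taken from the paper):

```python
import numpy as np

f_fat = -217.0                               # Hz, approx. fat-water shift at 1.5 T
te = np.array([2.3e-3, 4.6e-3, 6.9e-3])     # illustrative echo times (s)

def echo_magnitudes(water, fat):
    """Magnitude of the combined water+fat signal at each echo time."""
    phase = np.exp(2j * np.pi * f_fat * te)  # fat phase evolution per echo
    return np.abs(water + fat * phase)

# Fat fractions of 30% and 70% are indistinguishable from magnitude
# data alone, which is why magnitude-only fitting is capped at 50%.
print(np.allclose(echo_magnitudes(0.7, 0.3), echo_magnitudes(0.3, 0.7)))  # True
```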

  12. THE DEVELOPMENT OF METHOD FOR MINT AND TURMERIC ESSENTIAL OILS IDENTIFICATION AND QUANTITATIVE ANALYSIS IN COMPLEX DRUG

    Directory of Open Access Journals (Sweden)

    O. G. Smalyuh

    2015-04-01

    Full Text Available The aim of our study was to develop a method for the identification and assay of the essential oils of mint and turmeric in a complex medicinal product in capsule form. Materials and methods. The work used samples of turmeric and mint essential oils and the complex drug, in the form of capsules containing oil of peppermint, oil of turmeric (Curcuma longa), and a mixture of extracts of sandy everlasting (Helichrysum arenarium (L.) Moench), marigold (Calendula officinalis L.), wild carrot (Daucus carota), and turmeric (Curcuma longa). Results and discussion. The complex drug consists of a dry extract of sandy everlasting flowers, a thick extract of wild carrot and of marigold flowers and fruits, a dry extract of Curcuma longa, and the essential oils of peppermint and turmeric. Based on studies of different samples of peppermint oil, and given the need for its identification and quantification in the finished medicinal product, menthol was chosen as the analytical marker. To establish the identity of the complex drug, its main components (ar-, α-, and β-turmerone) and their total content must meet the quantitative indicator "content of turmerones" in the specifications for turmeric oil. Studies of sample preparation conditions led us to choose 96% ethanol to extract the oil components from the sample, with ultrasonication and centrifugation to improve recovery from the capsule mass. Chromatographic characteristics of the substances were obtained on an Agilent HP-Innowax column. It was established that the other active pharmaceutical ingredients of the capsule (placebo) did not affect the quantification of the components of the essential oils of mint and turmeric. Conclusions. 1. Chromatographic conditions for the identification and assay of the essential oils of mint and turmeric in the complex drug, and optimal conditions for sample preparation and analysis by gas chromatography, have been established. 2. Methods for the identification and assay of menthol and ar-, α-, and β-turmerone in the complex drug based on

  13. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., FLAC) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations, resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
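
    The abstract does not give the compressor's exact algorithm; a minimal first-difference-plus-Rice-coding sketch illustrates the general class of low-complexity lossless schemes it belongs to (the Rice parameter and sample values are illustrative):

```python
def rice_encode(values, k):
    """First-difference then Rice coding: zig-zag the signed difference,
    emit the quotient in unary and the remainder in k bits."""
    bits, prev = [], 0
    for v in values:
        d = v - prev
        prev = v
        u = (d << 1) if d >= 0 else (-d << 1) - 1     # zig-zag to unsigned
        q, r = u >> k, u & ((1 << k) - 1)
        bits += [1] * q + [0]                         # unary quotient + stop bit
        bits += [(r >> i) & 1 for i in reversed(range(k))]
    return bits

def rice_decode(bits, k, n):
    """Inverse of rice_encode for n samples."""
    out, prev, pos = [], 0, 0
    for _ in range(n):
        q = 0
        while bits[pos] == 1:                         # read unary quotient
            q += 1
            pos += 1
        pos += 1                                      # skip the stop bit
        r = 0
        for _ in range(k):                            # read k-bit remainder
            r = (r << 1) | bits[pos]
            pos += 1
        u = (q << k) | r
        d = (u >> 1) if u % 2 == 0 else -((u + 1) >> 1)  # undo zig-zag
        prev += d
        out.append(prev)
    return out

samples = [0, 3, 8, 12, 11, 9, 10, 14]
encoded = rice_encode(samples, k=2)
assert rice_decode(encoded, k=2, n=len(samples)) == samples  # exact recovery
```

    Slowly varying signals produce small differences and hence short codes; the encoder needs only subtraction, shifts, and masks, which is what keeps the power consumption low.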

  14. Rising Trend: Complex and sophisticated attack methods

    Indian Academy of Sciences (India)

    Stuxnet, Duqu, Nitro, Luckycat, exploit kits, Flame. ADSL/SoHo router compromise: botnets of compromised ADSL/SoHo routers; user redirection via malicious DNS entries. Web application attacks: SQL injection, RFI, etc. More and more webshells: more utility to hackers; increasing complexity and evasion mechanisms.

  15. New method for rekindling the nonlinear solitary waves in Maxwellian complex space plasma

    Science.gov (United States)

    Das, G. C.; Sarma, Ridip

    2018-04-01

    Our interest is to study nonlinear wave phenomena in complex plasma constituents with Maxwellian electrons and ions. The main reason for this consideration is to exhibit the effects of dust charge fluctuations on acoustic modes, evaluated by the use of a new method. A special (G'/G) method has been developed to yield the coherent features of nonlinear waves, augmented through the derivation of a Korteweg-de Vries equation, and it successfully reveals the different natures of solitons recognized in space plasmas. Evolutions are shown with the input of appropriate typical plasma parameters to support our theoretical observations in space plasmas. All conclusions are in good accordance with actual occurrences and could be of interest for further investigations in experiments and satellite observations in space. In this paper, we present not only a model that exhibits nonlinear solitary wave propagation but also a new mathematical method for its execution.

  16. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-01-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion, and radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to its incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared with the conjugate gradient based reconstruction method. (paper)

  17. Organization structures for dealing with complexity

    NARCIS (Netherlands)

    Meijer, B.R.

    2006-01-01

    "Complexity is in the eye of the beholder" is a well known quote in the research field of complexity. In the world of managers the word complex is often a synonym for difficult, complicated, involving many factors and highly uncertain. A complex business decision requires careful preparation and

  18. HIV surveillance in complex emergencies.

    Science.gov (United States)

    Salama, P; Dondero, T J

    2001-04-01

    Many studies have shown a positive association between both migration and temporary expatriation and HIV risk. This association is likely to be similar or even more pronounced for forced migrants. In general, HIV transmission in host-migrant or host-forced-migrant interactions depends on the maturity of the HIV epidemic in both the host and the migrant population, the relative seroprevalence of HIV in the host and the migrant population, the prevalence of other sexually transmitted infections (STIs) that may facilitate transmission, and the level of sexual interaction between the two communities. Complex emergencies are the major cause of mass population movement today. In complex emergencies, additional factors such as sexual interaction between forced-migrant populations and the military; sexual violence; increasing commercial sex work; psychological trauma; and disruption of preventive and curative health services may increase the risk for HIV transmission. Despite recent success in preventing HIV infection in stable populations in selected developing countries, internally displaced persons and refugees (or forced migrants) have not been systematically included in HIV surveillance systems, nor consequently in prevention activities. Standard surveillance systems that rely on functioning health services may not provide useful data in many complex emergency settings. Secondary sources can provide some information in these settings. Little attempt has been made, however, to develop innovative HIV surveillance systems in countries affected by complex emergencies. Consequently, data on the HIV epidemic in these countries are scarce and HIV prevention programs are either not implemented or interventions are not effectively targeted. Second generation surveillance methods such as cross-sectional, population-based surveys can provide rapid information on HIV, STIs, and sexual behavior. The risks for stigmatization and breaches of confidentiality must be recognized

  19. Medicinal Chemistry Projects Requiring Imaginative Structure-Based Drug Design Methods.

    Science.gov (United States)

    Moitessier, Nicolas; Pottel, Joshua; Therrien, Eric; Englebienne, Pablo; Liu, Zhaomin; Tomberg, Anna; Corbeil, Christopher R

    2016-09-20

    Computational methods for docking small molecules to proteins are prominent in drug discovery. There are hundreds, if not thousands, of documented examples-and several pertinent cases within our research program. Fifteen years ago, our first docking-guided drug design project yielded nanomolar metalloproteinase inhibitors and illustrated the potential of structure-based drug design. Subsequent applications of docking programs to the design of integrin antagonists, BACE-1 inhibitors, and aminoglycosides binding to bacterial RNA demonstrated that available docking programs needed significant improvement. At that time, docking programs primarily considered flexible ligands and rigid proteins. We demonstrated that accounting for protein flexibility, employing displaceable water molecules, and using ligand-based pharmacophores improved the docking accuracy of existing methods-enabling the design of bioactive molecules. The success prompted the development of our own program, Fitted, implementing all of these aspects. The primary motivation has always been to respond to the needs of drug design studies; the majority of the concepts behind the evolution of Fitted are rooted in medicinal chemistry projects and collaborations. Several examples follow: (1) Searching for HDAC inhibitors led us to develop methods considering drug-zinc coordination and its effect on the pKa of surrounding residues. (2) Targeting covalent prolyl oligopeptidase (POP) inhibitors prompted an update to Fitted to identify reactive groups and form bonds with a given residue (e.g., a catalytic residue) when the geometry allows it. Fitted-the first fully automated covalent docking program-was successfully applied to the discovery of four new classes of covalent POP inhibitors. As a result, efficient stereoselective syntheses of a few screening hits were prioritized rather than synthesizing large chemical libraries-yielding nanomolar inhibitors. 
(3) In order to study the metabolism of POP inhibitors by

  20. Adiabatic passage for a lossy two-level quantum system by a complex time method

    International Nuclear Information System (INIS)

    Dridi, G; Guérin, S

    2012-01-01

    Using a complex time method with the formalism of Stokes lines, we establish a generalization of the Davis–Dykhne–Pechukas formula which gives in the adiabatic limit the transition probability of a lossy two-state system driven by an external frequency-chirped pulse-shaped field. The conditions that allow this generalization are derived. We illustrate the result with the dissipative Allen–Eberly and Rosen–Zener models. (paper)
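
    For context, the loss-free formula that the paper generalizes (quoted here in a standard textbook form from the general adiabatic literature, not from the paper itself) gives the nonadiabatic transition probability as

$$ P \simeq \exp\!\left( -2\,\operatorname{Im} \int_{t_0}^{t_c} \Delta E(t)\,\mathrm{d}t \right), $$

    where $\Delta E(t)$ is the (quasi)energy splitting of the two dressed states continued to complex time, $t_0$ lies on the real axis, and $t_c$ is the complex zero of $\Delta E$ nearest the real axis. The paper's contribution is the extension of this expression, via the Stokes-line formalism, to lossy (non-Hermitian) two-level dynamics.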

  1. Empirical Requirements Analysis for Mars Surface Operations Using the Flashline Mars Arctic Research Station

    Science.gov (United States)

    Clancey, William J.; Lee, Pascal; Sierhuis, Maarten; Norvig, Peter (Technical Monitor)

    2001-01-01

    Living and working on Mars will require model-based computer systems for maintaining and controlling complex life support, communication, transportation, and power systems. This technology must work properly on the first three-year mission, augmenting human autonomy without adding yet more complexity to be diagnosed and repaired. One design method is to work with scientists in analog (Mars-like) settings to understand how they prefer to work, what constraints will be imposed by the Mars environment, and how to ameliorate difficulties. We describe how we are using empirical requirements analysis to prototype model-based tools at a research station in the High Canadian Arctic.

  2. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

  3. Advances in complexity of beam halo-chaos and its control methods for beam transport networks

    International Nuclear Information System (INIS)

    Fang Jinqing

    2004-11-01

    The complexity theory of beam halo-chaos in beam transport networks and its control methods, a new subject in the high-tech field, are discussed. It is pointed out that in recent years there has been growing interest in high-power proton beam linear accelerators due to their attractive features for possible breakthrough applications in national defense and industry. In particular, high-current accelerator-driven clean nuclear power systems for various energy applications have been one of the central issues in current research, because they promise a safer, cleaner, and cheaper nuclear energy resource. However, halo-chaos in high-current beam transport networks has become a key concern because it can generate excessive radioactivity and therefore significantly limits applications. It is very important to study the complexity properties of beam halo-chaos, to understand the basic physical mechanisms of halo-chaos formation, and to develop effective control methods for its suppression. These are very challenging subjects for current research. The main research advances on these subjects, including experimental investigation and theoretical research, and especially some very efficient control methods developed through many years of effort by the authors, are reviewed and summarized. Finally, some research outlooks are given. (author)

  4. Isolation and mass spectrometry of transcription factor complexes.

    Science.gov (United States)

    Sebastiaan Winkler, G; Lacomis, Lynne; Philip, John; Erdjument-Bromage, Hediye; Svejstrup, Jesper Q; Tempst, Paul

    2002-03-01

    Protocols are described that enable the isolation of novel proteins associated with a known protein and the subsequent identification of these proteins by mass spectrometry. We review the basics of nanosample handling and of two complementary approaches to mass analysis, and provide protocols for the entire process. The protein isolation procedure is rapid and based on two high-affinity chromatography steps. The method does not require previous knowledge of complex composition or activity and permits subsequent biochemical characterization of the isolated factor. As an example, we provide the procedures used to isolate and analyze yeast Elongator, a histone acetyltransferase complex important for transcript elongation, which led to the identification of three novel subunits.

  5. Formal verification of complex properties on PLC programs

    CERN Document Server

    Darvas, D; Voros, A; Bartha, T; Blanco Vinuela, E; Gonzalez Suarez, V M

    2014-01-01

    Formal verification has become a recommended practice in the safety-critical application areas. However, due to the complexity of practical control and safety systems, the state space explosion often prevents the use of formal analysis. In this paper we extend our former verification methodology with effective property preserving reduction techniques. For this purpose we developed general rule-based reductions and a customized version of the Cone of Influence (COI) reduction. Using these methods, the verification of complex requirements formalised with temporal logics (e.g. CTL, LTL) can be orders of magnitude faster. We use the NuSMV model checker on a real-life PLC program from CERN to demonstrate the performance of our reduction techniques.
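
    Cone of Influence reduction keeps only the variables that can transitively affect the property being checked and drops the rest of the model. A toy sketch of the idea (a real model checker such as NuSMV operates on the transition relation, not on a hand-written dependency dict; the variable names are invented):

```python
def cone_of_influence(deps, property_vars):
    """deps maps each variable to the set of variables its next-state
    value depends on. Returns all variables that can influence the
    property variables, transitively."""
    keep, frontier = set(), set(property_vars)
    while frontier:
        v = frontier.pop()
        if v in keep:
            continue
        keep.add(v)
        frontier |= deps.get(v, set())   # follow dependencies backwards
    return keep

deps = {
    "alarm": {"sensor", "mode"},
    "mode": {"button"},
    "motor": {"clock"},   # irrelevant to the property below: dropped
}
print(sorted(cone_of_influence(deps, {"alarm"})))
# ['alarm', 'button', 'mode', 'sensor']
```

    Variables outside the cone ("motor", "clock" here) can be removed before model checking without changing the verdict for the property, which is what shrinks the state space.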

  6. Monitoring Freeze Thaw Transitions in Arctic Soils using Complex Resistivity Method

    Science.gov (United States)

    Wu, Y.; Hubbard, S. S.; Ulrich, C.; Dafflon, B.; Wullschleger, S. D.

    2012-12-01

    The Arctic region, which is a sensitive system that has emerged as a focal point for climate change studies, is characterized by a large amount of stored carbon and a rapidly changing landscape. Seasonal freeze-thaw transitions in the Arctic alter subsurface biogeochemical processes that control greenhouse gas fluxes from the subsurface. Our ability to monitor freeze thaw cycles and associated biogeochemical transformations is critical to the development of process rich ecosystem models, which are in turn important for gaining a predictive understanding of Arctic terrestrial system evolution and feedbacks with climate. In this study, we conducted both laboratory and field investigations to explore the use of the complex resistivity method to monitor freeze thaw transitions of arctic soil in Barrow, AK. In the lab studies, freeze thaw transitions were induced on soil samples having different average carbon content through exposing the arctic soil to temperature controlled environments at +4 oC and -20 oC. Complex resistivity and temperature measurements were collected using electrical and temperature sensors installed along the soil columns. During the laboratory experiments, resistivity gradually changed over two orders of magnitude as the temperature was increased or decreased between -20 oC and 0 oC. Electrical phase responses at 1 Hz showed a dramatic and immediate response to the onset of freeze and thaw. Unlike the resistivity response, the phase response was found to be exclusively related to unfrozen water in the soil matrix, suggesting that this geophysical attribute can be used as a proxy for the monitoring of the onset and progression of the freeze-thaw transitions. Spectral electrical responses contained additional information about the controls of soil grain size distribution on the freeze thaw dynamics. 
Based on the demonstrated sensitivity of complex resistivity signals to the freeze thaw transitions, field complex resistivity data were collected over
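The 1 Hz phase response is described above as an immediate proxy for freeze-thaw onset. As a hedged illustration (not the authors' processing chain), a simple change-point check over a phase time series might look like the sketch below; the units (mrad), sampling and jump threshold are assumed for the example:

```python
# Minimal sketch: flag freeze/thaw onsets in a 1 Hz electrical-phase time
# series by detecting abrupt jumps between consecutive samples. The threshold
# of 5 mrad is illustrative, not a value from the study.

def detect_onsets(phase_mrad, jump_threshold=5.0):
    """Return indices where the phase changes abruptly between samples,
    a simple proxy for the onset of freezing or thawing."""
    onsets = []
    for i in range(1, len(phase_mrad)):
        if abs(phase_mrad[i] - phase_mrad[i - 1]) >= jump_threshold:
            onsets.append(i)
    return onsets

# Synthetic example: stable phase, one sharp drop at the freeze onset.
phase = [-12.0] * 10 + [-30.0] * 10
print(detect_onsets(phase))  # the single jump sits at index 10
```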

  7. Petascale Many Body Methods for Complex Correlated Systems

    Science.gov (United States)

    Pruschke, Thomas

    2012-02-01

Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and of many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handedly is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in the efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.

  8. 6th international symposium on finite volumes for complex applications

    CERN Document Server

    Halama, Jan; Herbin, Raphaèle; Hubert, Florence; Fort, Jaroslav; FVCA 6; Finite Volumes for Complex Applications VI : Problems and perspectives

    2011-01-01

Finite volume methods are used for various applications in fluid dynamics, magnetohydrodynamics, structural analysis and nuclear physics. A closer look reveals many interesting phenomena and mathematical or numerical difficulties, such as true error analysis and adaptivity, modelling of multi-phase phenomena or fitting problems, and stiff terms in convection/diffusion equations and sources. Overcoming existing problems and finding solution methods for future applications require sustained effort and continual new development. The goal of The International Symposium on Finite Volumes for Complex Applications
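As a minimal illustration of the finite volume idea the symposium addresses, the sketch below advances cell averages of the 1D linear advection equation with a first-order upwind flux on a periodic grid; the grid size, time step and initial data are illustrative:

```python
# First-order upwind finite-volume scheme for u_t + a*u_x = 0 (a > 0) on a
# periodic grid: cell averages are updated from fluxes across cell faces.

def advect_upwind(u, a, dx, dt, steps):
    """Advance cell averages u by `steps` upwind finite-volume updates."""
    n = len(u)
    for _ in range(steps):
        flux = [a * u[i - 1] for i in range(n)]  # flux through left face of cell i
        u = [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
    return u

u0 = [1.0 if 2 <= i < 5 else 0.0 for i in range(10)]
u1 = advect_upwind(u0, a=1.0, dx=1.0, dt=1.0, steps=1)  # CFL = 1: exact shift
print(u1)
```

With dt = dx/a (CFL number 1), the upwind update reduces to an exact one-cell shift, which makes the scheme easy to sanity-check; the update is also conservative, so the sum of the cell averages is preserved.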

  9. A method for evaluating the problem complex of choosing the ventilation system for a new building

    DEFF Research Database (Denmark)

    Hviid, Christian Anker; Svendsen, Svend

    2007-01-01

The application of a ventilation system in a new building is a multidimensional complex problem that involves quantifiable and non-quantifiable data like energy consumption, indoor environment, building integration and architectural expression. This paper presents a structured method for evaluat...

  10. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' <= t). The limitations on the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)
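The paper's integral-equation formulation is not reproduced here. As a hedged sketch of the quantities involved, the snippet below uses the standard steady-state approximation in which a minimal cut set is down when all of its (assumed independent) components are down, and the cut-set unavailabilities are combined into a system-level figure; component names and numbers are invented:

```python
# A minimal cut set is unavailable when every component in it is down; the
# system unavailability is approximated as 1 - prod(1 - Q_cs) over cut sets.

def cut_set_unavailability(q, cut_set):
    """Unavailability of one minimal cut set, assuming independent components
    with steady-state component unavailabilities q[name]."""
    p = 1.0
    for comp in cut_set:
        p *= q[comp]
    return p

def system_unavailability(q, cut_sets):
    """Combine cut-set unavailabilities (rare-event approximation)."""
    avail = 1.0
    for cs in cut_sets:
        avail *= 1.0 - cut_set_unavailability(q, cs)
    return 1.0 - avail

q = {"pump_a": 0.01, "pump_b": 0.02, "valve": 0.005}
cut_sets = [{"pump_a", "pump_b"}, {"valve"}]
print(system_unavailability(q, cut_sets))  # dominated by the single-valve cut set
```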

  11. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products...... and PVM methods, in a presented Product Requirement Development model some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix....... Updated design requirements have then to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended through linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM...

  12. Characterization of Nuclear Materials Using Complex of Non-Destructive and Mass-Spectroscopy Methods of Measurements

    International Nuclear Information System (INIS)

    Gorbunova, A.; Kramchaninov, A.

    2015-01-01

The Information and Analytical Centre for nuclear materials investigations was established in the Russian Federation on February 2, 2009 by the ROSATOM State Atomic Energy Corporation (order #80). Its purpose is to prevent unauthorized access to nuclear materials and to exclude their illicit traffic. The Information and Analytical Centre includes an analytical laboratory that determines the composition and properties of nuclear materials of unknown origin for their identification. According to its Regulation, the Centre deals with: · identification of nuclear materials of unknown origin to provide information about their composition and properties; · arbitration analyses of nuclear materials; · comprehensive research of nuclear and radioactive materials for developing materials-characterization techniques; · interlaboratory measurements; · measurements for control and accounting; · confirmatory measurements. A complex of non-destructive and mass-spectroscopy techniques was developed for the measurements. The complex consists of: · gamma-ray techniques based on the MGAU, MGA and FRAM codes for uranium and plutonium isotopic composition; · a gravimetric technique, supplemented by gamma spectroscopy, for uranium content; · a calorimetric technique for plutonium mass; · a neutron multiplicity technique for plutonium mass; · a mass-spectroscopy technique for uranium isotopic composition; · a mass-spectroscopy technique for metallic impurities. The complex satisfies the state regulation requirements for ensuring the uniformity of measurements, including the Russian Federation Federal Law on Ensuring the Uniformity of Measurements #102-FZ, Interstate Standard GOST R ISO/IEC 17025-2006, National Standards of the Russian Federation GOST R 8.563-2009 and GOST R 8.703-2010, and Federal Regulations NRB-99/2009 and OSPORB 99/2010. The complex is provided with reference materials, equipment and certified techniques. The complex is included in accredited

  13. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond those obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short-term outcomes, moderating and mediating factors, and long-term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified as possibly resulting from these interventions were changed physician or patient knowledge, beliefs or attitudes, as well as changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long-term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details the evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  14. A method for determining customer requirement weights based on TFMF and TLR

    Science.gov (United States)

    Ai, Qingsong; Shu, Ting; Liu, Quan; Zhou, Zude; Xiao, Zheng

    2013-11-01

'Customer requirements' (CRs) management plays an important role in enterprise systems (ESs) by processing customer-focused information. Quality function deployment (QFD) is one of the main CR analysis methods. Because CR weights are crucial inputs for QFD, we developed a method for determining CR weights based on the trapezoidal fuzzy membership function (TFMF) and 2-tuple linguistic representation (TLR). To improve the accuracy of CR weights, we propose to apply the TFMF to describe CR weights so that they can be appropriately represented. Because fuzzy logic is not capable of aggregating information without loss, the TLR model is adopted as well. We first describe the basic concepts of TFMF and TLR and then introduce an approach to compute CR weights. Finally, an example is provided to explain and verify the proposed method.
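A trapezoidal fuzzy membership function of the kind the abstract refers to can be sketched directly; the parameters (a, b, c, d) and the example weight below are illustrative, not values from the paper:

```python
# Trapezoidal fuzzy membership function (TFMF): rises linearly on [a, b],
# is 1 on the plateau [b, c], and falls linearly on [c, d].

def trapezoidal_mf(x, a, b, c, d):
    """Membership degree of x for the trapezoid (a, b, c, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)      # rising edge
    return (d - x) / (d - c)          # falling edge

# A linguistic weight such as "important" might be modelled as (0.5, 0.65, 0.8, 0.9):
print(trapezoidal_mf(0.7, 0.5, 0.65, 0.8, 0.9))    # on the plateau: 1.0
print(trapezoidal_mf(0.575, 0.5, 0.65, 0.8, 0.9))  # halfway up the rising edge: 0.5
```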

  15. Salvo: Seismic imaging software for complex geologies

    Energy Technology Data Exchange (ETDEWEB)

    OBER,CURTIS C.; GJERTSEN,ROB; WOMBLE,DAVID E.

    2000-03-01

This report describes Salvo, three-dimensional seismic-imaging software for complex geologies. Regions of complex geology, such as overthrusts and salt structures, can cause difficulties for many seismic-imaging algorithms used in production today. The paraxial wave equation and finite-difference methods used within Salvo can produce high-quality seismic images in these difficult regions. However, this approach comes with higher computational costs, which have been too expensive for standard production. Salvo uses improved numerical algorithms and methods, along with parallel computing, to produce high-quality images and to reduce the computational and data input/output (I/O) costs. This report documents the numerical algorithms implemented for the paraxial wave equation, including absorbing boundary conditions, phase corrections, imaging conditions, phase encoding, and reduced-source migration. It also describes I/O algorithms for large seismic data sets and images, and the parallelization methods used to obtain high efficiency for both the computations and the I/O of seismic data sets. Finally, this report describes the steps required to compile, port and optimize the Salvo software, and describes the validation data sets used to help verify a working copy of Salvo.

  16. Assessment of the potential human health risks from exposure to complex substances in accordance with REACH requirements. "White spirit" as a case study.

    Science.gov (United States)

    McKee, Richard H; Tibaldi, Rosalie; Adenuga, Moyinoluwa D; Carrillo, Juan-Carlos; Margary, Alison

    2018-02-01

    The European chemical control regulation (REACH) requires that data on physical/chemical, toxicological and environmental hazards be compiled. Additionally, REACH requires formal assessments to ensure that substances can be safely used for their intended purposes. For health hazard assessments, reference values (Derived No Effect levels, DNELs) are calculated from toxicology data and compared to estimated exposure levels. If the ratio of the predicted exposure level to the DNEL, i.e. the Risk Characterization Ratio (RCR), is less than 1, the risk is considered controlled; otherwise, additional Risk Management Measures (RMM) must be applied. These requirements pose particular challenges for complex substances. Herein, "white spirit", a complex hydrocarbon solvent, is used as an example to illustrate how these procedures were applied. Hydrocarbon solvents were divided into categories of similar substances. Representative substances were identified for DNEL determinations. Adjustment factors were applied to the no effect levels to calculate the DNELs. Exposure assessments utilized a standardized set of generic exposure scenarios (GES) which incorporated exposure predictions for solvent handling activities. Computer-based tools were developed to automate RCR calculations and identify appropriate RMMs, allowing consistent communications to users via safety data sheets. Copyright © 2017 ExxonMobil Biomedical Sciences Inc. Published by Elsevier Inc. All rights reserved.
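The risk characterization arithmetic described above is simple enough to sketch directly: the RCR is the predicted exposure divided by the DNEL, and a value below 1 means the risk is considered controlled. The numbers below are illustrative, not REACH values:

```python
# RCR = predicted exposure level / DNEL; RCR < 1 -> risk controlled,
# otherwise additional Risk Management Measures (RMM) must be applied.

def risk_characterization_ratio(exposure, dnel):
    """Ratio of the predicted exposure level to the Derived No Effect Level."""
    return exposure / dnel

def risk_controlled(exposure, dnel):
    """True when RCR < 1; otherwise additional RMMs are required."""
    return risk_characterization_ratio(exposure, dnel) < 1.0

print(risk_characterization_ratio(25.0, 100.0))  # 0.25 -> controlled
print(risk_controlled(150.0, 100.0))             # False -> RMMs required
```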

  17. Critical requirements of the SSTR method

    International Nuclear Information System (INIS)

    Gold, R.

    1975-08-01

Discrepancies have been reported between absolute fission rate measurements observed with Solid State Track Recorders (SSTR) and with fission chambers, which lie well outside experimental error. As a result of these comparisons, the reliability of the SSTR method has been seriously questioned, and the fission chamber method has been advanced for sole use in absolute fission rate determinations. In view of the absolute accuracy already reported and well documented for the SSTR method, this conclusion is both surprising and unfortunate. Two independent methods are highly desirable. Moreover, the two methods more than complement one another, since certain in-core experiments may be amenable to one but not both techniques. Consequently, one cannot abandon the SSTR method without sacrificing crucial advantages. A critical reappraisal of certain aspects of the SSTR method is offered in the hope that the source of the current controversy can be uncovered and a long-term beneficial agreement between the two methods can thereby be established. (WHK)

  18. Measuring Complexity of SAP Systems

    Directory of Open Access Journals (Sweden)

    Ilja Holub

    2016-10-01

Full Text Available The paper discusses the reasons for the rise of complexity in the ERP system SAP R/3. It proposes a method for measuring the complexity of SAP. Based on this method, a computer program in ABAP for measuring the complexity of a particular SAP implementation is proposed as a tool for keeping ERP complexity under control. The main principle of the measurement method is counting the number of items or relations in the system. The proposed computer program is based on counting the records in the organization tables of SAP.
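The counting principle can be sketched outside ABAP as well. The snippet below is a hedged illustration in Python: the table names, records and relations are invented stand-ins for the SAP organization tables the proposed program counts:

```python
# Complexity as the total number of configured items plus relations,
# mirroring the "count records in organization tables" principle.

org_tables = {
    "company_codes": ["1000", "2000"],
    "plants": ["P100", "P200", "P300"],
    "sales_orgs": ["S001"],
}
relations = [("1000", "P100"), ("1000", "P200"), ("2000", "P300"), ("1000", "S001")]

def complexity_score(tables, relations):
    """Sum the items in every table, then add the number of relations."""
    items = sum(len(rows) for rows in tables.values())
    return items + len(relations)

print(complexity_score(org_tables, relations))  # 6 items + 4 relations = 10
```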

  19. METHOD TO DEVELOP THE DOUBLE-CURVED SURFACE OF THE ROOF

    Directory of Open Access Journals (Sweden)

    JURCO Ancuta Nadia

    2017-05-01

Full Text Available This work presents two methods for determining the development of a double-curved surface. The aim of this paper is a comparative study of methods for determining the sheet metal requirements of a complex roof cover shape. The first part of the paper presents the basic sketch and information about the roof shape, along with some well-known buildings that have a complex roof shape. The second part of the paper shows two methods for determining the development of a spherical roof. The graphical method is the first method used for developing the spherical shape; it applies the poly-cylindrical approach to develop the double-curved surface. The second method is accomplished by using dedicated CAD software.
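As a hedged sketch of the poly-cylindrical idea (not the paper's graphical construction): a hemispherical dome of radius R can be split into n meridional gores, each flattened as an approximately cylindrical strip. At arc length s = R·phi from the apex, the developed strip width is the parallel circumference 2·pi·R·sin(phi) divided by n. All dimensions below are illustrative:

```python
import math

def gore_profile(radius, n_gores, n_points=5):
    """Return (arc_length, width) pairs describing one flattened gore
    from the apex (phi = 0) to the springing line (phi = pi/2)."""
    profile = []
    for k in range(n_points):
        phi = (math.pi / 2) * k / (n_points - 1)
        arc = radius * phi                                   # length along the meridian
        width = 2 * math.pi * radius * math.sin(phi) / n_gores
        profile.append((arc, width))
    return profile

# A 5 m dome cut into 8 gores: widths grow from 0 at the apex to the base.
for arc, width in gore_profile(radius=5.0, n_gores=8):
    print(f"s = {arc:.3f} m, width = {width:.3f} m")
```

Summing the strip areas over all gores gives a first estimate of the sheet metal requirement, which is the quantity the comparative study is after.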

  20. A low complexity method for the optimization of network path length in spatially embedded networks

    International Nuclear Information System (INIS)

    Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li; Ming, Yong; Chen, Sheng-Yong; Wang, Wan-Liang

    2014-01-01

The average path length of a network is an important index reflecting the network transmission efficiency. In this paper, we propose a new method of decreasing the average path length by adding edges. A new indicator is presented, incorporating traffic flow demand, to assess the decrease in the average path length when a new edge is added during the optimization process. With the help of the indicator, edges are selected and added into the network one by one. The new method has a relatively small computational time complexity in comparison with some traditional methods. In numerical simulations, the new method is applied to some synthetic spatially embedded networks. The results show that the method performs competitively in decreasing the average path length. Then, as an example of an application of this new method, it is applied to the road network of Hangzhou, China. (paper)
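The paper's low-complexity indicator is not reproduced here. As a baseline for comparison, the sketch below does the expensive thing such an indicator avoids: it evaluates every candidate edge by brute-force BFS and greedily picks the one that most reduces the average path length. The graph is illustrative:

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path length over ordered node pairs (assumes connectivity)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # BFS from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def best_edge_to_add(adj):
    """Try every absent edge and return the one minimizing the new average."""
    nodes = list(adj)
    best = None, average_path_length(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue
            adj[u].add(v); adj[v].add(u)  # tentatively add the edge
            apl = average_path_length(adj)
            adj[u].remove(v); adj[v].remove(u)
            if apl < best[1]:
                best = (u, v), apl
    return best

# A 6-node path graph 0-1-2-3-4-5: search for the single best shortcut.
path = {i: set() for i in range(6)}
for i in range(5):
    path[i].add(i + 1); path[i + 1].add(i)
print(best_edge_to_add(path))
```

This brute force costs O(n^2) BFS sweeps per added edge, which is exactly the expense a traffic-aware indicator like the paper's is designed to sidestep.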

  1. Suicide and the 'Poison Complex': Toxic Relationalities, Child Development, and the Sri Lankan Self-Harm Epidemic.

    Science.gov (United States)

    Widger, Tom

    2015-01-01

Suicide prevention efforts in Asia have increasingly turned to 'quick win' means restriction, while more complicated cognitive restriction and psychosocial programs are limited. This article argues that the development of cognitive restriction programs requires greater consideration of suicide methods as social practices, and of how suicide cognitive schemata form. To illustrate this, the article contributes an ethnographically grounded study of how self-poisoning becomes cognitively available in Sri Lanka. I argue that the overwhelming preference for poison as a method of self-harm in the country is not simply reflective of its widespread availability, but rather of how cognitive schemata of poison (a 'poison complex') develop from early childhood and are a precondition for suicide schemata. Limiting cognitive availability thus requires an entirely novel approach to suicide prevention that draws back from its immediate object (methods and causes of self-harm) to engage the wider poison complex of which suicide is just one aspect.

  2. Theoretical study of the electronic structure of f-element complexes by quantum chemical methods; Analyse de la structure electronique des complexes contenant des elements F par des methodes de la chimie quantique

    Energy Technology Data Exchange (ETDEWEB)

    Vetere, V

    2002-09-15

This thesis is related to comparative studies of the chemical properties of molecular complexes containing lanthanide or actinide trivalent cations, in the context of nuclear waste disposal. More precisely, our aim was a quantum chemical analysis of the metal-ligand bonding in such species. Various theoretical approaches were compared for the inclusion of correlation (density functional theory, multiconfigurational methods) and of relativistic effects (relativistic scalar and 2-component Hamiltonians, relativistic pseudopotentials). The performance of these methods was checked by comparing computed structural properties to published experimental data on small model systems: lanthanide and actinide tri-halides and X{sub 3}M-L species (X=F, Cl; M=La, Nd, U; L = NH{sub 3}, acetonitrile, CO). We have thus shown the good performance of density functionals combined with a quasi-relativistic method, as well as of gradient-corrected functionals associated with relativistic pseudopotentials. In contrast, functionals including some part of exact exchange are less reliable in reproducing experimental trends, and we have given a possible explanation for this result. Then, a detailed analysis of the bonding has allowed us to interpret the discrepancies observed in the structural properties of uranium and lanthanide complexes, based on a covalent contribution to the bonding in the case of uranium(III) which does not exist in the lanthanide(III) homologues. Finally, we have examined more sizeable systems, closer to experimental species, to analyse the influence of the coordination number, of the counter-ions and of the oxidation state of uranium on the metal-ligand bonding. (author)

  3. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    Science.gov (United States)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

3D voxelization is the foundation of geological property modeling, and is also an effective approach to realize the 3D visualization of heterogeneous attributes in geological structures. The corner-point grid is a representative data model among voxel models, and is a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features must be fully considered so that the generated voxels preserve the original morphology and, on that basis, can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To address the shortcomings of existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from a 3D geological structure model to a fine voxel model according to the rule of isoclines in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminae inside a fold accord with the result of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as for quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, and a contrastive analysis between the examples and Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when voxelizing 3D geological structure models with folds based on corner-point grids.
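A much-simplified sketch of the corner-point idea (not the authors' implementation): a cell is bounded by four pillars, taken as vertical here for brevity, and each of its eight corners is a depth picked on one of those pillars. Letting top and bottom depths coincide on a pillar is what allows such grids to represent pinch-outs and follow folded layers. All coordinates below are invented:

```python
# One corner-point cell from four vertical pillars: each pillar contributes
# a top corner and a bottom corner, giving the cell's eight (x, y, z) corners.

def cell_corners(pillars, top_depths, bottom_depths):
    """Eight (x, y, z) corners of one cell, from four (x, y) pillar locations
    plus a top and a bottom depth per pillar."""
    top = [(x, y, z) for (x, y), z in zip(pillars, top_depths)]
    bottom = [(x, y, z) for (x, y), z in zip(pillars, bottom_depths)]
    return top + bottom

pillars = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
corners = cell_corners(pillars, [10.0, 12.0, 10.5, 12.5], [11.0, 13.0, 11.5, 13.5])
print(len(corners))  # 8 corners, the last four on the cell base
```

Because each pillar carries its own pair of depths, neighbouring cells can share pillars while their faces tilt independently, which is how the grid tracks dipping and folded bedding.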

  4. What qualitative research can contribute to a randomized controlled trial of a complex community intervention.

    Science.gov (United States)

    Nelson, Geoffrey; Macnaughton, Eric; Goering, Paula

    2015-11-01

    Using the case of a large-scale, multi-site Canadian Housing First research demonstration project for homeless people with mental illness, At Home/Chez Soi, we illustrate the value of qualitative methods in a randomized controlled trial (RCT) of a complex community intervention. We argue that quantitative RCT research can neither capture the complexity nor tell the full story of a complex community intervention. We conceptualize complex community interventions as having multiple phases and dimensions that require both RCT and qualitative research components. Rather than assume that qualitative research and RCTs are incommensurate, a more pragmatic mixed methods approach was used, which included using both qualitative and quantitative methods to understand program implementation and outcomes. At the same time, qualitative research was used to examine aspects of the intervention that could not be understood through the RCT, such as its conception, planning, sustainability, and policy impacts. Through this example, we show how qualitative research can tell a more complete story about complex community interventions. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Approaching complexity by stochastic methods: From biological systems to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)

    2011-09-15

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
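Point (i) above, reconstructing a Langevin equation from measured data, can be sketched on synthetic data: simulate an Ornstein-Uhlenbeck process and recover its drift as the first Kramers-Moyal (conditional-moment) coefficient by binning. Parameters, bin edges and sample size are illustrative:

```python
import math
import random

# Simulate dx = -theta*x dt + sigma dW, then estimate the drift
# D1(x) = <x(t+dt) - x(t) | x(t) = x> / dt from the time series.

random.seed(1)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000

x, xs = 0.0, []
for _ in range(n):
    xs.append(x)
    x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)

def estimate_drift(xs, dt, edges):
    """Binned conditional mean increment per unit time (first KM coefficient)."""
    sums = [0.0] * (len(edges) - 1)
    counts = [0] * (len(edges) - 1)
    for a, b in zip(xs, xs[1:]):
        for k in range(len(edges) - 1):
            if edges[k] <= a < edges[k + 1]:
                sums[k] += (b - a) / dt
                counts[k] += 1
                break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

drift = estimate_drift(xs, dt, edges=[-0.6, -0.2, 0.2, 0.6])
print(drift)  # roughly [+0.4, ~0, -0.4], i.e. D1(x) ~ -theta * x
```

The second Kramers-Moyal coefficient, the conditional mean squared increment, recovers the diffusion term in the same way; the estimate is only valid above the Markov-Einstein scale the review discusses.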

  6. Approaching complexity by stochastic methods: From biological systems to turbulence

    International Nuclear Information System (INIS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-01-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.

  7. An efficient approach to BAC based assembly of complex genomes.

    Science.gov (United States)

    Visendi, Paul; Berkman, Paul J; Hayashi, Satomi; Golicz, Agnieszka A; Bayer, Philipp E; Ruperao, Pradeep; Hurgobin, Bhavna; Montenegro, Juan; Chan, Chon-Kit Kenneth; Staňková, Helena; Batley, Jacqueline; Šimková, Hana; Doležel, Jaroslav; Edwards, David

    2016-01-01

There has been an exponential growth in the number of genome sequencing projects since the introduction of next generation DNA sequencing technologies. Genome projects have increasingly involved assembly of whole genome data, which produces inferior assemblies compared to traditional Sanger sequencing of genomic fragments cloned into bacterial artificial chromosomes (BACs). While whole genome shotgun sequencing using next generation sequencing (NGS) is relatively fast and inexpensive, this method is extremely challenging for highly complex genomes, where polyploidy or high repeat content confounds accurate assembly, or where a highly accurate 'gold' reference is required. Several attempts have been made to improve genome sequencing approaches by incorporating NGS methods, with variable success. We present the application of a novel BAC sequencing approach which combines indexed pools of BACs, Illumina paired read sequencing, a sequence assembler specifically designed for complex BAC assembly, and a custom bioinformatics pipeline. We demonstrate this method by sequencing and assembling BAC cloned fragments from the bread wheat and sugarcane genomes. We demonstrate that our assembly approach is accurate, robust, cost effective and scalable, with applications for complete genome sequencing in large and complex genomes.

  8. Outer synchronization between two different fractional-order general complex dynamical networks

    International Nuclear Information System (INIS)

    Xiang-Jun, Wu; Hong-Tao, Lu

    2010-01-01

Outer synchronization between two different fractional-order general complex dynamical networks is investigated in this paper. Based on the stability theory of the fractional-order system, sufficient criteria for outer synchronization are derived analytically by applying the nonlinear control and bidirectional coupling methods. The proposed synchronization method is applicable to almost all kinds of coupled fractional-order general complex dynamical networks. Neither a symmetric nor an irreducible coupling configuration matrix is required. In addition, no constraint is imposed on the inner-coupling matrix. Numerical examples are also provided to demonstrate the validity of the presented synchronization scheme. Numerical evidence shows that both the feedback strength k and the fractional order α can be chosen appropriately to adjust the synchronization effect effectively. (general)

  9. The Detection Method of Escherichia coli in Water Resources: A Review

    Science.gov (United States)

    Nurliyana, M. R.; Sahdan, M. Z.; Wibowo, K. M.; Muslihati, A.; Saim, H.; Ahmad, S. A.; Sari, Y.; Mansor, Z.

    2018-04-01

This article reviews several approaches for the detection of Escherichia coli (E. coli) bacteria, from conventional methods, through emerging methods, to biosensor-based techniques. Detection and enumeration of E. coli bacteria usually require a long time to obtain a result, since a laboratory-based approach is normally used in the assessment: culturing the samples takes 24 to 72 hours after sampling before results are available. Although faster techniques for detecting E. coli in water, such as the Polymerase Chain Reaction (PCR) and the Enzyme-Linked Immunosorbent Assay (ELISA), have been developed, these still require transporting the samples from the water resource to the laboratory, and suffer from high cost, complicated equipment, complex procedures, and the need for skilled specialists, all of which limit their widespread use in water quality monitoring. Recently, the development of biosensor devices that are easy to operate, portable, highly sensitive and selective has become indispensable for detecting extremely low concentrations of pathogenic E. coli bacteria in water samples.

  10. High performance parallel computing of flows in complex geometries: II. Applications

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F; Poinsot, T

    2009-01-01

    Present regulations in terms of pollutant emissions and noise, as well as economic constraints, require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than isolated components. However, whatever the design stage considered, these aspects are still not well captured or understood by numerical approaches. The main challenge lies in the computational requirements that such complex systems impose if they are to be simulated on supercomputers. This paper shows how these challenges can be addressed by using parallel computing platforms for distinct elements of the more complex systems encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed on industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some of the difficulties with grid generation and data analysis encountered in these complex industrial applications are also presented.
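
    The parallel performance indicators discussed above are typically reported as strong-scaling speedup and efficiency; a minimal sketch (function name and data layout are illustrative assumptions):

```python
def strong_scaling_metrics(t_serial, timings):
    """Speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p per core count p."""
    return {p: (t_serial / t, t_serial / (t * p)) for p, t in timings.items()}

# toy wall-clock times in seconds for 1 and 4 cores
metrics = strong_scaling_metrics(100.0, {1: 100.0, 4: 30.0})
```

    An efficiency well below 1 at high core counts is the usual sign that communication or load imbalance dominates, which is why fair criteria are hard to define for complex industrial cases.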

  11. Complexation of trivalent actinide ions (Am3+, Cm3+) with humic acid: a comparison of different experimental methods

    International Nuclear Information System (INIS)

    Kim, J.I.; Rhee, D.S.; Wimmer, H.; Buckau, G.; Klenze, R.

    1993-01-01

    The complexation of trivalent metal ions with humic acid has been studied at pH 4 and 5 in 0.1 M NaClO₄ by three different experimental methods: UV spectroscopy, time-resolved laser fluorescence spectroscopy (TRLFS) and ultrafiltration. Direct speciation of the metal ion and its humate complex during the reaction was performed by UV spectroscopy for Am(III) in the micromolar concentration range and by TRLFS for Cm(III) in the nanomolar concentration range. Ultrafiltration with the smallest filter pore size (ca. 1 nm) was used to separate the uncomplexed metal ion from its complexed species. The concentrations of both metal ion and humic acid were varied such that the effective functional groups of the humic acid became loaded with metal ions from 1% to nearly 100%. The loading capacity of the humic acid for the trivalent metal ion, determined separately at each pH, is introduced into the evaluation of the complexation constants. Varying the metal ion concentration from 6 × 10⁻⁸ mol/l to 4 × 10⁻⁵ mol/l does not show any effect on the complexation reaction. The three methods yield constants that are comparable with one another; their average value is log β = 6.24 ± 0.28 for the trivalent actinide ions. (orig.)
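
    The constants being compared follow the usual mass-action definition β = [MHA]/([M][HA]); a toy calculation (the concentrations below are invented for illustration, not the paper's data):

```python
import math

def log_beta(c_complex, c_free_metal, c_free_ligand):
    """log10 of the conditional complexation constant beta = [MHA]/([M][HA])."""
    return math.log10(c_complex / (c_free_metal * c_free_ligand))

# toy example: 90% of 1e-6 mol/l metal bound, 1e-6 mol/l free ligand sites
lb = log_beta(9e-7, 1e-7, 1e-6)
```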

  12. Organizational Agility and Complex Enterprise System Innovations: A Mixed Methods Study of the Effects of Enterprise Systems on Organizational Agility

    Science.gov (United States)

    Kharabe, Amol T.

    2012-01-01

    Over the last two decades, firms have operated in "increasingly" accelerated "high-velocity" dynamic markets, which require them to become "agile." During the same time frame, firms have increasingly deployed complex enterprise systems--large-scale packaged software "innovations" that integrate and automate…

  13. Cross-linking mass spectrometry identifies new interfaces of Augmin required to localise the γ-tubulin ring complex to the mitotic spindle

    Directory of Open Access Journals (Sweden)

    Jack W. C. Chen

    2017-05-01

    Full Text Available The hetero-octameric protein complex Augmin recruits the γ-tubulin ring complex (γ-TuRC) to pre-existing microtubules (MTs) to generate branched MTs during mitosis, facilitating robust spindle assembly. However, despite a recent partial reconstitution of the human Augmin complex in vitro, the molecular basis of this recruitment remains unclear. Here, we used immuno-affinity purification of in vivo Augmin from Drosophila and cross-linking/mass spectrometry to identify distance restraints between residues within the eight Augmin subunits, in the absence of any other structural information. The results allowed us to predict potential interfaces between Augmin and γ-TuRC. We tested these predictions biochemically and in the Drosophila embryo, demonstrating that specific regions of the Augmin subunits Dgt3, Dgt5 and Dgt6 all directly bind the γ-TuRC protein Dgp71WD and are required for the accumulation of γ-TuRC, but not Augmin, at the mitotic spindle. This study therefore substantially increases our understanding of the molecular mechanisms underpinning MT-dependent MT nucleation.

  14. A Porosity Method to Describe Complex 3D-Structures Theory and Application to an Explosion

    Directory of Open Access Journals (Sweden)

    M.-F. Robbe

    2006-01-01

    Full Text Available A theoretical method was developed to describe the influence of structures of complex shape on a transient fluid flow without meshing the structures. Structures are considered as solid pores inside the fluid and act as obstacles for the flow. The method was specifically adapted to fast transient cases. The porosity method was applied to the simulation of a Hypothetical Core Disruptive Accident in a small-scale replica of a Liquid Metal Fast Breeder Reactor. A 2D-axisymmetric simulation of the MARS test was performed with the EUROPLEXUS code. Whereas the central internal structures of the mock-up could be described with a classical shell model, the influence of the 3D peripheral structures was taken into account with the porosity method.

  15. Visual Literacy: Does It Enhance Leadership Abilities Required for the Twenty-First Century?

    Science.gov (United States)

    Bintz, Carol

    2016-01-01

    The twenty-first century hosts a well-established global economy, where leaders are required to have increasingly complex skills that include creativity, innovation, vision, relatability, critical thinking and well-honed communications methods. The experience gained by learning to be visually literate includes the ability to see, observe, analyze,…

  16. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    Science.gov (United States)

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
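
    History matching discards input regions where an implausibility measure is large; emulating the variance means the emulator's variance estimate enters the denominator alongside observation and model-discrepancy variances. A minimal univariate sketch (names and the conventional cutoff of 3 are illustrative assumptions):

```python
import math

def implausibility(z_obs, em_mean, em_var, obs_var, disc_var):
    """I(x) = |z - E[f(x)]| / sqrt(Var_emulator + Var_obs + Var_discrepancy)."""
    return abs(z_obs - em_mean) / math.sqrt(em_var + obs_var + disc_var)

def non_implausible(z_obs, candidates, obs_var, disc_var, cutoff=3.0):
    """Keep inputs x whose implausibility is below the conventional cutoff."""
    return [x for (x, mean, var) in candidates
            if implausibility(z_obs, mean, var, obs_var, disc_var) < cutoff]
```

    Each wave of history matching re-emulates the model on the surviving (non-implausible) region, which is how the space of 22 inputs is progressively cut down.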

  17. Improved Dynamic Analysis method for quantitative PIXE and SXRF element imaging of complex materials

    International Nuclear Information System (INIS)

    Ryan, C.G.; Laird, J.S.; Fisher, L.A.; Kirkham, R.; Moorhead, G.F.

    2015-01-01

    The Dynamic Analysis (DA) method in the GeoPIXE software provides a rapid tool to project quantitative element images from PIXE and SXRF imaging event data both for off-line analysis and in real-time embedded in a data acquisition system. Initially, it assumes uniform sample composition, background shape and constant model X-ray relative intensities. A number of image correction methods can be applied in GeoPIXE to correct images to account for chemical concentration gradients, differential absorption effects, and to correct images for pileup effects. A new method, applied in a second pass, uses an end-member phase decomposition obtained from the first pass, and DA matrices determined for each end-member, to re-process the event data with each pixel treated as an admixture of end-member terms. This paper describes the new method and demonstrates through examples and Monte-Carlo simulations how it better tracks spatially complex composition and background shape while still benefitting from the speed of DA.

  18. Improved Dynamic Analysis method for quantitative PIXE and SXRF element imaging of complex materials

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, C.G., E-mail: chris.ryan@csiro.au; Laird, J.S.; Fisher, L.A.; Kirkham, R.; Moorhead, G.F.

    2015-11-15

    The Dynamic Analysis (DA) method in the GeoPIXE software provides a rapid tool to project quantitative element images from PIXE and SXRF imaging event data both for off-line analysis and in real-time embedded in a data acquisition system. Initially, it assumes uniform sample composition, background shape and constant model X-ray relative intensities. A number of image correction methods can be applied in GeoPIXE to correct images to account for chemical concentration gradients, differential absorption effects, and to correct images for pileup effects. A new method, applied in a second pass, uses an end-member phase decomposition obtained from the first pass, and DA matrices determined for each end-member, to re-process the event data with each pixel treated as an admixture of end-member terms. This paper describes the new method and demonstrates through examples and Monte-Carlo simulations how it better tracks spatially complex composition and background shape while still benefitting from the speed of DA.
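
    At its core, the DA projection applies a precomputed unmixing matrix to each pixel's spectrum to yield element concentrations; a deliberately simplified single-pass sketch (shapes and values invented; the real DA matrices also encode background shape and peak-overlap corrections):

```python
def da_project(da_matrix, spectra):
    """Project spectra (pixels x channels) to element maps via the DA matrix
    (elements x channels); plain nested loops, no NumPy, for clarity."""
    n_elem, n_chan = len(da_matrix), len(da_matrix[0])
    return [[sum(da_matrix[e][c] * px[c] for c in range(n_chan))
             for e in range(n_elem)] for px in spectra]
```

    Because the projection is a fixed linear operation per pixel, it can run in real time during acquisition; the second pass described above swaps in a blend of per-end-member matrices for each pixel.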

  19. Morbidity and mortality of complex spine surgery

    DEFF Research Database (Denmark)

    Karstensen, Sven; Bari, Tanvir; Gehrchen, Martin

    2016-01-01

    PURPOSE: This study aimed to determine the mortality and examine the incidence of morbidity in patients undergoing complex spinal surgery, including pediatric patients, and to validate the SAVES system in a European population. STUDY DESIGN: A prospective, consecutive cohort study … adverse events (AEs) … requiring revision. METHODS: All patients undergoing spinal surgery at an academic tertiary referral center in the study period were prospectively included. The newest version of the SAVES system was used, and a research coordinator collected all intraoperative and perioperative data prospectively. Once a week …

  20. Social cohesion through football: a quasi-experimental mixed methods design to evaluate a complex health promotion program

    Directory of Open Access Journals (Sweden)

    Kemp Lynn

    2010-10-01

    … also be solicited. Discussion: The complexity of the Football United program poses challenges for measurement and requires the study design to be responsive to the dynamic nature of the program and its context. Assessment of change is needed at multiple levels, drawing on mixed methods and multidisciplinary approaches in implementation and evaluation. Attention to these challenges has underpinned the design and methods of the Social Cohesion through Football study, which will use a unique and innovative combination of measures that have not previously been applied together in sport and social inclusion/cohesion program research.

  1. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    Science.gov (United States)

    Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.

    1987-01-01

    A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
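
    The two-multiplication property rests on mapping a + bi to the residue pair (a + jb, a - jb) modulo a Fermat number F, where j² ≡ -1 (mod F), so that multiplication becomes componentwise; a toy sketch with F = 257 = 2⁸ + 1 and j = 16 (16² = 256 ≡ -1 mod 257), illustrating the arithmetic rather than the paper's VLSI design:

```python
F = 257   # Fermat number 2^8 + 1
J = 16    # 16^2 = 256 = -1 (mod 257): J plays the role of the imaginary unit

def to_qrns(a, b):
    """Map the complex integer a + bi to its quadratic-residue pair mod F."""
    return ((a + J * b) % F, (a - J * b) % F)

def from_qrns(z1, z2):
    """Invert the mapping: recover (a, b) from the residue pair."""
    a = ((z1 + z2) * pow(2, -1, F)) % F
    b = ((z1 - z2) * pow(2 * J, -1, F)) % F
    return a, b

def cmul(a, b, c, d):
    """(a + bi)(c + di) using only two modular integer multiplications."""
    p1, p2 = to_qrns(a, b)
    q1, q2 = to_qrns(c, d)
    return from_qrns((p1 * q1) % F, (p2 * q2) % F)
```

    For example, (3 + 4i)(2 + 5i) = -14 + 23i, and cmul(3, 4, 2, 5) returns (243, 23), since -14 ≡ 243 (mod 257). The modular inverse pow(x, -1, F) requires Python 3.8+.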

  2. Method for synthesizing metal bis(borano) hypophosphite complexes

    Science.gov (United States)

    Cordaro, Joseph G.

    2013-06-18

    The present invention describes the synthesis of a family of metal bis(borano) hypophosphite complexes. One procedure described in detail is the synthesis of complexes beginning from phosphorus trichloride and sodium borohydride. Temperature, solvent, concentration, and atmosphere are all critical to ensure product formation. In the case of sodium bis(borano) hypophosphite, hydrogen gas was evolved upon heating at temperatures above 150 °C. Included in this family of materials are the salts of the alkali metals Li, Na and K, and those of the alkaline earth metals Mg and Ca. Hydrogen storage materials are possible. In particular the lithium salt, Li[PH₂(BH₃)₂], theoretically would contain nearly 12 wt % hydrogen. Analytical data for product characterization and thermal properties are given.
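
    The quoted ~12 wt % hydrogen content of Li[PH₂(BH₃)₂] can be checked from standard atomic masses; a back-of-the-envelope sketch (rounded IUPAC atomic weights):

```python
# rounded IUPAC atomic weights, g/mol
masses = {"Li": 6.94, "P": 30.97, "B": 10.81, "H": 1.008}

# Li[PH2(BH3)2]: 1 Li, 1 P, 2 B and 2 + 2*3 = 8 H atoms
total = masses["Li"] + masses["P"] + 2 * masses["B"] + 8 * masses["H"]
h_fraction = 8 * masses["H"] / total   # hydrogen mass fraction, ~0.119
```

    This gives roughly 11.9 wt % hydrogen, consistent with the "nearly 12 wt %" figure.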

  3. Method

    Directory of Open Access Journals (Sweden)

    Ling Fiona W.M.

    2017-01-01

    Full Text Available Rapid prototyping of microchannels has gained considerable attention from researchers along with the rapid development of microfluidic technology. Conventional methods carry several disadvantages, such as high cost, long fabrication time, high operating pressure and temperature, and the expertise required to operate the equipment. In this work, a new method adapting xurography is introduced to replace the conventional fabrication of microchannels. The novelty of this study is replacing the adhesion film with a clear plastic film, used to cut the design of the microchannel, as this material is more suitable for fabricating complex microchannel designs. The microchannel was then molded using polydimethylsiloxane (PDMS) and bonded with a clean glass slide to produce a closed microchannel. The microchannel produced had a clean edge, indicating that a good master mold was produced with the cutting plotter, and the bonding between the PDMS and glass was good, with no leakage observed. The materials used in this method are cheap and the total time consumed is less than 5 hours, making this method suitable for rapid prototyping of microchannels.

  4. On the complexity of a combined homotopy interior method for convex programming

    Science.gov (United States)

    Yu, Bo; Xu, Qing; Feng, Guochen

    2007-03-01

    In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex and convex programming under weaker conditions, without assuming the logarithmic barrier function to be strictly convex or the solution set to be bounded. It was proven that a smooth interior path exists from an interior point of the feasible set to a K-K-T point of the problem. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot. However, so far there has been no result on their complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, using a piecewise technique and under commonly used conditions, polynomiality of a combined homotopy interior point method is established for convex nonlinear programming.
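
    Although the combined homotopy path is not the classical central path, the idea of an interior pathway can be seen in a one-variable log-barrier example: minimizing x² subject to x ≥ 1 through f_μ(x) = x² - μ·ln(x - 1) gives the minimizer x(μ) = (1 + √(1 + 2μ))/2, which tends to the constrained solution x* = 1 as μ → 0 (a classical illustration, not the paper's homotopy):

```python
import math

def barrier_minimizer(mu):
    """Minimizer of x^2 - mu*ln(x - 1): stationarity gives 2x(x - 1) = mu, x > 1."""
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

# follow the interior path as the barrier parameter shrinks toward zero
path = [barrier_minimizer(mu) for mu in (1.0, 0.1, 0.01, 1e-6)]
```

    The iterates stay strictly interior (x > 1) for every μ > 0 and approach the constrained optimum monotonically, which is the behavior path-following complexity analyses quantify.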

  5. Development and validation of a multi-locus DNA metabarcoding method to identify endangered species in complex samples.

    Science.gov (United States)

    Arulandhu, Alfred J; Staats, Martijn; Hagelaar, Rico; Voorhuijzen, Marleen M; Prins, Theo W; Scholtens, Ingrid; Costessi, Adalberto; Duijsings, Danny; Rechenmann, François; Gaspar, Frédéric B; Barreto Crespo, Maria Teresa; Holst-Jensen, Arne; Birck, Matthew; Burns, Malcolm; Haynes, Edward; Hochegger, Rupert; Klingl, Alexander; Lundberg, Lisa; Natale, Chiara; Niekamp, Hauke; Perri, Elena; Barbante, Alessandra; Rosec, Jean-Philippe; Seyfarth, Ralf; Sovová, Tereza; Van Moorleghem, Christoff; van Ruth, Saskia; Peelen, Tamara; Kok, Esther

    2017-10-01

    DNA metabarcoding provides great potential for species identification in complex samples such as food supplements and traditional medicines. Such a method would aid Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) enforcement officers to combat wildlife crime by preventing illegal trade of endangered plant and animal species. The objective of this research was to develop a multi-locus DNA metabarcoding method for forensic wildlife species identification and to evaluate the applicability and reproducibility of this approach across different laboratories. A DNA metabarcoding method was developed that makes use of 12 DNA barcode markers that have demonstrated universal applicability across a wide range of plant and animal taxa and that facilitate the identification of species in samples containing degraded DNA. The DNA metabarcoding method was developed based on Illumina MiSeq amplicon sequencing of well-defined experimental mixtures, for which a bioinformatics pipeline with user-friendly web-interface was developed. The performance of the DNA metabarcoding method was assessed in an international validation trial by 16 laboratories, in which the method was found to be highly reproducible and sensitive enough to identify species present in a mixture at 1% dry weight content. The advanced multi-locus DNA metabarcoding method assessed in this study provides reliable and detailed data on the composition of complex food products, including information on the presence of CITES-listed species. The method can provide improved resolution for species identification, while verifying species with multiple DNA barcodes contributes to an enhanced quality assurance. © The Authors 2017. Published by Oxford University Press.

  6. Mission from Mars - a method for exploring user requirements for children in a narrative space

    DEFF Research Database (Denmark)

    Dindler, Christian; Ludvigsen, Martin; Lykke-Olesen, Andreas

    2005-01-01

    In this paper a particular design method is proposed as a supplement to existing descriptive approaches to current-practice studies, especially suitable for gathering requirements for the design of children's technology. The Mission from Mars method was applied during the design of an electronic school bag (eBag). The three-hour collaborative session provides first-hand insight into children's practices in a fun and intriguing way. The method is proposed as a supplement to existing descriptive design methods for interaction design and children.

  7. Multivariate methods in nuclear waste remediation: Needs and applications

    International Nuclear Information System (INIS)

    Pulsipher, B.A.

    1992-05-01

    The United States Department of Energy (DOE) has developed a strategy for nuclear waste remediation and environmental restoration at several major sites across the country. Nuclear and hazardous wastes are found in underground storage tanks, containment drums, soils, and facilities. Given the many possible contaminants and the complexities of sampling and analysis, multivariate methods are directly applicable. However, their effective application will require a greater ability to communicate methods and results to a non-statistician community. Moreover, more flexible multivariate methods may be required to accommodate inherent sampling and analysis limitations. This paper outlines multivariate applications in the context of selected DOE environmental restoration activities and identifies several perceived needs.

  8. Theoretical study of the electronic structure of f-element complexes by quantum chemical methods

    International Nuclear Information System (INIS)

    Vetere, V.

    2002-09-01

    This thesis presents comparative studies of the chemical properties of molecular complexes containing trivalent lanthanide or actinide cations, in the context of nuclear waste disposal. More precisely, our aim was a quantum chemical analysis of the metal-ligand bonding in such species. Various theoretical approaches were compared for the inclusion of correlation (density functional theory, multiconfigurational methods) and of relativistic effects (scalar-relativistic and two-component Hamiltonians, relativistic pseudopotentials). The performance of these methods was checked by comparing computed structural properties with published experimental data on small model systems: lanthanide and actinide trihalides and X₃M-L species (X = F, Cl; M = La, Nd, U; L = NH₃, acetonitrile, CO). We have thus shown the good performance of density functionals combined with a quasi-relativistic method, as well as of gradient-corrected functionals associated with relativistic pseudopotentials. In contrast, functionals including some part of exact exchange are less reliable in reproducing experimental trends, and we have given a possible explanation for this result. Then, a detailed analysis of the bonding allowed us to interpret the discrepancies observed in the structural properties of uranium and lanthanide complexes, based on a covalent contribution to the bonding in the case of uranium(III) which does not exist in the lanthanide(III) homologues. Finally, we examined more sizeable systems, closer to experimental species, to analyse the influence of the coordination number, the counter-ions and the oxidation state of uranium on the metal-ligand bonding. (author)

  9. Methods for the analysis of complex fluorescence decays: sum of Becquerel functions versus sum of exponentials

    International Nuclear Information System (INIS)

    Menezes, Filipe; Fedorov, Alexander; Baleizão, Carlos; Berberan-Santos, Mário N; Valeur, Bernard

    2013-01-01

    Ensemble fluorescence decays are usually analyzed with a sum of exponentials. However, broad continuous distributions of lifetimes, either unimodal or multimodal, occur in many situations. A simple and flexible fitting function for these cases that encompasses the exponential is the Becquerel function. In this work, the applicability of the Becquerel function for the analysis of complex decays of several kinds is tested. For this purpose, decays of mixtures of four different fluorescence standards (binary, ternary and quaternary mixtures) are measured and analyzed. For binary and ternary mixtures, the expected sum of narrow distributions is well recovered from the Becquerel functions analysis, if the correct number of components is used. For ternary mixtures, however, satisfactory fits are also obtained with a number of Becquerel functions smaller than the true number of fluorophores in the mixture, at the expense of broadening the lifetime distributions of the fictitious components. The quaternary mixture studied is well fitted with both a sum of three exponentials and a sum of two Becquerel functions, showing the inevitable loss of information when the number of components is large. Decays of a fluorophore in a heterogeneous environment, known to be represented by unimodal and broad continuous distributions (as previously obtained by the maximum entropy method), are also measured and analyzed. It is concluded that these distributions can be recovered by the Becquerel function method with an accuracy similar to that of the much more complex maximum entropy method. It is also shown that the polar (or phasor) plot is not always helpful for ascertaining the degree (and kind) of complexity of a fluorescence decay. (paper)
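
    One common parameterization of the Becquerel (compressed-hyperbola) decay in this line of work is I(t) = I0·[1 + (1 - β)t/τ]^(-1/(1-β)), which tends to the exponential I0·e^(-t/τ) as β → 1; a sketch assuming this form (parameter names are mine):

```python
import math

def becquerel(t, tau, beta, i0=1.0):
    """Compressed-hyperbola decay; reduces to i0*exp(-t/tau) as beta -> 1."""
    if abs(1.0 - beta) < 1e-12:
        return i0 * math.exp(-t / tau)
    return i0 * (1.0 + (1.0 - beta) * t / tau) ** (-1.0 / (1.0 - beta))
```

    Fitting a sum of such terms thus interpolates smoothly between a multi-exponential model (all β near 1) and broad lifetime distributions (β well below 1).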

  10. Investigation of anticancer properties of caffeinated complexes via computational chemistry methods

    Science.gov (United States)

    Sayin, Koray; Üngördü, Ayhan

    2018-03-01

    Computational investigations were performed for 1,3,7-trimethylpurine-2,6-dione, 3,7-dimethylpurine-2,6-dione, and their Ru(II) and Os(III) complexes. The B3LYP/6-311++G(d,p)(LANL2DZ) level was used in the numerical calculations. Geometric parameters and the IR, ¹H, ¹³C and ¹⁵N NMR spectra were examined in detail. Additionally, contour diagrams of the frontier molecular orbitals (FMOs), molecular electrostatic potential (MEP) maps, MEP contours and some quantum chemical descriptors were used to determine reactivity rankings and active sites. The electron density on the surface was similar in the studied complexes. The quantum chemical descriptors indicate that the anticancer activity of the complexes is higher than that of cisplatin and of their ligands. Additionally, molecular docking calculations in water were performed between the related complexes and a protein (ID: 3WZE); the Os complex showed the strongest interaction, with an interaction energy of 342.9 kJ/mol.
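
    Quantum chemical descriptors of the kind mentioned above are commonly derived from frontier orbital energies via Koopmans-type estimates I ≈ -E(HOMO) and A ≈ -E(LUMO); a sketch under that assumption (the orbital energies in the example are invented, not the paper's values):

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Conceptual-DFT descriptors from frontier orbital energies (eV)."""
    i, a = -e_homo, -e_lumo          # ionization energy, electron affinity
    chi = (i + a) / 2                # electronegativity
    eta = (i - a) / 2                # chemical hardness
    omega = chi ** 2 / (2 * eta)     # electrophilicity index
    return {"chi": chi, "eta": eta, "omega": omega}
```

    A lower hardness η (smaller HOMO-LUMO gap) and higher electrophilicity ω are the usual computational proxies for higher chemical, and by extension biological, reactivity.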

  11. Curvilinear immersed boundary method for simulating fluid structure interaction with complex 3D rigid bodies

    Science.gov (United States)

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2008-08-01

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A numerical method for solving the 3D unsteady incompressible Navier-Stokes equations in curvilinear domains with complex immersed boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach, and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex-induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions the FSI algorithm is unconditionally unstable even when strong coupling FSI is employed. For such cases, however, combining the strong coupling iteration with under-relaxation in conjunction with Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure, as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid, determine the stability and convergence of the FSI algorithm.
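
    The Aitken acceleration mentioned above is commonly implemented as dynamic under-relaxation of the partitioned fixed-point iteration d_{k+1} = G(d_k) on the interface displacement; a minimal scalar sketch (a standalone illustration, not the CURVIB implementation):

```python
def aitken_fsi(G, d0, tol=1e-10, omega=0.5, max_iter=100):
    """Fixed-point iteration d = G(d) with Aitken dynamic under-relaxation."""
    d, r_prev = d0, None
    for _ in range(max_iter):
        r = G(d) - d                      # interface residual
        if abs(r) < tol:
            break
        if r_prev is not None and r != r_prev:
            omega = -omega * r_prev / (r - r_prev)   # Aitken update of omega
        d += omega * r                    # relaxed update of the interface value
        r_prev = r
    return d
```

    Adapting omega from successive residuals is what stabilizes strongly coupled iterations that diverge with a fixed relaxation factor, e.g. at large added-mass ratios.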

  12. Accuracy of the DLPNO-CCSD(T) method for non-covalent bond dissociation enthalpies from coinage metal cation complexes

    KAUST Repository

    Minenkov, Yury; Chermak, Edrisse; Cavallo, Luigi

    2015-01-01

    The performance of the domain-based local pair natural orbital coupled-cluster method, DLPNO-CCSD(T), has been tested for its ability to reproduce experimental gas phase ligand dissociation enthalpies in a series of Cu+, Ag+ and Au+ complexes. For 33 Cu+ non-covalent ligand dissociation enthalpies, all-electron calculations result in a MUE below 2.2 kcal/mol, although a MSE of 1.4 kcal/mol indicates systematic underestimation of the experimental values. Inclusion of scalar relativistic effects for Cu, either via an effective core potential (ECP) or the Douglas-Kroll-Hess Hamiltonian, reduces the MUE below 1.7 kcal/mol and the MSE to -1.0 kcal/mol. For 24 Ag+ non-covalent ligand dissociation enthalpies the DLPNO-CCSD(T) method results in a mean unsigned error (MUE) below 2.1 kcal/mol and a vanishing mean signed error (MSE). For 15 Au+ non-covalent ligand dissociation enthalpies the DLPNO-CCSD(T) method gives larger MUE and MSE, equal to 3.2 and 1.7 kcal/mol, which might be related to the poor precision of the experimental measurements. Overall, for the combined dataset of 72 coinage metal ion complexes, DLPNO-CCSD(T) results in a MUE below 2.2 kcal/mol and an almost vanishing MSE. For comparison with computationally cheaper density functional theory (DFT) methods, the routinely used M06 functional results in a MUE and MSE equal to 3.6 and -1.7 kcal/mol. Results converge already at cc-pVTZ-quality basis sets, making highly accurate DLPNO-CCSD(T) estimates affordable for routine single-point calculations on large transition metal complexes of > 100 atoms.

  13. Accuracy of the DLPNO-CCSD(T) method for non-covalent bond dissociation enthalpies from coinage metal cation complexes

    KAUST Repository

    Minenkov, Yury

    2015-08-27

    The performance of the domain-based local pair natural orbital coupled-cluster method, DLPNO-CCSD(T), has been tested for its ability to reproduce experimental gas phase ligand dissociation enthalpies in a series of Cu+, Ag+ and Au+ complexes. For 33 Cu+ non-covalent ligand dissociation enthalpies, all-electron calculations result in a MUE below 2.2 kcal/mol, although a MSE of 1.4 kcal/mol indicates systematic underestimation of the experimental values. Inclusion of scalar relativistic effects for Cu, either via an effective core potential (ECP) or the Douglas-Kroll-Hess Hamiltonian, reduces the MUE below 1.7 kcal/mol and the MSE to -1.0 kcal/mol. For 24 Ag+ non-covalent ligand dissociation enthalpies the DLPNO-CCSD(T) method results in a mean unsigned error (MUE) below 2.1 kcal/mol and a vanishing mean signed error (MSE). For 15 Au+ non-covalent ligand dissociation enthalpies the DLPNO-CCSD(T) method gives larger MUE and MSE, equal to 3.2 and 1.7 kcal/mol, which might be related to the poor precision of the experimental measurements. Overall, for the combined dataset of 72 coinage metal ion complexes, DLPNO-CCSD(T) results in a MUE below 2.2 kcal/mol and an almost vanishing MSE. For comparison with computationally cheaper density functional theory (DFT) methods, the routinely used M06 functional results in a MUE and MSE equal to 3.6 and -1.7 kcal/mol. Results converge already at cc-pVTZ-quality basis sets, making highly accurate DLPNO-CCSD(T) estimates affordable for routine single-point calculations on large transition metal complexes of > 100 atoms.
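
    The MUE and MSE statistics used throughout these benchmarks are simple aggregates of the signed deviations between calculated and experimental values; a minimal sketch (sample numbers invented):

```python
def error_stats(calc, expt):
    """Mean unsigned error and mean signed error of calculated vs experimental."""
    diffs = [c - e for c, e in zip(calc, expt)]
    n = len(diffs)
    mue = sum(abs(d) for d in diffs) / n
    mse = sum(diffs) / n
    return mue, mse
```

    A near-zero MSE alongside a non-zero MUE, as reported for the combined dataset, indicates scatter without systematic over- or underestimation.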

  14. Novel encapsulation method for probiotics using an interpolymer complex in supercritical carbon dioxide

    CSIR Research Space (South Africa)

    Moolman, FS

    2006-10-01

    Full Text Available on Bioencapsulation, Lausanne, CH. Oct. 6-7, 2006 O5-3 – page 1 A novel encapsulation method for probiotics using an interpolymer complex in supercritical carbon dioxide F.S. Moolman1, P.W. Labuschagne1, M.S. Thantsha2, T.L. van der Merwe1, H. Rolfes2 and T....cloete@up.ac.za 1. Introduction Evidence for the health benefits of probiotics is increasing. These benefits include protection against pathogenic bacteria, stimulation of the immune system, reduction in carcinogenesis, vitamin production and degradation...

  15. Daily radiotoxicological supervision of personnel at the Pierrelatte industrial complex. Methods and results

    International Nuclear Information System (INIS)

    Chalabreysse, Jacques.

    1978-05-01

    A 13-year experience gained from the daily radiotoxicological supervision of personnel at the PIERRELATTE industrial complex is presented. The study is divided into two parts. Part one is theoretical: a bibliographical synthesis of previously scattered documents and publications, providing a homogeneous survey of the literature on the subject. Part two reviews the experience gained in the professional environment: laboratory measurements and analyses (development of methods and their daily application); mathematical formulae to answer the first questions that arise for an individual liable to be contaminated; and the results obtained at PIERRELATTE

  16. Development of complex electrokinetic decontamination method for soil contaminated with uranium

    International Nuclear Information System (INIS)

    Kim, Gye-Nam; Kim, Seung-Soo; Park, Hye-Min; Kim, Wan-Suk; Moon, Jei-Kwon; Hyeon, Jay-Hyeok

    2012-01-01

    A 520 L complex electrokinetic soil decontamination unit was manufactured to clean up uranium-contaminated soils from Korean nuclear facilities. Decontamination experiments were carried out to remove more than 95% of the uranium from the radioactive soil through combined soil washing and electrokinetic technology. To reduce the generation of large quantities of metal oxides at the cathode, a pH controller kept the pH of the electrolyte waste solution between 0.5 and 1, favoring the formation of UO2(2+). More than 80% of the metal oxides were removed through pre-washing; the electrolyte waste solution was circulated by a pump, and a metal oxide separator filtered out the metal oxide particles. Soil washing as a pre-treatment removed 80–85% of the uranium from the soil. When the initial uranium concentration of the soil was 21.7 Bq/g, the required electrokinetic decontamination time was 25 days. When the initial 238U concentration in the soil was higher, a longer decontamination time was needed, but the removal rate of 238U from the soil was also higher.
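
As a rough illustration of how the staged removal figures in this abstract combine, the sketch below assumes that residual fractions multiply across sequential treatment stages; the electrokinetic-stage removal fraction is an invented placeholder chosen so that the overall removal reaches the 95% target.

```python
# Illustrative mass balance for a two-stage decontamination:
# soil washing (pre-treatment) followed by electrokinetic treatment.
# The initial activity and washing removal come from the abstract;
# the multiplicative combination rule and the electrokinetic-stage
# removal fraction (0.75) are assumptions for illustration only.

def residual_activity(initial_bq_g, stage_removals):
    """Activity (Bq/g) left after applying each stage's removal fraction."""
    activity = initial_bq_g
    for removal in stage_removals:
        activity *= (1.0 - removal)
    return activity

initial = 21.7  # Bq/g, initial uranium concentration from the abstract
final = residual_activity(initial, [0.80, 0.75])  # washing, then electrokinetic
overall = 1 - final / initial
print(f"residual: {final:.2f} Bq/g, overall removal: {overall:.0%}")
```

Under these assumed fractions the overall removal works out to 95%, matching the stated decontamination target.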

  17. QMU as an approach to strengthening the predictive capabilities of complex models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne.; Boggs, Paul T.; Grace, Matthew D.

    2010-09-01

    Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear, and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation (M&S) technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and relative departure from the classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems (e.g. the Internet, electrical distribution grids), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third
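
The margin-and-uncertainty bookkeeping at the heart of QMU can be sketched in a few lines: a margin (best estimate minus requirement) is compared against the quantified uncertainty. The requirement, estimate, and uncertainty figures below are invented for illustration; the report itself gives no numbers.

```python
# Minimal sketch of the QMU confidence ratio M/U. A ratio above 1
# suggests the requirement is met with margin exceeding the quantified
# uncertainty. All numeric values are hypothetical placeholders.

def qmu_ratio(best_estimate, requirement, uncertainty):
    """Margin-to-uncertainty ratio for a 'must exceed requirement' metric."""
    margin = best_estimate - requirement
    return margin / uncertainty

# e.g. a performance metric that must stay above 100 units
ratio = qmu_ratio(best_estimate=130.0, requirement=100.0, uncertainty=20.0)
print(f"M/U = {ratio:.2f}")
```

The project's point is that for systems like the Internet, the `requirement` input to such a calculation may not exist a priori, which is why QMU is being considered at design time rather than only as an after-the-fact check.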

  18. A method for the determination of ascorbic acid using the iron(II)-pyridine-dimethylglyoxime complex

    International Nuclear Information System (INIS)

    Arya, S. P.; Mahajan, M.

    1998-01-01

    A simple and rapid spectrophotometric method for the determination of ascorbic acid is proposed. Ascorbic acid reduces iron(III) to iron(II), which forms a red-colored complex with dimethylglyoxime in the presence of pyridine. The absorbance of the resulting solution is measured at 514 nm, and a linear relationship between absorbance and ascorbic acid concentration is observed up to 14 μg ml⁻¹. Studies on the interference of substances usually associated with ascorbic acid have been carried out, and the applicability of the method has been tested by analysing pharmaceutical preparations of vitamin C
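
A linear absorbance-concentration relationship like the one described in this abstract is typically exploited through a least-squares calibration line, from which unknown samples are read back. The standards and absorbance readings below are hypothetical, invented only to illustrate the procedure.

```python
# Least-squares calibration line for a spectrophotometric assay:
# absorbance (at 514 nm) vs analyte concentration, linear in the
# working range. Calibration data are hypothetical placeholders.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

conc = [2.0, 4.0, 8.0, 12.0]        # ug/ml, hypothetical standards
absb = [0.11, 0.21, 0.42, 0.62]     # hypothetical absorbance readings
m, b = fit_line(conc, absb)

# Read an unknown sample's concentration back from its absorbance
unknown_abs = 0.35
print(f"estimated concentration: {(unknown_abs - b) / m:.1f} ug/ml")
```

In practice the fit is only trusted inside the verified linear range (here, up to 14 μg ml⁻¹), and samples above it are diluted before measurement.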

  19. Organisational reviews - requirements, methods and experience. Progress report 2006

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, T.; Oedewald, P.; Wahlstroem, B. [VTT, Technical Research Centre of Finland (Finland)]; Rollenhagen, C.; Kahlbom, U. [Maelardalen University (FI)]

    2007-04-15

    Organisational reviews are important instruments in the continuous quest for improved performance. In the nuclear field there has been increasing regulatory interest in organisational performance, because incidents and accidents often point to organisational deficiencies as major precursors. Many methods for organisational reviews have been proposed, but they are mostly based on ad hoc approaches to specific problems. The absence of well-established techniques for organisational reviews has already been shown to cause discussions and controversies at different levels. The aim of the OrRe project is to collect the experience from organisational reviews carried out so far and to reflect it in a theoretical model of organisational performance. Furthermore, the project aims to reflect on the criteria defining the scope and content of organisational reviews. Finally, recommendations will be made to guide people participating in organisational reviews. This progress report describes regulatory practices in Finland and Sweden, together with case examples of organisational reviews and assessments in both countries. Some issues of concern are raised and an outline for the next year's work is proposed. The issues of concern include the sufficient depth of the assessment, the competence required in assessments, data and criteria problems, the definition of the boundaries of the system to be assessed, and the internal support and organisational maturity required for successful assessments. Finally, plans for the next year's work are outlined. (au)

  20. Organisational reviews - requirements, methods and experience. Progress report 2006

    International Nuclear Information System (INIS)

    Reiman, T.; Oedewald, P.; Wahlstroem, B.; Rollenhagen, C.; Kahlbom, U.

    2007-04-01

    Organisational reviews are important instruments in the continuous quest for improved performance. In the nuclear field there has been increasing regulatory interest in organisational performance, because incidents and accidents often point to organisational deficiencies as major precursors. Many methods for organisational reviews have been proposed, but they are mostly based on ad hoc approaches to specific problems. The absence of well-established techniques for organisational reviews has already been shown to cause discussions and controversies at different levels. The aim of the OrRe project is to collect the experience from organisational reviews carried out so far and to reflect it in a theoretical model of organisational performance. Furthermore, the project aims to reflect on the criteria defining the scope and content of organisational reviews. Finally, recommendations will be made to guide people participating in organisational reviews. This progress report describes regulatory practices in Finland and Sweden, together with case examples of organisational reviews and assessments in both countries. Some issues of concern are raised and an outline for the next year's work is proposed. The issues of concern include the sufficient depth of the assessment, the competence required in assessments, data and criteria problems, the definition of the boundaries of the system to be assessed, and the internal support and organisational maturity required for successful assessments. Finally, plans for the next year's work are outlined. (au)