WorldWideScience

Sample records for method requires complex

  1. A Fluorine-18 Radiolabeling Method Enabled by Rhenium(I) Complexation Circumvents the Requirement of Anhydrous Conditions.

    Science.gov (United States)

    Klenner, Mitchell A; Pascali, Giancarlo; Zhang, Bo; Sia, Tiffany R; Spare, Lawson K; Krause-Heuer, Anwen M; Aldrich-Wright, Janice R; Greguric, Ivan; Guastella, Adam J; Massi, Massimiliano; Fraser, Benjamin H

    2017-05-11

Azeotropic distillation is typically required to achieve fluorine-18 radiolabeling during the production of positron emission tomography (PET) imaging agents. However, this time-consuming process also limits fluorine-18 incorporation, due to radioactive decay of the isotope and its adsorption to the drying vessel. Addressing these limitations, the fluorine-18 radiolabeling of a model rhenium(I) complex is reported here, which is significantly improved under conditions that do not require azeotropic drying. This work could open a route towards a simplified metal-mediated late-stage radiofluorination method, which would expand access to new PET and PET-optical probes. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented

  3. Lithography requirements in complex VLSI device fabrication

    International Nuclear Information System (INIS)

    Wilson, A.D.

    1985-01-01

    Fabrication of complex very large scale integration (VLSI) circuits requires continual advances in lithography to satisfy: decreasing minimum linewidths, larger chip sizes, tighter linewidth and overlay control, increasing topography to linewidth ratios, higher yield demands, increased throughput, harsher device processing, lower lithography cost, and a larger part number set with quick turn-around time. Where optical, electron beam, x-ray, and ion beam lithography can be applied to judiciously satisfy the complex VLSI circuit fabrication requirements is discussed and those areas that are in need of major further advances are addressed. Emphasis will be placed on advanced electron beam and storage ring x-ray lithography

  4. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of a linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value; the linear complexity is therefore generally given as an estimate. Because a linearization method calculates from the algorithm of the PRNG itself, it can instead determine the lower bound of the linear complexity.
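For reference, the Berlekamp-Massey algorithm mentioned in this abstract can be sketched in a few lines. The following is an illustrative Python implementation for binary sequences (a textbook version, not taken from the paper), returning the linear complexity L:

```python
def berlekamp_massey(bits):
    """Return the linear complexity L of a binary sequence (list of 0/1)."""
    n_bits = len(bits)
    c = [0] * n_bits  # current connection polynomial C(x)
    b = [0] * n_bits  # previous connection polynomial B(x)
    c[0] = b[0] = 1
    L, m = 0, -1
    for n in range(n_bits):
        # discrepancy between the predicted and the actual bit
        d = bits[n]
        for i in range(1, L + 1):
            d ^= c[i] & bits[n - i]
        if d:
            t = c[:]
            shift = n - m
            for i in range(n_bits - shift):
                c[i + shift] ^= b[i]
            if 2 * L <= n:
                L, m, b = n + 1 - L, n, t
    return L

# Example: 30 bits from the LFSR s[n] = s[n-1] XOR s[n-4]
# (primitive polynomial x^4 + x + 1), so L should be 4.
s = [1, 0, 0, 0]
for n in range(4, 30):
    s.append(s[n - 1] ^ s[n - 4])
print(berlekamp_massey(s))  # -> 4
```

Note that this is exactly the output-sequence-based approach the paper contrasts with: the result depends on the observed bits, whereas the linearization method works from the generator's algebraic description.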

  5. Information geometric methods for complexity

    Science.gov (United States)

    Felice, Domenico; Cafaro, Carlo; Mancini, Stefano

    2018-03-01

Research on the use of information geometry (IG) in modern physics has witnessed significant advances recently. In this review article, we report on the utilization of IG methods to define measures of complexity in both classical and, whenever available, quantum physical settings. A paradigmatic example of a dramatic change in complexity is given by phase transitions (PTs). Hence, we review both global and local aspects of PTs described in terms of the scalar curvature of the parameter manifold and the components of the metric tensor, respectively. We also report on the behavior of geodesic paths on the parameter manifold used to gain insight into the dynamics of PTs. Going further, we survey measures of complexity arising in the geometric framework. In particular, we quantify complexity of networks in terms of the Riemannian volume of the parameter space of a statistical manifold associated with a given network. We are also concerned with complexity measures that account for the interactions of a given number of parts of a system that cannot be described in terms of a smaller number of parts of the system. Finally, we investigate complexity measures of entropic motion on curved statistical manifolds that arise from a probabilistic description of physical systems in the presence of limited information. The Kullback-Leibler divergence, the distance to an exponential family and volumes of curved parameter manifolds are examples of essential IG notions exploited in our discussion of complexity. We conclude by discussing strengths, limits, and possible future applications of IG methods to the physics of complexity.
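As a minimal, self-contained illustration of one of the IG notions named in this abstract, the Kullback-Leibler divergence between two univariate Gaussians has a well-known closed form (the example below is chosen for illustration and is not from the review):

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    """Closed-form KL divergence D( N(mu1, s1^2) || N(mu2, s2^2) )."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # -> 0.0 (identical distributions)
print(kl_gauss(0.0, 1.0, 1.0, 1.0))  # -> 0.5
```

The divergence vanishes only when the two distributions coincide, which is what makes it usable as a (non-symmetric) notion of distance on a statistical manifold.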

  6. Minimum Energy Requirements in Complex Distillation Arrangements

    Energy Technology Data Exchange (ETDEWEB)

    Halvorsen, Ivar J.

    2001-07-01

Distillation is the most widely used industrial separation technology, and distillation units are responsible for a significant part of the total heat consumption in the world's process industry. In this work we focus on directly (fully thermally) coupled column arrangements for separation of multicomponent mixtures. These systems are also denoted Petlyuk arrangements, of which a particular implementation is the dividing wall column. Energy savings in the range of 20-40% have been reported with ternary feed mixtures. In addition to energy savings, such integrated units also have a potential for reduced capital cost, making them extra attractive. However, industrial use has been limited, and difficulties in design and control have been reported as the main reasons. Minimum energy results have only been available for ternary feed mixtures and sharp product splits. This motivates further research in this area, and this thesis will hopefully contribute to a better understanding of complex column systems. In the first part we derive the general analytic solution for minimum energy consumption in directly coupled columns for a multicomponent feed and any number of products. To our knowledge, this is a new contribution in the field. The basic assumptions are constant relative volatility, constant pressure and constant molar flows, and the derivation is based on Underwood's classical methods. An important conclusion is that the minimum energy consumption in a complex directly integrated multi-product arrangement is the same as for the most difficult split between any pair of the specified products when we consider the performance of a conventional two-product column. We also present the Vmin-diagram, a simple graphical tool for visualisation of minimum energy related to feed distribution. The Vmin-diagram provides a simple means to assess the detailed flow requirements for all parts of a complex directly coupled arrangement. The main purpose in


  8. Satisfying positivity requirement in the Beyond Complex Langevin approach

    Directory of Open Access Journals (Sweden)

    Wyrzykowski Adam

    2018-01-01

Full Text Available The problem of finding a positive distribution which corresponds to a given complex density is studied. By requiring that the moments of the positive distribution and of the complex density be equal, one can reduce the problem to solving the matching conditions. These conditions are a set of quadratic equations, so the Groebner basis method was used to find their solutions when restricted to a few lowest-order moments. For a Gaussian complex density, these approximate solutions are compared with the exact solution, which is known in this special case.

  9. Satisfying positivity requirement in the Beyond Complex Langevin approach

    Science.gov (United States)

Wyrzykowski, Adam; Ruba, Błażej

    2018-03-01

The problem of finding a positive distribution which corresponds to a given complex density is studied. By requiring that the moments of the positive distribution and of the complex density be equal, one can reduce the problem to solving the matching conditions. These conditions are a set of quadratic equations, so the Groebner basis method was used to find their solutions when restricted to a few lowest-order moments. For a Gaussian complex density, these approximate solutions are compared with the exact solution, which is known in this special case.

  10. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept forms a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. It applies a new metric to evaluate requirements complexity and a double sorting technique that evaluates the priority and complexity of each requirement. The method improves requirements correctness by identifying a larger number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.
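The double-sorting idea described above can be illustrated with a small sketch. The field names and the scores below are invented for illustration; the paper's actual complexity metric is not reproduced here:

```python
# Hypothetical requirements with a priority (higher = more important)
# and a complexity score from some metric (higher = more fault-prone).
requirements = [
    {"id": "R1", "priority": 2, "complexity": 7},
    {"id": "R2", "priority": 3, "complexity": 4},
    {"id": "R3", "priority": 3, "complexity": 9},
    {"id": "R4", "priority": 1, "complexity": 5},
]

# Double sort: review high-priority requirements first and, within the
# same priority, the most complex (most defect-prone) ones first.
review_order = sorted(requirements,
                      key=lambda r: (-r["priority"], -r["complexity"]))
print([r["id"] for r in review_order])  # -> ['R3', 'R2', 'R1', 'R4']
```

With restricted review resources, walking this list from the top concentrates effort where the method predicts the most defects.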

  11. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which captures system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  12. Scattering methods in complex fluids

    CERN Document Server

    Chen, Sow-Hsin

    2015-01-01

    Summarising recent research on the physics of complex liquids, this in-depth analysis examines the topic of complex liquids from a modern perspective, addressing experimental, computational and theoretical aspects of the field. Selecting only the most interesting contemporary developments in this rich field of research, the authors present multiple examples including aggregation, gel formation and glass transition, in systems undergoing percolation, at criticality, or in supercooled states. Connecting experiments and simulation with key theoretical principles, and covering numerous systems including micelles, micro-emulsions, biological systems, and cement pastes, this unique text is an invaluable resource for graduate students and researchers looking to explore and understand the expanding field of complex fluids.

  13. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune algorithm complex modeling approach is proposed, and simulated studies on the calibration of three multiple-output transducers are carried out using the developed complex modeling. The simulated and experimental results show that the immune algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.

  14. Continuum Level Density in Complex Scaling Method

    International Nuclear Information System (INIS)

    Suzuki, R.; Myo, T.; Kato, K.

    2005-01-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique

  15. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  16. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as to validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular Information Engineering (IE) deliverables with counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.
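A Function-Point-style size metric of the kind adapted here is essentially a weighted count over model elements. The element categories and weights below are invented purely for illustration and are not the paper's calibration:

```python
# Hypothetical weights per method-model element, analogous to the weights
# Function Point Analysis assigns to inputs, outputs, files and so on.
WEIGHTS = {"concept": 3, "relationship": 2, "technique": 5, "deliverable": 4}

def method_points(counts):
    """Weighted sum over the element counts of a method representation."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Counting the elements of some UML-like deliverable (numbers made up):
uml_like = {"concept": 12, "relationship": 20, "technique": 6, "deliverable": 3}
print(method_points(uml_like))  # -> 12*3 + 20*2 + 6*5 + 3*4 = 118
```

Two competing methods can then be compared on a single scalar, exactly as Function Points allow two systems to be compared on size.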

  17. Automated Derivation of Complex System Constraints from User Requirements

    Science.gov (United States)

    Foshee, Mark; Murey, Kim; Marsh, Angela

    2010-01-01

The Payload Operations Integration Center (POIC) located at the Marshall Space Flight Center has the responsibility of integrating US payload science requirements for the International Space Station (ISS). All payload operations must request ISS system resources so that the resource usage will be included in the ISS on-board execution timelines. The scheduling of resources and building of the timeline is performed using the Consolidated Planning System (CPS). The ISS resources are quite complex due to the large number of components that must be accounted for. The planners at the POIC simplify the process for Payload Developers (PDs) by providing them with the User Requirements Collection (URC), an application that has the basic functionality PDs need as well as a list of simplified resources. The planners maintain a mapping of the URC resources to the CPS resources. Manually converting a PD's science requirements from the simplified representation to the more complex CPS representation is a time-consuming and tedious process. The goal is to provide a software solution that allows the planners to build a mapping of the complex CPS constraints to the basic URC constraints and automatically convert the PD's requirements into system requirements during export to CPS.
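The core of such an export step is a table-driven expansion of each simplified resource into the system constraints it stands for. The sketch below is a deliberately simplified illustration; the resource names and constraint identifiers are invented, not POIC/CPS vocabulary:

```python
# Hypothetical mapping from a simplified URC resource name to the set of
# CPS system constraints it expands into (all identifiers invented here).
URC_TO_CPS = {
    "crew_time": ["CREW_AVAILABLE", "CREW_TRAINED"],
    "power_1kw": ["BUS_A_POWER", "THERMAL_MARGIN"],
}

def export_to_cps(urc_requirements):
    """Expand each simplified requirement into its CPS constraints."""
    cps = []
    for name in urc_requirements:
        cps.extend(URC_TO_CPS.get(name, [name]))  # pass unmapped names through
    return cps

print(export_to_cps(["crew_time", "power_1kw"]))
```

Keeping the mapping as data rather than code is what lets planners maintain it without changing the export software.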

  18. Methods for determination of extractable complex composition

    International Nuclear Information System (INIS)

    Sergievskij, V.V.

    1984-01-01

    Specific features and restrictions of main methods for determining the extractable complex composition by the distribution data (methods of equilibrium shift, saturation, mathematical models) are considered. Special attention is given to the solution of inverse problems with account for hydration effect on the activity of organic phase components. By example of the systems lithium halides-isoamyl alcohol, thorium nitrate-n-hexyl alcohol, mineral acids tri-n-butyl phosphate (TBP), metal nitrates (uranium lanthanides) - TBP the results on determining stoichiometry of extraction equilibria obtained by various methods are compared

  19. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

Sampling a subnet is an important topic of complex network research, and the sampling method influences the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can keep the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. The method is applicable when prior knowledge about the degree distribution of the original network is insufficient.
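Plain snowball sampling from random seeds, the baseline that RMSC builds on, can be sketched as follows (a simplified illustration, not the authors' RMSC procedure):

```python
import random

def snowball_sample(adj, n_seeds=2, rounds=2, seed=0):
    """Snowball sampling: start from randomly chosen seed nodes and
    repeatedly add all neighbours of the current sample.
    adj: {node: set(neighbours)} adjacency representation."""
    rng = random.Random(seed)
    sample = set(rng.sample(sorted(adj), n_seeds))  # random-sampling stage
    for _ in range(rounds):                          # snowball stage
        frontier = set()
        for v in sample:
            frontier |= adj[v]
        sample |= frontier
    return sample

# Toy graph: a 6-cycle 0-1-2-3-4-5-0.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(sorted(snowball_sample(adj)))
```

Each round grows the sample by one hop, so the subnet stays locally connected while the random seeds spread coverage across the network.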

  20. Level density in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki

    2005-01-01

It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
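Schematically, the expression behind this approach is the standard CSM form (reconstructed here from the general literature rather than quoted from the paper): the level density follows from the trace of the Green's function, and the continuum part is approximated by a sum over the discretized complex-scaled eigenvalues,

```latex
\rho(E) = -\frac{1}{\pi}\,\operatorname{Im}\operatorname{Tr} G(E)
\;\approx\; -\frac{1}{\pi}\,\operatorname{Im}\sum_{i}\frac{1}{E - E_i^{\theta}},
```

where the E_i^θ are the (generally complex) eigenvalues obtained after rotation by the scaling angle θ, and the CLD is the difference Δ(E) = ρ(E) − ρ₀(E) with respect to the free level density.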

  1. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
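For labeled graphs on a common vertex set, the cut distance reduces to the cut norm of the difference of the adjacency matrices, which can be brute-forced for tiny graphs. The sketch below illustrates only that core quantity; the paper's full pipeline with genetic algorithms and classifier-based model selection is far more involved:

```python
from itertools import product

def cut_norm_distance(A, B):
    """Cut distance of two labeled graphs given as n x n adjacency
    matrices: max over vertex subsets S, T of the absolute discrepancy
    of edge mass between S and T, normalised by n^2. O(4^n) brute force,
    reduced to O(2^n * n^2) by choosing T greedily per column."""
    n = len(A)
    D = [[A[i][j] - B[i][j] for j in range(n)] for i in range(n)]
    best = 0.0
    for S in product([0, 1], repeat=n):
        # For a fixed S, the optimal T takes every column of one sign.
        col = [sum(D[i][j] for i in range(n) if S[i]) for j in range(n)]
        pos = sum(c for c in col if c > 0)
        neg = -sum(c for c in col if c < 0)
        best = max(best, pos, neg)
    return best / (n * n)

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
empty = [[0] * 3 for _ in range(3)]
print(cut_norm_distance(triangle, triangle))  # -> 0.0
print(cut_norm_distance(triangle, empty))     # -> 6/9 ≈ 0.667
```

Identical graphs are at distance zero, and the distance grows with the largest block of edges present in one graph but not the other, which is the intuition the comparison method exploits.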

  2. Molecular photoionization using the complex Kohn variational method

    International Nuclear Information System (INIS)

    Lynch, D.L.; Schneider, B.I.

    1992-01-01

    We have applied the complex Kohn variational method to the study of molecular-photoionization processes. This requires electron-ion scattering calculations enforcing incoming boundary conditions. The sensitivity of these results to the choice of the cutoff function in the Kohn method has been studied and we have demonstrated that a simple matching of the irregular function to a linear combination of regular functions produces accurate scattering phase shifts

  3. Supplemental design requirements document solid waste operations complex

    International Nuclear Information System (INIS)

    Ocampo, V.P.; Boothe, G.F.; Broz, D.R.; Eaton, H.E.; Greager, T.M.; Huckfeldt, R.A.; Kooiker, S.L.; Lamberd, D.L.; Lang, L.L.; Myers, J.B.

    1994-11-01

    This document provides additional and supplemental information to the WHC-SD-W112-FDC-001, WHC-SD-W113-FDC-001, and WHC-SD-W100-FDC-001. It provides additional requirements for the design and summarizes Westinghouse Hanford Company key design guidance and establishes the technical baseline agreements to be used for definitive design common to the Solid Waste Operations Complex (SWOC) Facilities (Project W-112, Project W-113, and WRAP 2A)

  4. Complex networks principles, methods and applications

    CERN Document Server

    Latora, Vito; Russo, Giovanni

    2017-01-01

    Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among the others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...

  5. A Low Complexity Discrete Radiosity Method

    OpenAIRE

    Chatelier , Pierre Yves; Malgouyres , Rémy

    2006-01-01

Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to use a discretization of a scene into voxels and perform some discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method may be as precise as a patch-based radiosity using hemicube computation for form-factors, but it lowers the overall theoretical complexity to an O(N log N) + O(N), where the O(N) is largel...

  6. Germination and seedling establishment in orchids: a complex of requirements.

    Science.gov (United States)

    Rasmussen, Hanne N; Dixon, Kingsley W; Jersáková, Jana; Těšitelová, Tamara

    2015-09-01

    Seedling recruitment is essential to the sustainability of any plant population. Due to the minute nature of seeds and early-stage seedlings, orchid germination in situ was for a long time practically impossible to observe, creating an obstacle towards understanding seedling site requirements and fluctuations in orchid populations. The introduction of seed packet techniques for sowing and retrieval in natural sites has brought with it important insights, but many aspects of orchid seed and germination biology remain largely unexplored. The germination niche for orchids is extremely complex, because it is defined by requirements not only for seed lodging and germination, but also for presence of a fungal host and its substrate. A mycobiont that the seedling can parasitize is considered an essential element, and a great diversity of Basidiomycota and Ascomycota have now been identified for their role in orchid seed germination, with fungi identifiable as imperfect Rhizoctonia species predominating. Specificity patterns vary from orchid species employing a single fungal lineage to species associating individually with a limited selection of distantly related fungi. A suitable organic carbon source for the mycobiont constitutes another key requirement. Orchid germination also relies on factors that generally influence the success of plant seeds, both abiotic, such as light/shade, moisture, substrate chemistry and texture, and biotic, such as competitors and antagonists. Complexity is furthermore increased when these factors influence seeds/seedling, fungi and fungal substrate differentially. A better understanding of germination and seedling establishment is needed for conservation of orchid populations. Due to the obligate association with a mycobiont, the germination niches in orchid species are extremely complex and varied. Microsites suitable for germination can be small and transient, and direct observation is difficult. An experimental approach using several

  7. Analytical Method to Estimate the Complex Permittivity of Oil Samples

    Directory of Open Access Journals (Sweden)

    Lijuan Su

    2018-03-01

Full Text Available In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated through measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.

  8. Flow assurance : complex phase behavior and complex work requires confidence and vigilance

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.D. [ConocoPhillips, Major Projects, Advanced Integrated Simulation, Houston, TX (United States)

    2008-07-01

    Petroleum exploration and development projects and operations increasingly rely on flow assurance definition. Flow assurance is an integrating discipline as it follows the fluid from the reservoir to the market. Flow assurance works across complex technical and non-technical interfaces, including the reservoir, well completions, operation processes, project management, physical/organic chemistry, fluid mechanics, chemical engineering, mechanical engineering and corrosion. The phase behaviour in real fluids also has complex interfaces. The understanding and management of flow assurance of complex phase behaviour must be well communicated in order to enable proper selection, execution, and operation of development concepts designed to manage successful production within the fluid's phase behaviour. Simulation tools facilitate the translation of science into engineering. Academic, industrial, and field research is the core of these tools. The author cautioned that vigilance is required to assist and identify the right time to move innovation into the core tools.

  9. Complexity analysis of accelerated MCMC methods for Bayesian inversion

    International Nuclear Information System (INIS)

    Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M

    2013-01-01

    The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
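The 'plain' MCMC baseline analysed above can be sketched with a random-walk Metropolis sampler. This is a generic one-dimensional toy example, not the paper's PDE forward-map setting:

```python
import math
import random

def metropolis(log_post, x0, steps, step_size, seed=0):
    """Random-walk Metropolis sampling of a 1D posterior density,
    given its log-density up to an additive constant."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, step_size)      # symmetric proposal
        lq = log_post(y)
        # accept with probability min(1, post(y)/post(x))
        if lq >= lp or rng.random() < math.exp(lq - lp):
            x, lp = y, lq
        samples.append(x)
    return samples

# Toy posterior: standard normal (log-density up to a constant).
xs = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=20000, step_size=1.0)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(round(mean, 2), round(var, 2))  # should be near 0 and 1
```

In the paper's setting each log-posterior evaluation requires a PDE solve, which is precisely why the overall work bounds are dominated by the cost per sample and why surrogate and multi-level strategies pay off.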

  10. Enabling Requirements-Based Programming for Highly-Dependable Complex Parallel and Distributed Systems

    Science.gov (United States)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications of today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.

  11. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  12. Measurement methods on the complexity of network

    Institute of Scientific and Technical Information of China (English)

    LIN Lin; DING Gang; CHEN Guo-song

    2010-01-01

    Based on the size of a network and the number of paths in the network, we proposed a model of topology complexity to measure the topology complexity of the network. Based on analyses of the effects of the number of pieces of equipment, the types of equipment and the processing time of each node on the complexity of an equipment-constrained network, a complexity model of the equipment-constrained network was constructed to measure its integrated complexity. The algorithms for the two models were also developed. An automatic generator of random single-label networks was developed to test the models. The results show that the models can correctly evaluate the topology complexity and the integrated complexity of the networks.
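    To make the size-and-path-count idea concrete, the sketch below counts simple paths through a small directed network and combines node count and path count into one score. The scoring formula is our own illustrative stand-in; the paper's actual complexity model is not reproduced here.

```python
import math

def count_simple_paths(graph, src, dst):
    """Count simple (non-revisiting) paths from src to dst by DFS."""
    def dfs(node, visited):
        if node == dst:
            return 1
        total = 0
        for nxt in graph.get(node, []):
            if nxt not in visited:
                total += dfs(nxt, visited | {nxt})
        return total
    return dfs(src, {src})

def topology_complexity(graph, src, dst):
    """Illustrative score: bits contributed by network size plus path count."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    paths = count_simple_paths(graph, src, dst)
    return math.log2(len(nodes)) + math.log2(max(paths, 1))

# Hypothetical 4-node network: s -> t via a, via b, and via a then b.
net = {"s": ["a", "b"], "a": ["t", "b"], "b": ["t"], "t": []}
paths = count_simple_paths(net, "s", "t")   # 3 simple paths
score = topology_complexity(net, "s", "t")
```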

  13. Equivalence of the generalized and complex Kohn variational methods

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, J N; Armour, E A G [School of Mathematical Sciences, University Park, Nottingham NG7 2RD (United Kingdom); Plummer, M, E-mail: pmxjnc@googlemail.co [STFC Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)

    2010-04-30

    For Kohn variational calculations on low energy (e{sup +} - H{sub 2}) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.

  14. Equivalence of the generalized and complex Kohn variational methods

    International Nuclear Information System (INIS)

    Cooper, J N; Armour, E A G; Plummer, M

    2010-01-01

    For Kohn variational calculations on low energy (e + - H 2 ) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.

  15. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three grades: low complexity, medium complexity and high complexity. It then carries out image feature extraction, and finally establishes a function between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from the image features. The results obtained by this method agree well with the complexity perceived by human vision, so the color-based image complexity measure has a certain reference value.
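    A common colour-based complexity proxy is the Shannon entropy of the quantized colour distribution. The sketch below uses that proxy for illustration; it is our assumption, not the authors' exact complexity-colour function.

```python
import math
from collections import Counter

def color_complexity(pixels, levels=4):
    """Shannon entropy (bits) of the quantized RGB colour distribution,
    as a rough stand-in for a colour-based image-complexity score."""
    def quantize(rgb):
        return tuple(min(v * levels // 256, levels - 1) for v in rgb)
    counts = Counter(quantize(p) for p in pixels)
    n = len(pixels)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

flat = [(255, 0, 0)] * 64                                  # uniform red patch
mixed = [(i * 4 % 256, i * 7 % 256, i * 13 % 256) for i in range(64)]
low = color_complexity(flat)    # a single colour carries zero entropy
high = color_complexity(mixed)  # many colours -> higher complexity score
```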

  16. Complex finite element sensitivity method for creep analysis

    International Nuclear Information System (INIS)

    Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry

    2015-01-01

    The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties. - Highlights: • A novel finite element sensitivity method (ZFEM) for creep was introduced. • ZFEM has the capability to calculate accurate partial derivatives. • ZFEM can be used for identification of the skeletal point of creep structures. • ZFEM can be easily implemented in a commercial software, e.g. Abaqus. • ZFEM results were shown to be in excellent agreement with analytical solutions
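    The complex-variable idea behind ZFEM can be seen in miniature in complex-step differentiation, where perturbing the input along the imaginary axis yields a derivative free of subtractive cancellation. The scalar sketch below illustrates only the principle; the actual UEL applies it to the finite element equations.

```python
def complex_step_derivative(f, x, h=1e-30):
    """First derivative of f at x via the complex-step method.
    No subtraction of nearly equal terms, so h can be made tiny and the
    result is accurate to machine precision."""
    return f(complex(x, h)).imag / h

f = lambda z: z**3 + 2 * z            # analytic test function, f'(x) = 3x^2 + 2
d = complex_step_derivative(f, 2.0)   # exact answer: 14
```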

  17. Critical requirements of the SSTR method

    International Nuclear Information System (INIS)

    Gold, R.

    1975-08-01

    Discrepancies well outside experimental error have been reported between absolute fission rate measurements observed with Solid State Track Recorders (SSTR) and fission chambers. As a result of these comparisons, the reliability of the SSTR method has been seriously questioned, and the fission chamber method has been advanced for sole use in absolute fission rate determinations. In view of the absolute accuracy already reported and well documented for the SSTR method, this conclusion is both surprising and unfortunate. Two independent methods are highly desirable. Moreover, these two methods more than complement one another, since certain in-core experiments may be amenable to either but not both techniques. Consequently, one cannot abandon the SSTR method without sacrificing crucial advantages. A critical reappraisal of certain aspects of the SSTR method is offered in the hope that the source of the current controversy can be uncovered and a long term beneficial agreement between these two methods can therefore be established. (WHK)

  18. Delay generation methods with reduced memory requirements

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt

    2003-01-01

    Modern diagnostic ultrasound beamformers require delay information for each sample along the image lines. In order to avoid storing large amounts of focusing data, delay generation techniques have to be used. In connection with developing a compact beamformer architecture, recursive algorithms were......) For the best parametric approach, the gate count was 2095, the maximum operation speed was 131.9 MHz, the power consumption at 40 MHz was 10.6 mW, and it requires 4 12-bit words for each image line and channel. 2) For the piecewise-linear approximation, the corresponding numbers are 1125 gates, 184.9 MHz, 7...

  19. Complex operator method of the hydrogen atom

    International Nuclear Information System (INIS)

    Jiang, X.

    1989-01-01

    Frequently the hydrogen atom eigenvalue problem is analytically solved by solving a radial wave equation for a particle in a Coulomb field. In this article, complex coordinates are introduced, and an expression for the energy levels of the hydrogen atom is obtained by means of the algebraic solution of operators. The form of this solution is in accord with that of the analytical solution

  20. Rising Trend: Complex and sophisticated attack methods

    Indian Academy of Sciences (India)

    Stux, DuQu, Nitro, Luckycat, Exploit Kits, FLAME. ADSL/SoHo Router Compromise. Botnets of compromised ADSL/SoHo Routers; User Redirection via malicious DNS entry. Web Application attacks. SQL Injection, RFI etc. More and more Webshells. More utility to hackers; Increasing complexity and evading mechanisms.

  1. Blast casting requires fresh assessment of methods

    Energy Technology Data Exchange (ETDEWEB)

    Pilshaw, S.R.

    1987-08-01

    The article discusses the reasons why conventional blasting operations (explosive products, drilling and initiation methods) are inefficient, and suggests new methods and materials to overcome the problems of the conventional operations. The author suggests that using bulk ANFO for casting, instead of high-energy, high-density explosives with high-velocity detonation, is more effective in producing heave action. Similarly, drilling smaller blast holes than is conventional allows better distribution of the explosive load in the rock mass. The author also suggests that casting would be more efficient if the shot rows were loaded differently to produce a variable-burden blasting pattern.

  2. Hybrid recommendation methods in complex networks.

    Science.gov (United States)

    Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N

    2015-07-01

    We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement of performances of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
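    The convex-combination step can be sketched as follows, with min-max normalization standing in for the paper's normalization and hypothetical raw scores in place of the similarity measures themselves:

```python
def normalize(scores):
    """Min-max rescale a score dict to [0, 1] so two measures are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {k: (v - lo) / span for k, v in scores.items()}

def hybrid_scores(user_based, item_based, lam=0.5):
    """Convex combination of user-similarity and object-similarity scores."""
    u, i = normalize(user_based), normalize(item_based)
    return {k: lam * u[k] + (1 - lam) * i[k] for k in u}

# Hypothetical raw scores for three objects from the two similarity channels.
user_based = {"item_a": 0.9, "item_b": 0.3, "item_c": 0.6}
item_based = {"item_a": 0.2, "item_b": 0.8, "item_c": 0.5}
scores = hybrid_scores(user_based, item_based, lam=0.7)
ranking = sorted(scores, key=scores.get, reverse=True)
```

Varying `lam` between 0 and 1 shifts the weight between the two channels, which is how such a hybrid can be tuned to a specific bipartite network.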

  3. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    Directory of Open Access Journals (Sweden)

    Hassan Saberi Nik

    2014-01-01

    We present an application of a pseudospectral method to solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on extending Gauss-Seidel-type relaxation ideas to systems of nonlinear differential equations and on using Chebyshev pseudospectral methods to solve the resulting systems on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta-based ode45 solver to show that the MSRM gives accurate results.
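    The pseudospectral ingredient of the MSRM rests on Chebyshev differentiation. Below is a minimal sketch of the standard Chebyshev differentiation matrix on Gauss-Lobatto points (the relaxation stages themselves are omitted); for polynomials of degree at most n the spectral derivative is exact.

```python
import math

def cheb_diff_matrix(n):
    """Chebyshev differentiation matrix on the n+1 Gauss-Lobatto points
    x_j = cos(j*pi/n), with the diagonal set by the negative-sum trick."""
    x = [math.cos(math.pi * j / n) for j in range(n + 1)]
    c = [2.0 if j in (0, n) else 1.0 for j in range(n + 1)]
    d = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                d[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
        d[i][i] = -sum(d[i][j] for j in range(n + 1) if j != i)
    return x, d

x, d = cheb_diff_matrix(4)
f = [xi ** 2 for xi in x]                                   # f(x) = x^2
df = [sum(d[i][j] * f[j] for j in range(5)) for i in range(5)]
# df should match the exact derivative 2x at every collocation point.
```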

  4. The GARP complex is required for cellular sphingolipid homeostasis

    DEFF Research Database (Denmark)

    Fröhlich, Florian; Petit, Constance; Kory, Nora

    2015-01-01

    (GARP) complex, which functions in endosome-to-Golgi retrograde vesicular transport, as a critical player in sphingolipid homeostasis. GARP deficiency leads to accumulation of sphingolipid synthesis intermediates, changes in sterol distribution, and lysosomal dysfunction. A GARP complex mutation...... analogous to a VPS53 allele causing progressive cerebello-cerebral atrophy type 2 (PCCA2) in humans exhibits similar, albeit weaker, phenotypes in yeast, providing mechanistic insights into disease pathogenesis. Inhibition of the first step of de novo sphingolipid synthesis is sufficient to mitigate many...

  5. Curcumin complexation with cyclodextrins by the autoclave process: Method development and characterization of complex formation.

    Science.gov (United States)

    Hagbani, Turki Al; Nazzal, Sami

    2017-03-30

    One approach to enhancing curcumin (CUR) aqueous solubility is to use cyclodextrins (CDs) to form inclusion complexes, in which CUR is encapsulated as a guest molecule within the internal cavity of the water-soluble CD. Several methods have been reported for the complexation of CUR with CDs. Limited information, however, is available on the use of the autoclave process (AU) in complex formation. The aims of this work were therefore to (1) investigate and evaluate the AU cycle as a complex formation method to enhance CUR solubility; (2) compare the efficacy of the AU process with the freeze-drying (FD) and evaporation (EV) processes in complex formation; and (3) confirm CUR stability by characterizing CUR:CD complexes by NMR, Raman spectroscopy, DSC, and XRD. Significant differences were found in the saturation solubility of CUR from its complexes with CD when prepared by the three complexation methods. The AU yielded a complex with the expected chemical and physical fingerprints for a CUR:CD inclusion complex, maintained the chemical integrity and stability of CUR, and provided the highest solubility of CUR in water. Physical and chemical characterization of the AU complexes confirmed the encapsulation of CUR inside the CD cavity and the transformation of the crystalline CUR:CD inclusion complex to an amorphous form. It was concluded that the autoclave process, with its short processing time, could be used as an alternative and efficient method for drug:CD complexation. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Early Language Learning: Complexity and Mixed Methods

    Science.gov (United States)

    Enever, Janet, Ed.; Lindgren, Eva, Ed.

    2017-01-01

    This is the first collection of research studies to explore the potential for mixed methods to shed light on foreign or second language learning by young learners in instructed contexts. It brings together recent studies undertaken in Cameroon, China, Croatia, Ethiopia, France, Germany, Italy, Kenya, Mexico, Slovenia, Spain, Sweden, Tanzania and…

  7. PNN NGC 246: A Complex Photometric Behaviour That Requires Wet

    Directory of Open Access Journals (Sweden)

    Pérez J. M. González

    2003-03-01

    We present a study of three single-site campaigns investigating the photometric behaviour of the PNN NGC 246. We observed this object in 2000 and 2001. The analysis of the light curves indicates complex and variable temporal spectra. Using wavelet analysis we have found evidence for changes on time scales of hours in the 2000 dataset. The temporal spectra obtained during 2001 are quite different from the results of the previous year. The modulations in the light curve are more noticeable and the temporal spectra present a higher number of modulation frequencies. One peculiar characteristic is the presence of a variable harmonic structure related to one of these modulation frequencies. This complex photometric behaviour may be explained by a more complicated unresolved combination of modulation frequencies, but more likely by a combination of pulsations of the star plus modulations related to interaction with a close companion, maybe indicating a disc. However, these characteristics cannot be confirmed from single-site observations. The complex and variable behaviour of NGC 246 needs the WET co-operation in order to completely resolve its light curve.

  8. Knowledge based method for solving complexity in design problems

    NARCIS (Netherlands)

    Vermeulen, B.

    2007-01-01

    The process of designing aircraft systems is becoming more and more complex, due to an increasing number of requirements. Moreover, the knowledge on how to solve these complex design problems is becoming less readily available, because of a decrease in availability of intellectual resources and reduced

  9. The Sources and Methods of Engineering Design Requirement

    DEFF Research Database (Denmark)

    Li, Xuemeng; Zhang, Zhinan; Ahmed-Kristensen, Saeema

    2014-01-01

    to be defined in a new context. This paper focuses on understanding the sources of design requirements at the requirement elicitation phase. It aims at proposing an improved classification of design requirement sources that considers emerging markets, and at presenting current methods for eliciting requirements for each source...

  10. A Survey of Various Object Oriented Requirement Engineering Methods

    OpenAIRE

    Anandi Mahajan; Dr. Anurag Dixit

    2013-01-01

    In recent years many industries have been moving to the use of object-oriented methods for the development of large-scale information systems. The need for object-oriented approaches in the development of software systems is increasing day by day. This paper is a survey of various object-oriented requirements engineering methods; it contains a summary of the available methods with their relative advantages and disadvantages...

  11. Complex rectal polyps: other treatment modalities required when offering a transanal endoscopic microsurgery service.

    LENUS (Irish Health Repository)

    Joyce, Myles R

    2011-09-01

    Complex rectal polyps may present a clinical challenge. The study aim was to assess different treatment modalities required in the management of patients referred for transanal endoscopic microsurgery.

  12. Complexity, Methodology and Method: Crafting a Critical Process of Research

    Science.gov (United States)

    Alhadeff-Jones, Michel

    2013-01-01

    This paper defines a theoretical framework aiming to support the actions and reflections of researchers looking for a "method" in order to critically conceive the complexity of a scientific process of research. First, it starts with a brief overview of the core assumptions framing Morin's "paradigm of complexity" and Le…

  13. Structuring requirements as necessary premise for customer-oriented development of complex products: A generic approach

    Directory of Open Access Journals (Sweden)

    Sandra Klute

    2011-10-01

    Purpose: Complex products such as intra-logistical facilities make high demands on developers and producers and involve high investment and operating costs. To date, facility utilization, and the requirements on the facility and its components that ensue from it, are inadequately considered in planning, development and buying decisions. Nevertheless, with regard to customer-directed product design, these requirements must all be taken into account, especially as they can contribute to possible savings. In this context, it is necessary to survey and systematically consider requirements from a large number of areas (for example the operator and the facility producer, as well as requirements of external parties such as the law) and to implement them in adequate product characteristics in order to produce customer-oriented products. This is, however, a difficult task because of the diversity of stakeholders involved and their numerous and often divergent requirements. It is therefore essential to structure the requirements so that planners and developers are able to manage the large amount of information. Structure models can be used in this context to cluster requirements. Within the German Collaborative Research Centre 696, a 10-dimensional model has been developed that allows the structuring of all requirements on intra-logistical facilities, or complex products in general. When dealing with hundreds of data records, structuring requirements is mandatory to achieve accuracy, clarity and consequently satisfactory results when transforming requirements into product characteristics that fit customer needs. In the paper an excerpt of this model is presented. Design/methodology/approach: The literature contains a multitude of methods that deal with structuring. These methods have been analysed regarding their purpose and their level of specification, i.e. the number of differentiated categories, to check if

  14. Basic requirements to the methods of personnel monitoring

    International Nuclear Information System (INIS)

    Keirim-Markus, I.B.

    1981-01-01

    Requirements for personnel monitoring methods (PMM), depending on irradiation conditions, are given. The irradiation conditions determine the types of irradiation subject to monitoring, the measurement ranges, the periodicity of monitoring, the speed with which results must be obtained, and the required accuracy. PMM based on the photographic effect of ionizing radiation is the main method of mass monitoring [ru]

  15. 77 FR 50726 - Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in...

    Science.gov (United States)

    2012-08-22

    ... Computer Software and Complex Electronics Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear...-1209, ``Software Requirement Specifications for Digital Computer Software and Complex Electronics used... Electronics Engineers (ANSI/IEEE) Standard 830-1998, ``IEEE Recommended Practice for Software Requirements...

  16. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    Science.gov (United States)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect requirements with design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing this experience to other civil jet product designs.

  17. How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods

    National Research Council Canada - National Science Library

    Mead, Nancy R

    2007-01-01

    The Security Quality Requirements Engineering (SQUARE) method, developed at the Carnegie Mellon Software Engineering Institute, provides a systematic way to identify security requirements in a software development project...

  18. Quantitative Nuclear Medicine Imaging: Concepts, Requirements and Methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-01-15

    The absolute quantification of radionuclide distribution has been a goal since the early days of nuclear medicine. Nevertheless, the apparent complexity and sometimes limited accuracy of these methods have prevented them from being widely used in important applications such as targeted radionuclide therapy or kinetic analysis. The intricacy of the effects degrading nuclear medicine images and the lack of availability of adequate methods to compensate for these effects have frequently been seen as insurmountable obstacles in the use of quantitative nuclear medicine in clinical institutions. In the last few decades, several research groups have consistently devoted their efforts to the filling of these gaps. As a result, many efficient methods are now available that make quantification a clinical reality, provided appropriate compensation tools are used. Despite these efforts, many clinical institutions still lack the knowledge and tools to adequately measure and estimate the accumulated activities in the human body, thereby using potentially outdated protocols and procedures. The purpose of the present publication is to review the current state of the art of image quantification and to provide medical physicists and other related professionals facing quantification tasks with a solid background of tools and methods. It describes and analyses the physical effects that degrade image quality and affect the accuracy of quantification, and describes methods to compensate for them in planar, single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. The fast paced development of the computational infrastructure, both hardware and software, has made drastic changes in the ways image quantification is now performed. The measuring equipment has evolved from the simple blind probes to planar and three dimensional imaging, supported by SPECT, PET and hybrid equipment. Methods of iterative reconstruction have been developed to allow for

  19. An efficient Korringa-Kohn-Rostoker method for ''complex'' lattices

    International Nuclear Information System (INIS)

    Yussouff, M.; Zeller, R.

    1980-10-01

    We present a modification of the exact KKR-band structure method which uses (a) a new energy expansion for structure constants and (b) only the reciprocal lattice summation. It is quite efficient and particularly useful for 'complex' lattices. The band structure of hexagonal-close-packed Beryllium at symmetry points is presented as an example of this method. (author)

  20. A direction of developing a mining method and mining complexes

    Energy Technology Data Exchange (ETDEWEB)

    Gabov, V.V.; Efimov, I.A. [St. Petersburg State Mining Institute, St. Petersburg (Russian Federation). Vorkuta Branch

    1996-12-31

    An analysis of the mining method as the main factor determining the development stages of mining units is presented. The paper suggests a prospective mining method which differs from the known ones in the following respects: the directional selectivity of cuts with regard to coal seam structure, and the cutting speed, thickness and succession of cuts. This method may be implemented by modular complexes (a shield carrying a cutting head for coal mining), their mining devices being supplied with hydraulic drive. An experimental model of the modular complex has been developed. 2 refs.

  1. High-resolution method for evolving complex interface networks

    Science.gov (United States)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set method.
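    A single global level set in one dimension illustrates the basic transport step that the regional approach generalizes. The sketch below uses first-order upwinding purely for brevity (the paper employs high-order schemes) and a constant advection speed of our choosing; the interface is the zero crossing of phi.

```python
def advect_level_set(phi, u, dx, dt, steps):
    """First-order upwind advection of a 1-D level-set function phi
    with constant speed u. The interface is the zero crossing of phi."""
    phi = list(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi) - 1):
            # Upwind difference: look backward for u > 0, forward for u < 0.
            dphi = (phi[i] - phi[i - 1]) if u > 0 else (phi[i + 1] - phi[i])
            new[i] = phi[i] - u * dt / dx * dphi
        phi = new
    return phi

n, dx = 101, 0.01                           # grid on [0, 1]
phi0 = [i * dx - 0.3 for i in range(n)]     # signed distance, interface at x = 0.3
phi = advect_level_set(phi0, u=1.0, dx=dx, dt=0.005, steps=40)
# After t = 0.2 the zero crossing should sit near x = 0.5.
crossing = min(range(n), key=lambda i: abs(phi[i]))
```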

  2. An Extended Newmark-FDTD Method for Complex Dispersive Media

    Directory of Open Access Journals (Sweden)

    Yu-Qiang Zhang

    2018-01-01

    Based on polarizability in the form of a complex quadratic rational function, a novel finite-difference time-domain (FDTD) approach combined with the Newmark algorithm is presented for dealing with complex dispersive media. In this paper, the time-stepping equation for the polarization vector is derived by applying the Newmark algorithm simultaneously to the two sides of a second-order time-domain differential equation, obtained from the relation between the polarization vector and the electric field intensity in the frequency domain by the inverse Fourier transform. Its accuracy and stability are then discussed from the two aspects of theoretical analysis and numerical computation. It is observed that this method possesses the advantages of high accuracy, high stability, and a wide application scope, and can thus be applied to the treatment of many complex dispersion models, including the complex conjugate pole residue model, critical point model, modified Lorentz model, and complex quadratic rational function.
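    For reference, the Newmark update applied to a scalar second-order equation can be sketched as below. This uses a damped oscillator rather than the paper's polarization equation, with the standard average-acceleration parameters beta = 1/4, gamma = 1/2 (unconditionally stable for linear problems).

```python
def newmark(m, c, k, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark time integration for m*x'' + c*x' + k*x = 0."""
    x, v = x0, v0
    a = -(c * v + k * x) / m          # consistent initial acceleration
    xs = [x]
    for _ in range(steps):
        # Predictors from the Newmark expansions
        xp = x + dt * v + dt * dt * (0.5 - beta) * a
        vp = v + dt * (1 - gamma) * a
        # New acceleration from the equation of motion at the new time level
        a_new = -(c * vp + k * xp) / (m + gamma * dt * c + beta * dt * dt * k)
        x = xp + beta * dt * dt * a_new
        v = vp + gamma * dt * a_new
        a = a_new
        xs.append(x)
    return xs

# Lightly damped oscillator released from rest: the response should decay.
xs = newmark(m=1.0, c=0.2, k=4.0, x0=1.0, v0=0.0, dt=0.01, steps=2000)
```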

  3. Experimentation on accuracy of non functional requirement prioritization approaches for different complexity projects

    OpenAIRE

    Raj Kumar Chopra; Varun Gupta; Durg Singh Chauhan

    2016-01-01

    Non-functional requirements must be selected for implementation together with functional requirements to enhance the success of software projects. Three approaches exist for prioritizing non-functional requirements using a suitable prioritization technique. This paper reports experimentation on three versions, of differing complexity, of an industrial software project, using the cost-value prioritization technique with the three approaches. Experimentation is conducted to analy...

  4. A Qualitative Method to Estimate HSI Display Complexity

    International Nuclear Information System (INIS)

    Hugo, Jacques; Gertman, David

    2013-01-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  5. A Qualitative Method to Estimate HSI Display Complexity

    Energy Technology Data Exchange (ETDEWEB)

    Hugo, Jacques; Gertman, David [Idaho National Laboratory, Idaho (United States)]

    2013-04-15

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  6. Education requirements for nurses working with people with complex neurological conditions: nurses' perceptions.

    Science.gov (United States)

    Baker, Mark

    2012-01-01

    Following a service evaluation methodology, this paper reports on registered nurses' (RNs) and healthcare assistants' (HCAs) perceptions of the education and training required in order to work with people with complex neurological disabilities. A service evaluation was undertaken to meet the study aim using a non-probability, convenience method of sampling 368 nurses (n=110 RNs, n=258 HCAs) employed between October and November 2008 at one specialist hospital in south-west London in the U.K. The main results show that respondents were clear about the need to develop an education and training programme for RNs and HCAs working in this speciality area (91% of RNs and 94% of HCAs). A variety of topics were identified to be included within a work-based education and training programme, such as positively managing challenging behaviour, moving and handling, and working with families. Adults with complex neurological needs have diverse needs, and thus nurses working with this patient group require diverse education and training in order to deliver quality patient-focused nursing care. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. New complex variable meshless method for advection-diffusion problems

    International Nuclear Information System (INIS)

    Wang Jian-Fei; Cheng Yu-Min

    2013-01-01

    In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. Then the corresponding formulas of the ICVMM for advection-diffusion problems are presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems, and has good convergence, accuracy, and computational efficiency.

  8. Nuclear localization of Schizosaccharomyces pombe Mcm2/Cdc19p requires MCM complex assembly.

    Science.gov (United States)

    Pasion, S G; Forsburg, S L

    1999-12-01

    The minichromosome maintenance (MCM) proteins MCM2-MCM7 are conserved eukaryotic replication factors that assemble in a heterohexameric complex. In fission yeast, these proteins are nuclear throughout the cell cycle. In studying the mechanism that regulates assembly of the MCM complex, we analyzed the cis and trans elements required for nuclear localization of a single subunit, Mcm2p. Mutation of any single mcm gene leads to redistribution of wild-type MCM subunits to the cytoplasm, and this redistribution depends on an active nuclear export system. We identified the nuclear localization signal sequences of Mcm2p and showed that these are required for nuclear targeting of other MCM subunits. In turn, Mcm2p must associate with other MCM proteins for its proper localization; nuclear localization of MCM proteins thus requires assembly of MCM proteins in a complex. We suggest that coupling complex assembly to nuclear targeting and retention ensures that only intact heterohexameric MCM complexes remain nuclear.

  9. A Method for Software Requirement Volatility Analysis Using QFD

    Directory of Open Access Journals (Sweden)

    Yunarso Anang

    2016-10-01

    Full Text Available Changes of software requirements are inevitable during the development life cycle. Rather than avoiding the circumstance, it is easier to just accept it and find a way to anticipate those changes. This paper proposes a method to analyze the volatility of requirements by using the Quality Function Deployment (QFD) method and an introduced degree of volatility. Customer requirements are deployed to software functions and subsequently to architectural design elements. Then, after determining the potential for changes of the design elements, the degree of volatility of the software requirements is calculated. In this paper the method is described using a flow diagram, illustrated using a simple example, and evaluated using a case study.
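As an illustration of the deployment chain described above, the sketch below propagates assumed design-element change potentials back through two hypothetical QFD relationship matrices to obtain a per-requirement degree of volatility. The matrices, scores, and normalization are invented for this example and are not taken from the paper.

```python
import numpy as np

# QFD relationship matrices with the classical 0/1/3/9 strength scale.
# Rows of R1: customer requirements -> software functions.
# Rows of R2: software functions -> architectural design elements.
R1 = np.array([[9, 3, 0],
               [1, 9, 3],
               [0, 3, 9]], dtype=float)
R2 = np.array([[9, 1],
               [3, 9],
               [0, 3]], dtype=float)

# Assumed potential-for-change scores of the two design elements (0..1).
change = np.array([0.8, 0.2])

# Propagate the change potential back to the requirements and normalize
# to obtain a relative degree of volatility per requirement.
raw = R1 @ R2 @ change
dov = raw / raw.sum()
```

A requirement strongly linked, through its functions, to a volatile design element receives a high degree of volatility; the normalization is one simple choice among several.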

  10. Methods for forming complex oxidation reaction products including superconducting articles

    International Nuclear Information System (INIS)

    Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.

    1992-01-01

    This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product

  11. Quality functions for requirements engineering in system development methods.

    Science.gov (United States)

    Johansson, M; Timpka, T

    1996-01-01

    Based on a grounded theory framework, this paper analyses the quality characteristics of methods to be used for requirements engineering in the development of medical decision support systems (MDSS). The results from a Quality Function Deployment (QFD) exercise used to rank functions connected to user value, together with a focus group study, were presented to a validation focus group. The focus group studies take advantage of a group process to collect data for further analyses. The results describe factors considered by the participants as important in the development of methods for requirements engineering in health care. Based on the findings, the content that, according to the users, an MDSS method should support is established.

  12. 40 CFR 180.1022 - Iodine-detergent complex; exemption from the requirement of a tolerance.

    Science.gov (United States)

    2010-07-01

    § 180.1022 Iodine-detergent complex; exemption from the requirement of a tolerance. The aqueous solution of hydriodic acid and elemental iodine, including one or both of...

  13. Method for developing cost estimates for generic regulatory requirements

    International Nuclear Information System (INIS)

    1985-01-01

    The NRC has established a practice of performing regulatory analyses, reflecting costs as well as benefits, of proposed new or revised generic requirements. A method has been developed to assist the NRC in preparing the types of cost estimates required for this purpose and for assigning priorities in the resolution of generic safety issues. The cost of a generic requirement is defined as the net present value of the total lifetime cost incurred by the public, industry, and government in implementing the requirement for all affected plants. The method described here is for commercial light-water-reactor power plants. Estimating the cost for a generic requirement involves several steps: (1) identifying the activities that must be carried out to fully implement the requirement, (2) defining the work packages associated with the major activities, (3) identifying the individual elements of cost for each work package, (4) estimating the magnitude of each cost element, (5) aggregating individual plant costs over the plant lifetime, and (6) aggregating all plant costs and generic costs to produce a total, national, present value of lifetime cost for the requirement. The method developed addresses all six steps. In this paper, we discuss the first three.
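Steps (5) and (6) reduce to discounting and summing cost streams. A minimal sketch, with invented plant cost figures and discount rate (the abstract specifies neither):

```python
def present_value(yearly_costs, rate):
    """Net present value of a stream of yearly costs (year 0 undiscounted)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(yearly_costs))

# Assumed data: two plants, a one-time backfit cost plus recurring costs ($/yr).
plants = [
    [5.0e6, 0.2e6, 0.2e6, 0.2e6],   # hypothetical plant A
    [3.0e6, 0.1e6, 0.1e6, 0.1e6],   # hypothetical plant B
]
rate = 0.05  # assumed real discount rate

# Step (5): present value per plant; step (6): aggregate across plants.
total = sum(present_value(costs, rate) for costs in plants)
```

Generic (industry-wide or NRC) costs would simply be discounted the same way and added to the plant total.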

  14. Measurement of complex permittivity of composite materials using waveguide method

    NARCIS (Netherlands)

    Tereshchenko, O.V.; Buesink, Frederik Johannes Karel; Leferink, Frank Bernardus Johannes

    2011-01-01

    Complex dielectric permittivity of 4 different composite materials has been measured using the transmission-line method. A waveguide fixture in L, S, C and X band was used for the measurements. Measurement accuracy is influenced by air gaps between test fixtures and the materials tested. One of the

  15. Ethnographic methods for process evaluations of complex health behaviour interventions.

    Science.gov (United States)

    Morgan-Trimmer, Sarah; Wood, Fiona

    2016-05-04

    This article outlines the contribution that ethnography could make to process evaluations for trials of complex health-behaviour interventions. Process evaluations are increasingly used to examine how health-behaviour interventions operate to produce outcomes and often employ qualitative methods to do this. Ethnography shares commonalities with the qualitative methods currently used in health-behaviour evaluations but has a distinctive approach over and above these methods. It is an overlooked methodology in trials of complex health-behaviour interventions that has much to contribute to the understanding of how interventions work. These benefits are discussed here with respect to three strengths of ethnographic methodology: (1) producing valid data, (2) understanding data within social contexts, and (3) building theory productively. The limitations of ethnography within the context of process evaluations are also discussed.

  16. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Science.gov (United States)

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to find out the exact cause and manner of death in non-natural cases. In this reference, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods to cause death is unplanned. There is an increased likelihood that doubts of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported where the victim resorted to multiple methods to end his life, in what appeared to be an unplanned variant based on the death scene investigations. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  17. Determination of material irradiation parameters. Required accuracies and available methods

    International Nuclear Information System (INIS)

    Cerles, J.M.; Mas, P.

    1978-01-01

    In this paper, the author reports on the main methods for determining the nuclear parameters of material irradiation in testing reactors (nuclear power, burn-up, fluxes, fluences, ...). The different methods (theoretical or experimental) are reviewed: neutronics measurements and calculations, gamma scanning, thermal balance, ... The required accuracies are reviewed: they are 3-5% on flux, fluences, nuclear power, burn-up, conversion factor, ... These required accuracies are compared with the real accuracies available, which are at the present time of the order of 5-20% on these parameters.

  18. Rapid methods for jugular bleeding of dogs requiring one technician.

    Science.gov (United States)

    Frisk, C S; Richardson, M R

    1979-06-01

    Two methods were used to collect blood from the jugular vein of dogs. In both techniques, only one technician was required. A rope with a slip knot was placed around the base of the neck to assist in restraint and act as a tourniquet for the vein. The technician used one hand to restrain the dog by the muzzle and position the head. The other hand was used for collecting the sample. One of the methods could be accomplished with the dog in its cage. The bleeding techniques were rapid, requiring approximately 1 minute per dog.

  19. Determination of fuel irradiation parameters. Required accuracies and available methods

    International Nuclear Information System (INIS)

    Mas, P.

    1977-01-01

    This paper reports on the present status of the main methods for determining the nuclear parameters of fuel irradiation in testing reactors (nuclear power, burn-up, ...). The different methods (theoretical or experimental) are reviewed: neutron measurements and calculations, gamma scanning, heat balance, ... The required accuracies are reviewed: they are 3-5% on flux, fluences, nuclear power, burn-up, conversion factor. These required accuracies are compared with the real accuracies available, which are at the present time of the order of 5-20% on these parameters.

  20. Classical Methods and Calculation Algorithms for Determining Lime Requirements

    Directory of Open Access Journals (Sweden)

    André Guarçoni

    Full Text Available ABSTRACT The methods developed for determination of lime requirements (LR) are based on widely accepted principles. However, the formulas used for calculation have evolved little over recent decades, and in some cases there are indications of their inadequacy. The aim of this study was to compare the lime requirements calculated by three classic formulas and three algorithms, defining those most appropriate for supplying Ca and Mg to coffee plants with the smaller possibility of causing overliming. The database used contained 600 soil samples, which were collected in coffee plantings. The LR was estimated by the methods of base saturation, neutralization of Al3+, and elevation of Ca2+ and Mg2+ contents (two formulas), and by the three calculation algorithms. Averages of the lime requirements were compared, determining the frequency distribution of the 600 lime requirements (LR) estimated through each calculation method. In soils with low cation exchange capacity at pH 7, the base saturation method may fail to adequately supply the plants with Ca and Mg in many situations, while the method of Al3+ neutralization and elevation of Ca2+ and Mg2+ contents can result in the calculation of application rates that will increase the pH above the suitable range. Among the methods studied for calculating lime requirements, the algorithm that predicts reaching a defined base saturation, with adequate Ca and Mg supply and the maximum application rate limited to the H+Al value, proved to be the most efficient calculation method, and it can be recommended for use in numerous crop conditions.
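The base saturation method mentioned above is commonly written as LR = (V2 - V1) x CEC / 100, corrected by the neutralizing power of the liming material. The function below is a hedged sketch of that classic formula; the argument names and the PRNT correction are standard conventions assumed for illustration, not taken from this study.

```python
def lime_requirement_base_saturation(v_target, v_current, cec, prnt=100.0):
    """Lime requirement (t/ha) by the base saturation method.

    v_target, v_current: desired and current base saturation (%).
    cec: cation exchange capacity at pH 7 (cmolc/dm3).
    prnt: relative total neutralizing power of the liming material (%).
    """
    lr = (v_target - v_current) * cec / 100.0
    # no lime is recommended when saturation already meets the target
    return max(lr, 0.0) * 100.0 / prnt
```

The formula makes the abstract's point visible: in a low-CEC soil, even a large saturation gap yields a small rate (e.g. raising V from 30% to 60% at CEC = 4 cmolc/dm3 gives only 1.2 t/ha), which may under-supply Ca and Mg.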

  1. Comparison of association mapping methods in a complex pedigreed population

    DEFF Research Database (Denmark)

    Sahana, Goutam; Guldbrandtsen, Bernt; Janss, Luc

    2010-01-01

    to collect SNP signals in intervals, to avoid the scattering of a QTL signal over multiple neighboring SNPs. Methods not accounting for genetic background (full pedigree information) performed worse, and methods using haplotypes were considerably worse with a high false-positive rate, probably due...... to the presence of low-frequency haplotypes. It was necessary to account for full relationships among individuals to avoid excess false discovery. Although the methods were tested on a cattle pedigree, the results are applicable to any population with a complex pedigree structure...

  2. Experimentation on accuracy of non functional requirement prioritization approaches for different complexity projects

    Directory of Open Access Journals (Sweden)

    Raj Kumar Chopra

    2016-09-01

    Full Text Available Non functional requirements must be selected for implementation together with functional requirements to enhance the success of software projects. Three approaches exist for performing the prioritization of non functional requirements using a suitable prioritization technique. This paper performs experimentation on three different complexity versions of an industrial software project using the cost-value prioritization technique with the three approaches. Experimentation is conducted to analyze the accuracy of the individual approaches and the variation of accuracy with the complexity of the software project. The results indicate that selecting non functional requirements separately, but in accordance with functionality, has higher accuracy than the other two approaches. Further, like the other approaches, its accuracy decreases as software complexity increases, but the decrease is minimal.

  3. Complex problems require complex solutions: the utility of social quality theory for addressing the Social Determinants of Health

    Directory of Open Access Journals (Sweden)

    Ward Paul R

    2011-08-01

    Full Text Available Abstract Background In order to improve the health of the most vulnerable groups in society, the WHO Commission on Social Determinants of Health (CSDH) called for multi-sectoral action, which requires research and policy on the multiple and inter-linking factors shaping health outcomes. Most conceptual tools available to researchers tend to focus on singular and specific social determinants of health (SDH) (e.g. social capital, empowerment, social inclusion). However, a new and innovative conceptual framework, known as social quality theory, facilitates a more complex and complete understanding of the SDH, with its focus on four domains: social cohesion, social inclusion, social empowerment and socioeconomic security, all within the same conceptual framework. This paper provides both an overview of social quality theory and findings from a national survey of social quality in Australia, as a means of demonstrating the operationalisation of the theory. Methods Data were collected using a national random postal survey of 1044 respondents in September, 2009. Multivariate logistic regression analysis was conducted. Results Statistical analysis revealed that people on lower incomes (less than $45,000) experience worse social quality across all of the four domains: lower socio-economic security, lower levels of membership of organisations (lower social cohesion), higher levels of discrimination and less political action (lower social inclusion and lower social empowerment). The findings were mixed in terms of age, with people over 65 years experiencing lower socio-economic security, but having higher levels of social cohesion, experiencing lower levels of discrimination (higher social inclusion) and engaging in more political action (higher social empowerment). In terms of gender, women had higher social cohesion than men, although they also experienced more discrimination (lower social inclusion). Conclusions Applying social quality theory allows

  4. Methods for ensuring compliance with regulatory requirements: regulators and operators

    International Nuclear Information System (INIS)

    Fleischmann, A.W.

    1989-01-01

    Some of the methods of ensuring compliance with regulatory requirements contained in various radiation protection documents, such as Regulations, ICRP Recommendations etc., are considered. These include radiation safety officers and radiation safety committees, personnel monitoring services, dissemination of information, inspection services, and legislative power of enforcement. Difficulties in ensuring compliance include outmoded legislation and financial and personnel constraints.

  5. Proposed New Method of Interpretation of Infrared Ship Signature Requirements

    NARCIS (Netherlands)

    Neele, F.P.; Wilson, M.T.; Youern, K.

    2005-01-01

    A new method of deriving and defining requirements for the infrared signature of new ships is presented. The current approach is to specify the maximum allowed temperature or radiance contrast of the ship with respect to its background. At present, in most NATO countries, it is the contractor's

  6. Models, methods and software tools for building complex adaptive traffic systems

    International Nuclear Information System (INIS)

    Alyushin, S.A.

    2011-01-01

    The paper studies modern methods and tools for simulating the behavior of complex adaptive systems (CAS), examines existing traffic-modeling systems in simulators and their characteristics, and proposes requirements for assessing the suitability of a system to simulate CAS behavior in simulators. The author has developed a model of adaptive agent representation and its functioning environment to meet the requirements set above, and has presented methods of agent interaction and methods of conflict resolution in simulated traffic situations. A simulation system realizing computer modeling of CAS behavior in traffic situations has been created.

  7. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    OpenAIRE

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power...
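The Monte Carlo recipe sketched in the abstract (generate many samples, fit the mediation regressions, count how often the effect is detected) can be illustrated as follows. This is a simplified stand-in, not the authors' framework: it uses the joint-significance test for the indirect effect a*b, a normal-approximation critical value, and omits the direct effect; all effect sizes are assumed.

```python
import numpy as np

def mediation_power(a, b, n, reps=500, alpha=0.05, seed=0):
    """Monte Carlo power of the joint-significance test for a*b.

    Simplified model: M = a*X + e1, Y = b*M + e2, with X, e1, e2
    standard normal and no direct effect of X on Y.
    """
    rng = np.random.default_rng(seed)
    zcrit = 1.96  # two-sided alpha = .05, normal approximation
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)
        y = b * m + rng.standard_normal(n)
        # through-origin OLS slopes and their standard errors
        a_hat = np.dot(x, m) / np.dot(x, x)
        se_a = np.sqrt(np.sum((m - a_hat * x) ** 2) / (n - 1) / np.dot(x, x))
        b_hat = np.dot(m, y) / np.dot(m, m)
        se_b = np.sqrt(np.sum((y - b_hat * m) ** 2) / (n - 1) / np.dot(m, m))
        # the indirect effect is declared present if both paths are significant
        if abs(a_hat / se_a) > zcrit and abs(b_hat / se_b) > zcrit:
            hits += 1
    return hits / reps
```

Power is then read off as the proportion of replications in which the effect is detected; the full framework in the paper extends the same idea to latent variable and growth curve models.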

  8. Learning with Generalization Capability by Kernel Methods of Bounded Complexity

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2005-01-01

    Roč. 21, č. 3 (2005), s. 350-367 ISSN 0885-064X R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : supervised learning * generalization * model complexity * kernel methods * minimization of regularized empirical errors * upper bounds on rates of approximate optimization Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2005

  9. Comparison of topotactic fluorination methods for complex oxide films

    Science.gov (United States)

    Moon, E. J.; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; Barbash, D.; May, S. J.

    2015-06-01

    We have investigated the synthesis of SrFeO3-αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  10. Comparison of topotactic fluorination methods for complex oxide films

    Energy Technology Data Exchange (ETDEWEB)

    Moon, E. J., E-mail: em582@drexel.edu; Choquette, A. K.; Huon, A.; Kulesa, S. Z.; May, S. J., E-mail: smay@coe.drexel.edu [Department of Materials Science and Engineering, Drexel University, Philadelphia, Pennsylvania 19104 (United States)]; Barbash, D. [Centralized Research Facilities, Drexel University, Philadelphia, Pennsylvania 19104 (United States)]

    2015-06-01

    We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  11. Comparison of topotactic fluorination methods for complex oxide films

    Directory of Open Access Journals (Sweden)

    E. J. Moon

    2015-06-01

    Full Text Available We have investigated the synthesis of SrFeO3−αFγ (α and γ ≤ 1) perovskite films using topotactic fluorination reactions utilizing poly(vinylidene fluoride) as a fluorine source. Two different fluorination methods, a spin-coating and a vapor transport approach, were performed on as-grown SrFeO2.5 films. We highlight differences in the structural, compositional, and optical properties of the oxyfluoride films obtained via the two methods, providing insight into how fluorination reactions can be used to modify electronic and optical behavior in complex oxide heterostructures.

  12. Iterative methods for the solution of very large complex symmetric linear systems of equations in electrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Clemens, M.; Weiland, T. [Technische Hochschule Darmstadt (Germany)]

    1996-12-31

    In the field of computational electrodynamics the discretization of Maxwell's equations using the Finite Integration Theory (FIT) yields very large, sparse, complex symmetric linear systems of equations. For this class of complex non-Hermitian systems a number of conjugate gradient-type algorithms are considered. The complex version of the biconjugate gradient (BiCG) method by Jacobs can be extended to a whole class of methods for complex-symmetric systems, SCBiCG(T, n), which only require one matrix-vector multiplication per iteration step. In this class the well-known conjugate orthogonal conjugate gradient (COCG) method for complex-symmetric systems corresponds to the case n = 0. The case n = 1 yields the BiCGCR method, which corresponds to the conjugate residual algorithm for the real-valued case. These methods, in combination with a minimal residual smoothing process, are applied separately to practical 3D electro-quasistatic and eddy-current problems in electrodynamics. The practical performance of the SCBiCG methods is compared with that of other methods such as QMR and TFQMR.
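The COCG method (the n = 0 member of the SCBiCG class) is the standard CG recurrence with the unconjugated bilinear form r^T r in place of the Hermitian inner product. A minimal dense sketch for illustration, not the authors' FIT code:

```python
import numpy as np

def cocg(A, b, x0=None, tol=1e-10, maxit=500):
    """Conjugate orthogonal CG for complex symmetric A (A == A.T, not A.conj().T)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rho = r @ r                      # unconjugated bilinear form r^T r
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ap = A @ p                   # the single matrix-vector product per step
        alpha = rho / (p @ Ap)       # p^T A p, again unconjugated
        x += alpha * p
        r -= alpha * Ap
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x
```

For real symmetric A the recurrence reduces to ordinary CG; note that np.dot on 1-D complex arrays does not conjugate, which is exactly the bilinear form COCG needs.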

  13. Method for analysis the complex grounding cables system

    International Nuclear Information System (INIS)

    Ackovski, R.; Acevski, N.

    2002-01-01

    A new iterative method for the analysis of the performance of complex grounding systems (GS) in underground cable power networks with coated and/or uncoated metal-sheathed cables is proposed in this paper. The analyzed grounding system consists of the grounding grid of a high voltage (HV) supplying transformer station (TS), middle voltage/low voltage (MV/LV) consumer TSs and an arbitrary number of power cables connecting them. The derived method takes into consideration the voltage drops in the cable sheaths and the mutual influence among all earthing electrodes due to the resistive coupling through the soil. By means of the presented method it is possible to calculate the main grounding system performance indicators, such as earth electrode potentials under short-circuit-to-ground fault conditions, earth fault current distribution in the whole complex grounding system, step and touch voltages in the vicinity of the earthing electrodes dissipating the fault current in the earth, impedances (resistances) to ground of all possible fault locations, apparent shield impedances to ground of all power cables, etc. The proposed method is based on the admittance summation method [1] and is appropriately extended so that it takes into account the resistive coupling between the elements of the GS. (Author)

  14. A dissipative particle dynamics method for arbitrarily complex geometries

    Science.gov (United States)

    Li, Zhen; Bian, Xin; Tang, Yu-Hang; Karniadakis, George Em

    2018-02-01

    Dissipative particle dynamics (DPD) is an effective Lagrangian method for modeling complex fluids in the mesoscale regime but so far it has been limited to relatively simple geometries. Here, we formulate a local detection method for DPD involving arbitrarily shaped geometric three-dimensional domains. By introducing an indicator variable of boundary volume fraction (BVF) for each fluid particle, the boundary of arbitrary-shape objects is detected on-the-fly for the moving fluid particles using only the local particle configuration. Therefore, this approach eliminates the need of an analytical description of the boundary and geometry of objects in DPD simulations and makes it possible to load the geometry of a system directly from experimental images or computer-aided designs/drawings. More specifically, the BVF of a fluid particle is defined by the weighted summation over its neighboring particles within a cutoff distance. Wall penetration is inferred from the value of the BVF and prevented by a predictor-corrector algorithm. The no-slip boundary condition is achieved by employing effective dissipative coefficients for liquid-solid interactions. Quantitative evaluations of the new method are performed for the plane Poiseuille flow, the plane Couette flow and the Wannier flow in a cylindrical domain and compared with their corresponding analytical solutions and (high-order) spectral element solution of the Navier-Stokes equations. We verify that the proposed method yields correct no-slip boundary conditions for velocity and generates negligible fluctuations of density and temperature in the vicinity of the wall surface. Moreover, we construct a very complex 3D geometry - the "Brown Pacman" microfluidic device - to explicitly demonstrate how to construct a DPD system with complex geometry directly from loading a graphical image. Subsequently, we simulate the flow of a surfactant solution through this complex microfluidic device using the new method. Its
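The boundary-volume-fraction indicator described above can be sketched as a ratio of kernel-weighted sums over solid and fluid neighbors within the cutoff. The quadratic weight function below is an assumption for illustration; the paper's actual weighting and predictor-corrector handling are not reproduced here.

```python
import numpy as np

def boundary_volume_fraction(xf, solid, fluid, rc):
    """BVF of one fluid particle at position xf (assumed quadratic kernel).

    solid, fluid: (N, 3) arrays of neighbor positions; rc: cutoff radius.
    Returns a value in [0, 1]: 0 in bulk fluid, about 0.5 right at a flat wall.
    """
    def weight_sum(pts):
        d = np.linalg.norm(pts - xf, axis=1)
        d = d[d < rc]                        # keep neighbors inside the cutoff
        return np.sum((1.0 - d / rc) ** 2)   # assumed Lucy-like kernel
    ws, wf = weight_sum(solid), weight_sum(fluid)
    total = ws + wf
    return ws / total if total > 0 else 0.0
```

In a simulation loop, a BVF above 0.5 would flag a particle that has crossed into the solid, triggering the corrector step the abstract mentions; the key point is that only the local particle configuration is needed, not an analytical boundary description.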

  15. Neural Interfaces for Intracortical Recording: Requirements, Fabrication Methods, and Characteristics.

    Science.gov (United States)

    Szostak, Katarzyna M; Grand, Laszlo; Constandinou, Timothy G

    2017-01-01

    Implantable neural interfaces for central nervous system research have been designed with wire, polymer, or micromachining technologies over the past 70 years. Research on biocompatible materials, ideal probe shapes, and insertion methods has resulted in building more and more capable neural interfaces. Although the trend is promising, the long-term reliability of such devices has not yet met the required criteria for chronic human application. The performance of neural interfaces in chronic settings often degrades due to the foreign body response to the implant, which is initiated by the surgical procedure and related to the probe structure and the material properties used in fabricating the neural interface. In this review, we identify the key requirements for neural interfaces for intracortical recording; describe the three different types of probes (microwire, micromachined, and polymer-based) together with their materials and fabrication methods; and discuss their characteristics and related challenges.

  16. Neural Interfaces for Intracortical Recording: Requirements, Fabrication Methods, and Characteristics

    Directory of Open Access Journals (Sweden)

    Katarzyna M. Szostak

    2017-12-01

    Full Text Available Implantable neural interfaces for central nervous system research have been designed with wire, polymer, or micromachining technologies over the past 70 years. Research on biocompatible materials, ideal probe shapes, and insertion methods has resulted in building more and more capable neural interfaces. Although the trend is promising, the long-term reliability of such devices has not yet met the required criteria for chronic human application. The performance of neural interfaces in chronic settings often degrades due to foreign body response to the implant that is initiated by the surgical procedure, and related to the probe structure, and material properties used in fabricating the neural interface. In this review, we identify the key requirements for neural interfaces for intracortical recording, describe the three different types of probes—microwire, micromachined, and polymer-based probes; their materials, fabrication methods, and discuss their characteristics and related challenges.

  17. Estimation methods of eco-environmental water requirements: Case study

    Institute of Scientific and Technical Information of China (English)

    YANG Zhifeng; CUI Baoshan; LIU Jingling

    2005-01-01

    Supplying water of adequate quantity and quality to the ecological environment is significant for the protection of diversity and the realization of sustainable development. The conception and connotation of eco-environmental water requirements, including the definition of the concept and the composition and characteristics of eco-environmental water requirements, are evaluated in this paper. The classification and estimation methods of eco-environmental water requirements are then proposed. On the basis of a study of the Huang-Huai-Hai Area, the present water use and the minimum and suitable water requirements are estimated, and the corresponding water shortages are calculated. According to the related programs, the eco-environmental water requirements in the coming years (2010, 2030, 2050) are estimated. The results indicate that the minimum and suitable eco-environmental water requirements fluctuate with differences in function setting and in the reference standard of water resources, as does the water shortage. Moreover, the study indicates that the minimum eco-environmental water requirement of the study area ranges from 2.84×10¹⁰ m³ to 1.02×10¹¹ m³ and the suitable water requirement from 6.45×10¹⁰ m³ to 1.78×10¹¹ m³; the water shortage ranges from 9.1×10⁹ m³ to 2.16×10¹⁰ m³ under the minimum water requirement and from 3.07×10¹⁰ m³ to 7.53×10¹⁰ m³ under the suitable water requirement. According to the different values of the water shortage, the water priority can be allocated. The ranges of the eco-environmental water requirements in the three coming years (2010, 2030, 2050) are 4.49×10¹⁰–1.73×10¹¹ m³, 5.99×10¹⁰–2.09×10¹¹ m³, and 7.44×10¹⁰–2.52×10¹¹ m³, respectively.

  18. Quantitative methods for developing C2 system requirement

    Energy Technology Data Exchange (ETDEWEB)

    Tyler, K.K.

    1992-06-01

    The US Army established the Army Tactical Command and Control System (ATCCS) Experimentation Site (AES) to provide a place where material and combat developers could experiment with command and control systems. The AES conducts fundamental and applied research involving command and control issues using a number of research methods, ranging from large force-level experiments, to controlled laboratory experiments, to studies and analyses. The work summarized in this paper was done by Pacific Northwest Laboratory under task order from the Army Tactical Command and Control System Experimentation Site. The purpose of the task was to develop the functional requirements for army engineer automation and support software, including MCS-ENG. A client, such as an army engineer, has certain needs and requirements of his or her software; these needs must be presented in ways that are readily understandable to the software developer. A requirements analysis, then, such as the one described in this paper, is simply the means of communication between those who would use a piece of software and those who would develop it. The analysis from which this paper was derived attempted to bridge the "communications gap" between army combat engineers and software engineers. It sought to derive and state the software needs of army engineers in ways that are meaningful to software engineers. In doing this, it followed a natural sequence of investigation: (1) what does an army engineer do, (2) with which tasks can software help, (3) how much will it cost, and (4) where is the highest payoff? This paper demonstrates how each of these questions was addressed during an analysis of the functional requirements of engineer support software. Systems engineering methods were used in a task analysis, and a quantitative scoring method was developed to score responses regarding the feasibility of task automation. The paper discusses the methods used to perform utility and cost-benefit estimates.
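
The abstract does not give the report's actual criteria or weights, so the criteria names, weights, ratings, and tasks below are hypothetical; this is only a sketch of the general shape of a weighted scoring method for ranking task-automation candidates.

```python
# Illustrative weights -- not taken from the report.
WEIGHTS = {"feasibility": 0.5, "cost_benefit": 0.3, "payoff": 0.2}

def score(task_ratings):
    """Weighted sum of 1-5 ratings; higher = better automation candidate."""
    return sum(WEIGHTS[c] * r for c, r in task_ratings.items())

# Hypothetical engineer tasks with 1-5 ratings per criterion.
tasks = {
    "terrain analysis":  {"feasibility": 4, "cost_benefit": 3, "payoff": 5},
    "obstacle planning": {"feasibility": 2, "cost_benefit": 2, "payoff": 3},
}

# Rank tasks by descending score to answer "where is the highest payoff?"
ranked = sorted(tasks, key=lambda t: score(tasks[t]), reverse=True)
print(ranked)
```

A real analysis would elicit the ratings from combat engineers and calibrate the weights against cost-benefit estimates, as the paper describes.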

  20. Directed forgetting of complex pictures in an item method paradigm.

    Science.gov (United States)

    Hauswald, Anne; Kissler, Johanna

    2008-11-01

    An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task where all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect--that is, better recognition of "to-be-remembered" than of "to-be-forgotten" pictures--was observed, although its size was smaller than previously reported for words or line drawings. The magnitude of the directed forgetting effect correlated negatively with participants' depression and dissociation scores. The results indicate that, at least in an item method, directed forgetting occurs for complex pictures as well as words and simple line drawings. Furthermore, people with higher levels of dissociative or depressive symptoms exhibit altered memory encoding patterns.

  1. Evaluating the response of complex systems to environmental threats: the Σ II method

    International Nuclear Information System (INIS)

    Corynen, G.C.

    1983-05-01

    The Σ II method was developed to model and compute the probabilistic performance of systems that operate in a threatening environment. Although we emphasize the vulnerability of complex systems to earthquakes and to electromagnetic threats such as EMP (electromagnetic pulse), the method applies in general to most large-scale systems or networks that are embedded in a potentially harmful environment. Other methods exist for obtaining system vulnerability, but their complexity increases exponentially as the size of systems is increased. The complexity of the Σ II method is polynomial, and accurate solutions are now possible for problems for which current methods require the use of rough statistical bounds, confidence statements, and other approximations. For super-large problems, where the costs of precise answers may be prohibitive, a desired accuracy can be specified, and the Σ II algorithms will halt when that accuracy has been reached. We summarize the results of a theoretical complexity analysis - which is reported elsewhere - and validate the theory with computer experiments conducted both on worst-case academic problems and on more reasonable problems occurring in practice. Finally, we compare our method with the exact methods of Abraham and Nakazawa, and with current bounding methods, and we demonstrate the computational efficiency and accuracy of Σ II

  2. Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder

    Directory of Open Access Journals (Sweden)

    He Yan

    2017-01-01

    Full Text Available In engineering design, the basic complex method has insufficient global search ability for nonlinear optimization problems, so this paper presents a hybrid of the complex method with particle swarm optimization (PSO): the optimal particle, evaluated from the fitness function of the particle swarm, displaces a complex vertex so as to realize the optimality principle of the largest distance from the complex centroid. This method is applied to the constrained optimization design of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-sectional area of the box girder is taken as the objective function, its four size parameters as the design variables, and girder mechanical performance, manufacturing process, boundary sizes and other requirements as the constraint conditions. Then the complex method mixed with PSO is used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and the optimal results achieve the goal of lightweight design and reduced crane manufacturing cost. Practical engineering calculation and comparative analysis with the basic complex method show that the approach is reliable, practical and efficient.
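
As background for the baseline being hybridized, here is a minimal sketch of the basic complex (Box) method: reflect the worst vertex through the centroid of the remaining vertices, halving the step when the reflection does not improve. The PSO hybridization and the crane-girder constraints described in the paper are omitted; the objective and parameters are toy values.

```python
import numpy as np

def complex_method(f, vertices, alpha=1.3, iters=50):
    """Simplified Box complex method for unconstrained minimization:
    repeatedly replace the worst vertex by its over-reflection through
    the centroid of the other vertices."""
    V = np.array(vertices, float)
    for _ in range(iters):
        vals = np.array([f(v) for v in V])
        w = int(vals.argmax())                      # worst vertex
        c = (V.sum(axis=0) - V[w]) / (len(V) - 1)   # centroid of the rest
        x = c + alpha * (c - V[w])                  # over-reflection
        while f(x) >= vals[w]:                      # halve toward centroid
            x = (x + c) / 2
            if np.linalg.norm(x - c) < 1e-12:
                break
        V[w] = x
    return V[int(np.argmin([f(v) for v in V]))]

# Toy objective: squared distance from (1, 2); converges near [1, 2].
best = complex_method(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2,
                      [[0, 0], [4, 0], [0, 4], [4, 4]])
print(best)
```

In the paper's hybrid, the reflected point would instead be chosen by a small PSO run, which gives the step its global search ability.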

  3. Orbiter data reduction complex data processing requirements for the OFT mission evaluation team (level C)

    Science.gov (United States)

    1979-01-01

    This document addresses requirements for post-test data reduction in support of the Orbital Flight Tests (OFT) mission evaluation team, specifically those which are planned to be implemented in the ODRC (Orbiter Data Reduction Complex). Only those requirements which have been previously baselined by the Data Systems and Analysis Directorate configuration control board are included. This document serves as the control document between Institutional Data Systems Division and the Integration Division for OFT mission evaluation data processing requirements, and shall be the basis for detailed design of ODRC data processing systems.

  4. Effective teaching methods in higher education: requirements and barriers

    Directory of Open Access Journals (Sweden)

    NAHID SHIRANI BIDABADI

    2016-10-01

    Full Text Available Introduction: Teaching is one of the main components in educational planning, which is a key factor in conducting educational plans. Despite the importance of good teaching, the outcomes are far from ideal. The present qualitative study aimed to investigate effective teaching in higher education in Iran based on the experiences of the best professors in the country and the best local professors of Isfahan University of Technology. Methods: This qualitative content analysis study was conducted through purposeful sampling. Semi-structured interviews were conducted with ten faculty members (3 of them from the best professors in the country and 7 from the best local professors. Content analysis was performed by MAXQDA software. The codes, categories and themes were explored through an inductive process that began from semantic units or direct quotations to general themes. Results: According to the results of this study, the best teaching approach is the mixed method (student-centered together with teacher-centered plus educational planning and previous readiness. But teachers who want to teach using this method confront some barriers and requirements; some of these requirements concern professors’ behavior and some concern professors’ outlook. Also, there are some major barriers, some of which are associated with the professors’ practice and others are related to laws and regulations. Implications of these findings for teachers’ preparation in education are discussed. Conclusion: In the present study, it was illustrated that a good teaching method helps the students to question their preconceptions, and motivates them to learn, by putting them in a situation in which they come to see themselves as the authors of answers, as the agents of responsibility for change. But training through this method has some barriers and requirements. To achieve effective teaching, the faculty members of the universities

  5. Using Common Graphics Paradigms Implemented in a Java Applet to Represent Complex Scheduling Requirements

    Science.gov (United States)

    Jaap, John; Meyer, Patrick; Davis, Elizabeth

    1997-01-01

    The experiments planned for the International Space Station promise to be complex, lengthy and diverse. The scarcity of the space station resources will cause significant competition for resources between experiments. The scheduling job facing the Space Station mission planning software requires a concise and comprehensive description of the experiments' requirements (to ensure a valid schedule) and a good description of the experiments' flexibility (to effectively utilize available resources). In addition, the continuous operation of the station, the wide geographic dispersion of station users, and the budgetary pressure to reduce operations manpower make a low-cost solution mandatory. A graphical representation of the scheduling requirements for station payloads implemented via an Internet-based application promises to be an elegant solution that addresses all of these issues. The graphical representation of experiment requirements permits a station user to describe his experiment by defining "activities" and "sequences of activities". Activities define the resource requirements (with alternatives) and other quantitative constraints of tasks to be performed. Activities definitions use an "outline" graphics paradigm. Sequences define the time relationships between activities. Sequences may also define time relationships with activities of other payloads or space station systems. Sequences of activities are described by a "network" graphics paradigm. The bulk of this paper will describe the graphical approach to representing requirements and provide examples that show the ease and clarity with which complex requirements can be represented. A Java applet, to run in a web browser, is being developed to support the graphical representation of payload scheduling requirements. Implementing the entry and editing of requirements via the web solves the problems introduced by the geographic dispersion of users. Reducing manpower is accomplished by developing a concise

  6. Effective Teaching Methods in Higher Education: Requirements and Barriers.

    Science.gov (United States)

    Shirani Bidabadi, Nahid; Nasr Isfahani, Ahmmadreza; Rouhollahi, Amir; Khalili, Roya

    2016-10-01

    Teaching is one of the main components in educational planning, which is a key factor in conducting educational plans. Despite the importance of good teaching, the outcomes are far from ideal. The present qualitative study aimed to investigate effective teaching in higher education in Iran based on the experiences of the best professors in the country and the best local professors of Isfahan University of Technology. This qualitative content analysis study was conducted through purposeful sampling. Semi-structured interviews were conducted with ten faculty members (3 of them from the best professors in the country and 7 from the best local professors). Content analysis was performed by MAXQDA software. The codes, categories and themes were explored through an inductive process that began from semantic units or direct quotations to general themes. According to the results of this study, the best teaching approach is the mixed method (student-centered together with teacher-centered) plus educational planning and previous readiness. But teachers who want to teach using this method confront some barriers and requirements; some of these requirements concern professors' behavior and some concern professors' outlook. Also, there are some major barriers, some of which are associated with the professors' practice and others are related to laws and regulations. Implications of these findings for teachers' preparation in education are discussed. In the present study, it was illustrated that a good teaching method helps the students to question their preconceptions, and motivates them to learn, by putting them in a situation in which they come to see themselves as the authors of answers, as the agents of responsibility for change. But training through this method has some barriers and requirements. To have an effective teaching; the faculty members of the universities should be awarded of these barriers and requirements as a way to

  7. Hexographic Method of Complex Town-Planning Terrain Estimate

    Science.gov (United States)

    Khudyakov, A. Ju

    2017-11-01

    The article deals with the vital problem of a complex town-planning analysis based on the “hexographic” graphic-analytic method, compares it with conventional terrain estimate methods and contains examples of the method’s application. It discloses the author’s procedure for estimating restrictions and building a mathematical model which reflects not only conventional town-planning restrictions, but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory’s potential, and an unlimited number of estimation factors can be used. The method can be used for the integrated assessment of urban areas; it can also serve for preliminary evaluation of a territory’s commercial attractiveness in the preparation of investment projects. Applying the technique yields simple, informative graphics that experts can interpret directly; a definite advantage is that the results can be readily perceived even by those without professional training. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also

  8. Polar localization of Escherichia coli chemoreceptors requires an intact Tol–Pal complex

    Science.gov (United States)

    Santos, Thiago M. A.; Lin, Ti-Yu; Rajendran, Madhusudan; Anderson, Samantha M.; Weibel, Douglas B.

    2014-01-01

    Summary Subcellular biomolecular localization is critical for the metabolic and structural properties of the cell. The functional implications of the spatiotemporal distribution of protein complexes during the bacterial cell cycle have long been acknowledged; however, the molecular mechanisms for generating and maintaining their dynamic localization in bacteria are not completely understood. Here we demonstrate that the trans-envelope Tol–Pal complex, a widely conserved component of the cell envelope of Gram-negative bacteria, is required to maintain the polar positioning of chemoreceptor clusters in Escherichia coli. Localization of the chemoreceptors was independent of phospholipid composition of the membrane and the curvature of the cell wall. Instead, our data indicate that chemoreceptors interact with components of the Tol–Pal complex and that this interaction is required to polarly localize chemoreceptor clusters. We found that disruption of the Tol–Pal complex perturbs the polar localization of chemoreceptors, alters cell motility, and affects chemotaxis. We propose that the E. coli Tol–Pal complex restricts mobility of the chemoreceptor clusters at the cell poles and may be involved in regulatory mechanisms that co-ordinate cell division and segregation of the chemosensory machinery. PMID:24720726

  9. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
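
As an illustration of the kind of requirement check the paper motivates, the sketch below compares measured stimulus-to-display delays against a fixed latency budget. The timestamps and the one-to-one pairing are hypothetical simplifications; real latency testing must also handle dropped frames and asynchronous sampling.

```python
def latencies(stimulus_ts_ms, display_ts_ms):
    """Pair each stimulus timestamp with its displayed-frame timestamp
    and return the per-event delays in milliseconds."""
    return [d - s for s, d in zip(stimulus_ts_ms, display_ts_ms)]

def meets_requirement(lat_ms, limit_ms=20.0):
    """Worst-case check against a latency budget, e.g. the ~20 ms figure
    the abstract cites for a head-worn, virtual-VMC display."""
    return max(lat_ms) <= limit_ms

# Hypothetical measurements at a ~30 Hz stimulus rate.
stim = [0.0, 33.3, 66.7]
disp = [18.0, 52.0, 84.0]
lat = latencies(stim, disp)
print(lat, meets_requirement(lat))
```

The same check with `limit_ms` raised would model the less stringent HUD case, where imagery only supplements symbology guidance.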

  10. Analysis and application of classification methods of complex carbonate reservoirs

    Science.gov (United States)

    Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei

    2018-06-01

    There are abundant carbonate reservoirs from the Cenozoic to Mesozoic era in the Middle East. Due to variation in sedimentary environment and diagenetic process of carbonate reservoirs, several porosity types coexist in carbonate reservoirs. As a result, because of the complex lithologies and pore types as well as the impact of microfractures, the pore structure is very complicated. Therefore, it is difficult to accurately calculate the reservoir parameters. In order to accurately evaluate carbonate reservoirs, based on the pore structure evaluation of carbonate reservoirs, the classification methods of carbonate reservoirs are analyzed based on capillary pressure curves and flow units. Based on the capillary pressure curves, although the carbonate reservoirs can be classified, the relationship between porosity and permeability after classification is not ideal. On the basis of the flow units, the high-precision functional relationship between porosity and permeability after classification can be established. Therefore, the carbonate reservoirs can be quantitatively evaluated based on the classification of flow units. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD. Similarly, the average absolute error of calculated permeability of limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types, reservoir parameters could be calculated accurately. Therefore, characterizing pore structures and classifying reservoir types are very important to accurate evaluation of complex carbonate reservoirs in the Middle East.
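
The abstract does not spell out the flow-unit computation. A common formulation, assumed here rather than taken from the paper, is the flow-zone indicator: FZI = RQI / φz, with RQI = 0.0314·sqrt(k/φ) and φz = φ/(1−φ). Samples with similar FZI belong to one flow unit, within which a tight porosity-permeability relationship can then be fitted.

```python
import numpy as np

def fzi(phi, k_mD):
    """Flow-zone indicator (micrometres) from fractional porosity phi
    and permeability in mD: RQI = 0.0314*sqrt(k/phi), phi_z = phi/(1-phi),
    FZI = RQI / phi_z."""
    phi = np.asarray(phi, float)
    k = np.asarray(k_mD, float)
    rqi = 0.0314 * np.sqrt(k / phi)
    return rqi / (phi / (1.0 - phi))

# Hypothetical core samples: two tight, two good-quality.
phi = np.array([0.10, 0.12, 0.20, 0.22])
k   = np.array([1.0, 1.8, 90.0, 140.0])
print(np.round(fzi(phi, k), 2))  # two FZI clusters -> two flow units
```

Grouping samples into FZI bands before fitting k as a function of φ is what yields the "high-precision functional relationship" per class that the abstract reports.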

  11. Design Analysis Method for Multidisciplinary Complex Product using SysML

    Directory of Open Access Journals (Sweden)

    Liu Jihong

    2017-01-01

    Full Text Available In the design of multidisciplinary complex products, model-based systems engineering methods are widely used. However, these methodologies contain only a modeling order and simple analysis steps, and lack an integrated design analysis method supporting the whole process. To solve this problem, a conceptual design analysis method integrating modern design methods is proposed. First, based on requirement analysis with a quantization matrix, the user’s needs are quantitatively evaluated and translated into system requirements. Then, by function decomposition against a function knowledge base, the total function is semi-automatically decomposed into predefined atomic functions. Each function is matched to a predefined structure through the behaviour layer, using function-structure mapping based on interface matching. Finally, based on the design structure matrix (DSM), the structure reorganization is completed. The analysis process is implemented with SysML, and illustrated through an aircraft air-conditioning system for validation.
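
The DSM-based structure reorganization is not detailed in the abstract. The sketch below shows one standard DSM operation (assumed here, not taken from the paper): sequencing components so that dependencies flow forward, with hypothetical component names.

```python
def dsm_sequence(names, dsm):
    """Reorder components so information flows forward.
    dsm[i][j] == 1 means component i depends on component j.
    Repeatedly schedule components with no unscheduled dependencies;
    real DSM tools additionally cluster mutually coupled blocks."""
    order, remaining = [], set(range(len(names)))
    while remaining:
        ready = [i for i in remaining
                 if all(dsm[i][j] == 0 for j in remaining)]
        if not ready:                 # fully coupled block: schedule together
            order.extend(sorted(remaining))
            break
        for i in sorted(ready):
            order.append(i)
            remaining.discard(i)
    return [names[i] for i in order]

names = ["structure", "requirements", "function", "behaviour"]
#        s  r  f  b      (row depends on column)
dsm = [[0, 0, 1, 1],     # structure depends on function and behaviour
       [0, 0, 0, 0],     # requirements depend on nothing
       [0, 1, 0, 0],     # function depends on requirements
       [0, 0, 1, 0]]     # behaviour depends on function
print(dsm_sequence(names, dsm))
```

On this toy matrix the sequencing recovers the order the abstract describes: requirements, then function, then behaviour, then structure.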

  12. Reduction in requirements for allogeneic blood products: nonpharmacologic methods.

    Science.gov (United States)

    Hardy, J F; Bélisle, S; Janvier, G; Samama, M

    1996-12-01

    Various strategies have been proposed to decrease bleeding and allogeneic transfusion requirements during and after cardiac operations. This article attempts to document the usefulness, or lack thereof, of the nonpharmacologic methods available in clinical practice. Blood conservation methods were reviewed in chronologic order, as they become available to patients during the perisurgical period. The literature in support of or against each strategy was reexamined critically. Avoidance of preoperative anemia and adherence to published guidelines for the practice of transfusion are of paramount importance. Intraoperatively, tolerance of low hemoglobin concentrations and use of autologous blood (predonated or harvested before bypass) will reduce allogeneic transfusions. The usefulness of plateletpheresis and retransfusion of shed mediastinal fluid remains controversial. Intraoperatively and postoperatively, maintenance of normothermia contributes to improved hemostasis. Several approaches have been shown to be effective. An efficient combination of methods can reduce, and sometimes abolish, the need for allogeneic blood products after cardiac operations, inasmuch as all those involved in the care of cardiac surgical patients adhere thoughtfully to existing transfusion guidelines.

  13. Feeding cells induced by phytoparasitic nematodes require γ-tubulin ring complex for microtubule reorganization.

    Directory of Open Access Journals (Sweden)

    Mohamed Youssef Banora

    2011-12-01

    Full Text Available Reorganization of the microtubule network is important for the fast isodiametric expansion of giant feeding cells induced by root-knot nematodes. The efficiency of microtubule reorganization depends on the nucleation of new microtubules, their elongation rate and the activity of microtubule severing factors. New microtubules in plants are nucleated by cytoplasmic or microtubule-bound γ-tubulin ring complexes. Here we investigate the requirement of γ-tubulin complexes for giant feeding cell development using the interaction between Arabidopsis and Meloidogyne spp. as a model system. Immunocytochemical analyses demonstrate that γ-tubulin localizes to both the cortical cytoplasm and the mitotic microtubule arrays of the giant cells, where it can associate with microtubules. The transcripts of two Arabidopsis γ-tubulin genes (TUBG1 and TUBG2) and two γ-tubulin complex protein genes (GCP3 and GCP4) are upregulated in galls. Electron microscopy demonstrates association of GCP3 and γ-tubulin as part of a complex in the cytoplasm of giant cells. Knockout of either or both γ-tubulin genes results in gene dose-dependent alteration of the morphology of the feeding site and failure of nematode life cycle completion. We conclude that the γ-tubulin complex is essential for the control of microtubular network remodelling in the course of initiation and development of giant feeding cells, and for the successful reproduction of nematodes in their plant hosts.

  14. Organisational reviews - requirements, methods and experience. Progress report 2006

    International Nuclear Information System (INIS)

    Reiman, T.; Oedewald, P.; Wahlstroem, B.; Rollenhagen, C.; Kahlbom, U.

    2007-04-01

    Organisational reviews are important instruments in the continuous quest for improved performance. In the nuclear field there has been an increasing regulatory interest in organisational performance, because incidents and accidents often point to organisational deficiencies as one of the major precursors. Many methods for organisational reviews have been proposed, but they are mostly based on ad hoc approaches to specific problems. The absence of well-established techniques for organisational reviews has already been shown to cause discussions and controversies on different levels. The aim of the OrRe project is to collect the experiences from organisational reviews carried out so far and to reflect them in a theoretical model of organisational performance. Furthermore, the project aims to reflect on the criteria for the definition of the scope and content of organisational reviews. Finally, recommendations will be made for guidance for people participating in organisational reviews. This progress report describes regulatory practices in Finland and Sweden together with some case examples of organisational reviews and assessment in both countries. Some issues of concern are raised and an outline for next year's work is proposed. Issues of concern include the sufficient depth of the assessment, the required competence in assessments, data and criteria problems, definition of the boundaries of the system to be assessed, and the necessary internal support and organisational maturity required for successful assessments. Finally, plans for next year's work are outlined. (au)

  15. Organisational reviews - requirements, methods and experience. Progress report 2006

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, T.; Oedewald, P.; Wahlstroem, B. [VTT, Technical Research Centre of Finland (Finland); Rollenhagen, C.; Kahlbom, U. [Maelardalen University (FI)

    2007-04-15

    Organisational reviews are important instruments in the continuous quest for improved performance. In the nuclear field there has been an increasing regulatory interest in organisational performance, because incidents and accidents often point to organisational deficiencies as one of the major precursors. Many methods for organisational reviews have been proposed, but they are mostly based on ad hoc approaches to specific problems. The absence of well-established techniques for organisational reviews has already been shown to cause discussions and controversies on different levels. The aim of the OrRe project is to collect the experiences from organisational reviews carried out so far and to reflect them in a theoretical model of organisational performance. Furthermore, the project aims to reflect on the criteria for the definition of the scope and content of organisational reviews. Finally, recommendations will be made for guidance for people participating in organisational reviews. This progress report describes regulatory practices in Finland and Sweden together with some case examples of organisational reviews and assessment in both countries. Some issues of concern are raised and an outline for next year's work is proposed. Issues of concern include the sufficient depth of the assessment, the required competence in assessments, data and criteria problems, definition of the boundaries of the system to be assessed, and the necessary internal support and organisational maturity required for successful assessments. Finally, plans for next year's work are outlined. (au)

  16. Approaching complexity by stochastic methods: From biological systems to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)

    2011-09-15

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
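The reconstruction of a Langevin equation from data, described above, boils down to estimating the conditional moments (Kramers-Moyal coefficients) D1(x) = ⟨dx | x⟩/dt and D2(x) = ⟨dx² | x⟩/(2dt). A minimal sketch of this procedure, applied to a synthetic Ornstein-Uhlenbeck process (all parameter values here are illustrative assumptions, not from the review):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Synthetic Ornstein-Uhlenbeck path, dx = -a*x dt + sqrt(2*b) dW (Euler scheme).
# lfilter implements the recursion x[i] = (1 - a*dt) * x[i-1] + noise[i].
a, b, dt, n = 1.0, 0.5, 1e-3, 2_000_000
noise = rng.normal(0.0, np.sqrt(2 * b * dt), n)
x = lfilter([1.0], [1.0, -(1.0 - a * dt)], noise)

# Kramers-Moyal conditional moments, binned in x:
#   D1(x) = <dx | x> / dt        (drift)
#   D2(x) = <dx^2 | x> / (2 dt)  (diffusion)
dx = np.diff(x)
bins = np.linspace(-1.5, 1.5, 31)
idx = np.digitize(x[:-1], bins)
centers, d1, d2 = [], [], []
for k in range(1, len(bins)):
    m = idx == k
    if m.sum() > 1000:                      # skip poorly sampled bins
        centers.append(0.5 * (bins[k - 1] + bins[k]))
        d1.append(dx[m].mean() / dt)
        d2.append((dx[m] ** 2).mean() / (2 * dt))
centers, d1, d2 = map(np.array, (centers, d1, d2))

slope = np.polyfit(centers, d1, 1)[0]       # linear drift: should recover -a
d2_avg = float(d2.mean())                   # constant diffusion: should recover b
```

The recovered drift slope approximates -a and the diffusion estimate approximates b, which is the sense in which the Langevin equation is "reconstructed from data"; real applications must additionally verify the Markov property above the Markov-Einstein scale.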

  17. Approaching complexity by stochastic methods: From biological systems to turbulence

    International Nuclear Information System (INIS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-01-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.

  18. Complexity methods applied to turbulence in plasma astrophysics

    Science.gov (United States)

    Vlahos, L.; Isliker, H.

    2016-09-01

    In this review many of the well-known tools for the analysis of complex systems are used to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organised characteristics of the explosive magnetic energy release and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Non-Linear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the redistribution of the magnetic field locally, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a Cellular Automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
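The SOC cellular-automaton idea invoked in this abstract can be illustrated with the classic Bak-Tang-Wiesenfeld sandpile, the archetypal toy model of avalanche dynamics (this is a generic illustration, not the authors' solar-flare automaton):

```python
import numpy as np

def relax(grid):
    """Topple every site holding >= 4 grains until the grid is stable.

    A toppling site loses 4 grains, one to each neighbour; grains pushed
    past the edge are lost (open boundaries). Returns the avalanche size
    (total number of topplings triggered by one added grain).
    """
    topples = 0
    while (grid >= 4).any():
        over = grid >= 4
        topples += int(over.sum())
        grid[over] -= 4
        grid[1:, :]  += over[:-1, :]
        grid[:-1, :] += over[1:, :]
        grid[:, 1:]  += over[:, :-1]
        grid[:, :-1] += over[:, 1:]
    return topples

# Drive the pile slowly: one grain at a random site per time step.
rng = np.random.default_rng(1)
grid = np.zeros((20, 20), dtype=int)
sizes = [0] * 0
for _ in range(2000):
    i, j = rng.integers(0, 20, size=2)
    grid[i, j] += 1
    sizes.append(relax(grid))
```

After a transient, the avalanche-size distribution approaches a power law without any tuned control parameter, which is the "self-organized" part of SOC, and the same statistical reasoning underlies automaton models of flaring.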

  19. Training requirements and responsibilities for the Buried Waste Integrated Demonstration at the Radioactive Waste Management Complex

    International Nuclear Information System (INIS)

    Vega, H.G.; French, S.B.; Rick, D.L.

    1992-09-01

    The Buried Waste Integrated Demonstration (BWID) is scheduled to conduct intrusive (hydropunch screening tests, bore hole installation, soil sampling, etc.) and nonintrusive (geophysical surveys) studies at the Radioactive Waste Management Complex (RWMC). These studies and activities will be limited to specific locations at the RWMC. The duration of these activities will vary, but most tasks are not expected to exceed 90 days. The BWID personnel requested that the Waste Management Operational Support Group establish the training requirements and training responsibilities for BWID personnel and BWID subcontractor personnel. This document specifies these training requirements and responsibilities. While the responsibilities of BWID and the RWMC are, in general, defined in the interface agreement, the training elements are based on regulatory requirements, DOE orders, DOE-ID guidance, state law, and the nature of the work to be performed

  20. Microscopic methods for the interactions between complex nuclei

    International Nuclear Information System (INIS)

    Ikeda, Kiyomi; Tamagaki, Ryozo; Saito, Sakae; Horiuchi, Hisashi; Tohsaki-Suzuki, Akihiro.

    1978-01-01

    Microscopic study on composite-particle interaction performed in Japan is described in this paper. In chapter 1, brief historical description of the study is presented. In chapter 2, the theory of resonating group method (RGM) for describing microscopically the interaction between nuclei (clusters) is reviewed, and formulation on the description is presented. It is shown that the generator coordinate method (GCM) is a useful one for the description of interaction between shell model clusters, and that the kernels in the RGM are easily obtained from those of the GCM. The inter-cluster interaction can be well described by the orthogonality condition model (OCM). In chapter 3, the calculational procedures for the kernels of GCM, RGM and OCM and some properties related to their calculation are discussed. The GCM kernels for various types of systems are treated. The RGM kernels are evaluated by the integral transformation of GCM kernels. The problems related to the RGM norm kernel (RGM-NK) are discussed. The projection operator onto the Pauli-allowed state in OCM is obtained directly from the solution of the eigenvalue problem of RGM-NK. In chapter 4, the exchange kernels due to the antisymmetrization are derived in an analytical way with the symbolical use of computer memory by taking the α + ¹⁶O system as a typical example. New algorithms for deriving analytically the generator coordinate kernel (GCM kernel) are presented. In chapter 5, precise generalization of the Kohn-Hulthen-Kato variational method for the scattering matrix is made for the purpose of microscopic study of reactions between complex nuclei with many channels coupled. (Kato, T.)

  1. Formal methods applied to industrial complex systems implementation of the B method

    CERN Document Server

    Boulanger, Jean-Louis

    2014-01-01

    This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from

  2. Low-complexity computation of plate eigenmodes with Vekua approximations and the method of particular solutions

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent

    2013-11-01

    This paper extends the method of particular solutions (MPS) to the computation of eigenfrequencies and eigenmodes of thin plates, in the framework of the Kirchhoff-Love plate theory. Specific approximation schemes are developed, with plane waves (MPS-PW) or Fourier-Bessel functions (MPS-FB). This framework also requires a suitable formulation of the boundary conditions. Numerical tests, on two plates with various boundary conditions, demonstrate that the proposed approach provides competitive results with standard numerical schemes such as the finite element method, at reduced complexity, and with large flexibility in the implementation choices.
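The mechanics of the MPS can be sketched on a simpler problem than the Kirchhoff-Love plate treated in the paper: the Dirichlet eigenvalues of the Helmholtz equation on the unit disk, using a Fourier-Bessel basis (MPS-FB style) and the subspace-angle formulation. One scans the wavenumber k and locates eigenfrequencies where the smallest singular value of the boundary part of an orthonormalized basis dips toward zero. The parameter choices below are illustrative assumptions:

```python
import numpy as np
from scipy.special import jv

def sigma(k, n_basis=8, n_bdy=120, n_int=60, seed=0):
    """Smallest singular value of the boundary block of an orthonormalized
    Fourier-Bessel basis on the unit disk (subspace-angle variant of MPS)."""
    rng = np.random.default_rng(seed)
    theta_b = np.linspace(0, 2 * np.pi, n_bdy, endpoint=False)
    r_i = np.sqrt(rng.uniform(0, 1, n_int))        # interior points, area-uniform
    theta_i = rng.uniform(0, 2 * np.pi, n_int)
    r = np.concatenate([np.ones(n_bdy), r_i])
    th = np.concatenate([theta_b, theta_i])
    # particular solutions of the Helmholtz equation: J_m(k r) cos(m theta)
    A = np.stack([jv(m, k * r) * np.cos(m * th) for m in range(n_basis)], axis=1)
    Q, _ = np.linalg.qr(A)                          # orthonormalize over all points
    return np.linalg.svd(Q[:n_bdy], compute_uv=False)[-1]

# Scan k: minima of sigma(k) mark Dirichlet eigenfrequencies.
ks = np.arange(2.0, 3.0, 0.005)
vals = [sigma(k) for k in ks]
k_star = float(ks[int(np.argmin(vals))])
# first Dirichlet eigenvalue of the unit disk: first zero of J_0, ~2.4048
```

The plate problem in the paper is fourth order, so its basis must combine oscillatory and evanescent solutions and the boundary conditions involve bending moments and shear, but the scan-and-detect structure is the same.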

  3. Functional Requirements for Fab-7 Boundary Activity in the Bithorax Complex

    Science.gov (United States)

    Wolle, Daniel; Cleard, Fabienne; Aoki, Tsutomu; Deshpande, Girish; Karch, Francois

    2015-01-01

    Chromatin boundaries are architectural elements that determine the three-dimensional folding of the chromatin fiber and organize the chromosome into independent units of genetic activity. The Fab-7 boundary from the Drosophila bithorax complex (BX-C) is required for the parasegment-specific expression of the Abd-B gene. We have used a replacement strategy to identify sequences that are necessary and sufficient for Fab-7 boundary function in the BX-C. Fab-7 boundary activity is known to depend on factors that are stage specific, and we describe a novel ∼700-kDa complex, the late boundary complex (LBC), that binds to Fab-7 sequences that have insulator functions in late embryos and adults. We show that the LBC is enriched in nuclear extracts from late, but not early, embryos and that it contains three insulator proteins, GAF, Mod(mdg4), and E(y)2. Its DNA binding properties are unusual in that it requires a minimal sequence of >65 bp; however, other than a GAGA motif, the three Fab-7 LBC recognition elements display few sequence similarities. Finally, we show that mutations which abrogate LBC binding in vitro inactivate the Fab-7 boundary in the BX-C. PMID:26303531

  4. Functional Requirements for Fab-7 Boundary Activity in the Bithorax Complex.

    Science.gov (United States)

    Wolle, Daniel; Cleard, Fabienne; Aoki, Tsutomu; Deshpande, Girish; Schedl, Paul; Karch, Francois

    2015-11-01

    Chromatin boundaries are architectural elements that determine the three-dimensional folding of the chromatin fiber and organize the chromosome into independent units of genetic activity. The Fab-7 boundary from the Drosophila bithorax complex (BX-C) is required for the parasegment-specific expression of the Abd-B gene. We have used a replacement strategy to identify sequences that are necessary and sufficient for Fab-7 boundary function in the BX-C. Fab-7 boundary activity is known to depend on factors that are stage specific, and we describe a novel ∼700-kDa complex, the late boundary complex (LBC), that binds to Fab-7 sequences that have insulator functions in late embryos and adults. We show that the LBC is enriched in nuclear extracts from late, but not early, embryos and that it contains three insulator proteins, GAF, Mod(mdg4), and E(y)2. Its DNA binding properties are unusual in that it requires a minimal sequence of >65 bp; however, other than a GAGA motif, the three Fab-7 LBC recognition elements display few sequence similarities. Finally, we show that mutations which abrogate LBC binding in vitro inactivate the Fab-7 boundary in the BX-C. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  5. Complex of radioanalytical methods for radioecological study of STS

    International Nuclear Information System (INIS)

    Artemev, O.I.; Larin, V.N.; Ptitskaya, L.D.; Smagulova, G.S.

    1998-01-01

    Today the main task of the Institute of Radiation Safety and Ecology is the assessment of parameters of the radioecological situation in areas of nuclear testing on the territory of the former Semipalatinsk Test Site (STS). According to the diagram below, the radioecological study begins with field radiometry and environmental sampling followed by coordinate fixation. This work is performed by the staff of the Radioecology Laboratory equipped with state-of-the-art devices of dosimetry and radiometry. All the devices annually undergo the State Check by the RK Gosstandard Centre in Almaty. Air samples are also collected for determination of radon content. Environmental samples are measured for total gamma activity in order to dispatch and discard samples with an insufficient level of homogenization. Samples are measured with a gamma radiometry installation containing a NaI(Tl) scintillation detector. The installation background is measured every day and many times. The duration of measurement depends on sample activity. Further, samples are measured with alpha and beta radiometers for total alpha and beta activity, which characterizes the radioactive contamination of the sampling locations. Apart from the Radiometry Laboratory, the analytical complex includes the Radiochemistry and Gamma Spectrometry Laboratories. The direct gamma spectral (instrumental) methods in most cases allow one to obtain sufficiently rapid information about the radionuclides present in a sample. The state-of-the-art equipment together with computer technology provides high quantitative and qualitative precision and high productivity as well. One of the advantages of the method is that samples after measurement maintain their state and can be used for repeated measurements or radiochemical reanalyses. The Gamma Spectrometry Laboratory has three state-of-the-art gamma spectral installations consisting of high resolution semi-conductive detectors and equipped with

  6. Petascale Many Body Methods for Complex Correlated Systems

    Science.gov (United States)

    Pruschke, Thomas

    2012-02-01

    Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educate the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.

  7. Number theoretic methods in cryptography complexity lower bounds

    CERN Document Server

    Shparlinski, Igor

    1999-01-01

    The book introduces new techniques which imply rigorous lower bounds on the complexity of some number theoretic and cryptographic problems. These methods and techniques are based on bounds of character sums and numbers of solutions of some polynomial equations over finite fields and residue rings. It also contains a number of open problems and proposals for further research. We obtain several lower bounds, exponential in terms of log p, on the degrees and orders of • polynomials; • algebraic functions; • Boolean functions; • linear recurring sequences; coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as pI/He). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case of d = 2 is of special interest since it corresponds to the representation of the rightmost bit of the discrete logarithm and defines whether the argument is a quadratic...

  8. Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity

    Science.gov (United States)

    Zhang, Jihui; Xu, Junqin

    Supply chain is a special kind of complex network. Its complexity and uncertainty make it very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain’s complexity and its efficiency, supply chain complexity management becomes a major challenge of today’s business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a ‘Supply Chain Network Analysis’ (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and successively investigated by network analysis. The result of this study shows that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.
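The core of any entropy-based complexity index for a flow network is straightforward: normalize the flows into a probability distribution and measure its entropy. As a simplified stand-in for the paper's fuzzy-entropy measure (the network and its flow values below are hypothetical), a Shannon-entropy version looks like this:

```python
import numpy as np

def flow_entropy(F):
    """Shannon entropy (bits) of a supply network's flow distribution.

    F[i, j] = flow of goods from sector i to sector j. Higher entropy means
    flows are spread evenly over many links (a more 'complex' network to
    control); low entropy means a few dominant links.
    """
    p = F[F > 0] / F.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical 3-sector networks: a linear supplier->manufacturer->retailer
# chain versus a fully connected network with evenly spread flows.
F_chain = np.array([[0.0, 10.0, 0.0],
                    [0.0,  0.0, 10.0],
                    [0.0,  0.0, 0.0]])
F_even = np.full((3, 3), 1.0)

h_chain = flow_entropy(F_chain)   # two equal links -> 1 bit
h_even = flow_entropy(F_even)     # nine equal links -> log2(9) bits
```

A fuzzy-entropy variant would replace the crisp probabilities with membership degrees, but the ranking it induces (chain simpler than mesh) is the same idea.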

  9. a Range Based Method for Complex Facade Modeling

    Science.gov (United States)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    the complex architecture. From the point cloud we can extract a false-colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics. In fact the Displacement modifier allows one to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Similar to the bump map, the displacement modifier does not only simulate the effect, but really deforms the planar surface. In this way the 3d model can be used not only in a static representation, but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3d Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3d raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large-scale representations. It can also be used to represent particular effects such as the deformation of walls in a complete 3d way.

  10. A RANGE BASED METHOD FOR COMPLEX FACADE MODELING

    Directory of Open Access Journals (Sweden)

    A. Adami

    2012-09-01

    homogeneous point cloud of the complex architecture. From the point cloud we can extract a false-colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics. In fact the Displacement modifier allows one to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Similar to the bump map, the displacement modifier does not only simulate the effect, but really deforms the planar surface. In this way the 3d model can be used not only in a static representation, but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3d Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3d raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large-scale representations. It can also be used to represent particular effects such as the deformation of walls in a complete 3d way.

  11. Symmetrized complex amplitudes for He double photoionization from the time-dependent close coupling and exterior complex scaling methods

    International Nuclear Information System (INIS)

    Horner, D.A.; Colgan, J.; Martin, F.; McCurdy, C.W.; Pindzola, M.S.; Rescigno, T.N.

    2004-01-01

    Symmetrized complex amplitudes for the double photoionization of helium are computed by the time-dependent close-coupling and exterior complex scaling methods, and it is demonstrated that both methods are capable of the direct calculation of these amplitudes. The results are found to be in excellent agreement with each other and in very good agreement with results of other ab initio methods and experiment

  12. Application of Lattice Boltzmann Methods in Complex Mass Transfer Systems

    Science.gov (United States)

    Sun, Ning

    Lattice Boltzmann Method (LBM) is a novel computational fluid dynamics method that can easily handle complex and dynamic boundaries, couple local or interfacial interactions/reactions, and be easily parallelized, allowing for simulation of large systems. While most current LBM studies focus on fluid dynamics, the inherent power of this method makes it an ideal candidate for the study of mass transfer systems involving complex/dynamic microstructures and local reactions. In this thesis, LBM is introduced as an alternative computational method for the study of electrochemical energy storage systems (Li-ion batteries (LIBs) and electric double layer capacitors (EDLCs)) and transdermal drug design on the mesoscopic scale. Based on traditional LBM, the following in-depth studies have been carried out: (1) For EDLCs, the simulation of diffuse charge dynamics is carried out for both the charge and the discharge processes on 2D systems of complex random electrode geometries (pure random, random spheres and random fibers). The steric effect of concentrated solutions is considered by using modified Poisson-Nernst-Planck (MPNP) equations and compared with regular Poisson-Nernst-Planck (PNP) systems. The effects of electrode microstructures (electrode density, electrode filler morphology, filler size, etc.) on the net charge distribution and charge/discharge time are studied in detail. The influence of the applied potential during the discharging process is also discussed. (2) For the study of dendrite formation on the anode of LIBs, it is shown that the lattice Boltzmann model can capture all the experimentally observed features of microstructure evolution at the anode, from smooth to mossy to dendritic. The mechanism of the dendrite formation process on the mesoscopic scale is discussed in detail and compared with the traditional Sand's time theories. It shows that dendrite formation is closely related to the inhomogeneous reactivity at the electrode-electrolyte interface
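The stream-and-collide structure underlying all of these LBM applications is easiest to see in its smallest instance: a D1Q2 lattice for pure diffusion (a generic textbook sketch, not the thesis' battery or EDLC models). Two populations hop one site left or right per step and relax toward the equilibrium f_eq = ρ/2, giving a diffusion coefficient D = (τ - 1/2) in lattice units; with τ = 1 the variance of an initial point mass grows exactly as 2Dt:

```python
import numpy as np

# D1Q2 lattice Boltzmann for pure diffusion: two populations stream +-1 site
# per step; BGK collision toward f_eq = rho/2 gives D = (tau - 1/2).
N, tau, steps = 801, 1.0, 200
D = tau - 0.5

rho0 = np.zeros(N)
rho0[N // 2] = 1.0                 # point mass of tracer at the center
f_plus = 0.5 * rho0                # population streaming to the right
f_minus = 0.5 * rho0               # population streaming to the left

for _ in range(steps):
    # streaming (periodic wrap; the pulse never reaches the edge here)
    f_plus = np.roll(f_plus, 1)
    f_minus = np.roll(f_minus, -1)
    # BGK collision: relax both populations toward local equilibrium
    rho = f_plus + f_minus
    f_plus += (0.5 * rho - f_plus) / tau
    f_minus += (0.5 * rho - f_minus) / tau

rho = f_plus + f_minus
x = np.arange(N) - N // 2
mass = rho.sum()                    # conserved exactly by collision + streaming
var = (rho * x**2).sum() / mass     # should equal 2 * D * steps
```

The extensions used in the thesis, multiple coupled distributions for ions and potential (MPNP), reactive boundary rules for deposition, follow by adding source terms and boundary conditions to this same loop, which is also what makes the method easy to parallelize.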

  13. X-ray-enhanced cancer cell migration requires the linker of nucleoskeleton and cytoskeleton complex.

    Science.gov (United States)

    Imaizumi, Hiromasa; Sato, Katsutoshi; Nishihara, Asuka; Minami, Kazumasa; Koizumi, Masahiko; Matsuura, Nariaki; Hieda, Miki

    2018-04-01

    The linker of nucleoskeleton and cytoskeleton (LINC) complex is a multifunctional protein complex that is involved in various processes at the nuclear envelope, including nuclear migration, mechanotransduction, chromatin tethering and DNA damage response. We recently showed that a nuclear envelope protein, Sad1 and UNC84 domain protein 1 (SUN1), a component of the LINC complex, has a critical function in cell migration. Although ionizing radiation activates cell migration and invasion in vivo and in vitro, the underlying molecular mechanism remains unknown. Here, we examined the involvement of the LINC complex in radiation-enhanced cell migration and invasion. A sublethal dose of X-ray radiation promoted human breast cancer MDA-MB-231 cell migration and invasion, whereas carbon ion beam radiation suppressed these processes in a dose-dependent manner. Depletion of SUN1 and SUN2 significantly suppressed X-ray-enhanced cell migration and invasion. Moreover, depletion or overexpression of each SUN1 splicing variant revealed that SUN1_888 containing 888 amino acids of SUN1 but not SUN1_916 was required for X-ray-enhanced migration and invasion. In addition, the results suggested that X-ray irradiation affected the expression level of SUN1 splicing variants and a SUN protein binding partner, nesprins. Taken together, our findings supported that the LINC complex contributed to photon-enhanced cell migration and invasion. © 2018 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.

  14. Managing today's complex healthcare business enterprise: reflections on distinctive requirements of healthcare management education.

    Science.gov (United States)

    Welton, William E

    2004-01-01

    In early 2001, the community of educational programs offering master's-level education in healthcare management began an odyssey to modernize its approach to the organization and delivery of healthcare management education. The community recognized that cumulative long-term changes within healthcare management practice required a careful examination of healthcare management context and manpower requirements. This article suggests an evidence-based rationale for defining the distinctive elements of healthcare management, thus suggesting a basis for review and transformation of master's-level healthcare management curricula. It also suggests ways to modernize these curricula in a manner that recognizes the distinctiveness of the healthcare business enterprise as well as the changing management roles and careers within these complex organizations and systems. Through such efforts, the healthcare management master's-level education community would be better prepared to meet current and future challenges, to increase its relevance to the management practice community, and to allocate scarce faculty and program resources more effectively.

  15. Integrated complex care coordination for children with medical complexity: A mixed-methods evaluation of tertiary care-community collaboration

    Directory of Open Access Journals (Sweden)

    Cohen Eyal

    2012-10-01

    Full Text Available Abstract Background Primary care medical homes may improve health outcomes for children with special healthcare needs (CSHCN) by improving care coordination. However, community-based primary care practices may be challenged to deliver comprehensive care coordination to complex subsets of CSHCN such as children with medical complexity (CMC). Linking a tertiary care center with the community may achieve cost-effective and high-quality care for CMC. The objective of this study was to evaluate the outcomes of community-based complex care clinics integrated with a tertiary care center. Methods A before- and after-intervention study design with mixed (quantitative/qualitative) methods was utilized. Clinics at two community hospitals distant from tertiary care were staffed by local community pediatricians with the tertiary care center nurse practitioner and linked with primary care providers. Eighty-one children with underlying chronic conditions, fragility, requirement for high intensity care and/or technology assistance, and involvement of multiple providers participated. Main outcome measures included health care utilization and expenditures, parent reports of parent- and child-quality of life [QOL (SF-36®, CPCHILD©, PedsQL™)], and family-centered care (MPOC-20®). Comparisons were made in equal (up to 1 year) pre- and post-periods, supplemented by qualitative perspectives of families and pediatricians. Results Total health care system costs decreased from a median (IQR) of $244 (981) per patient per month (PPPM) pre-enrolment to $131 (355) PPPM post-enrolment (p=.007), driven primarily by fewer inpatient days in the tertiary care center (p=.006). Parents reported decreased out of pocket expenses (p© domains [Health Standardization Section (p=.04); Comfort and Emotions (p=.03)], while total CPCHILD© score decreased between baseline and 1 year (p=.003). Parents and providers reported the ability to receive care close to home as a key benefit. Conclusions Complex

  16. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional efforts are required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to simulate, at a given precision and without error, the actual 3D model using machine learning algorithms.
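The idea of scoring complexity as prediction difficulty from partial data can be sketched in a few lines. The synthetic models, the 1-nearest-neighbour predictor, and all numbers below are hypothetical stand-ins (the paper does not specify its algorithms); the sketch only shows that a structure with more boundary (a faulted layer) is typically harder to predict from sparse samples than a flat layer.

```python
# Hedged sketch: "complexity" as the error rate of a predictor trained on
# partial observations of a synthetic 2-D geological unit map.
import random
random.seed(0)

def unit_simple(x, y):
    # flat horizontal layering: one short class boundary
    return int(y > 0.5)

def unit_faulted(x, y):
    # the same layer offset across a vertical "fault" at x = 0.5
    return int(y > (0.3 if x < 0.5 else 0.7))

def complexity(unit, n_train=80, n_test=400):
    """Misclassification rate of a 1-NN predictor trained on sparse samples."""
    pts = [(random.random(), random.random()) for _ in range(n_train)]
    labels = [unit(x, y) for x, y in pts]
    errors = 0
    for _ in range(n_test):
        x, y = random.random(), random.random()
        j = min(range(n_train),
                key=lambda k: (pts[k][0] - x) ** 2 + (pts[k][1] - y) ** 2)
        errors += labels[j] != unit(x, y)
    return errors / n_test

c_simple = complexity(unit_simple)
c_faulted = complexity(unit_faulted)
print(c_simple, c_faulted)  # higher error rate = higher estimated complexity
```

In the paper's terms, a model needing more "partial data" before such a predictor reproduces it without error would score as more complex.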

  17. The dynein regulatory complex is required for ciliary motility and otolith biogenesis in the inner ear.

    Science.gov (United States)

    Colantonio, Jessica R; Vermot, Julien; Wu, David; Langenbacher, Adam D; Fraser, Scott; Chen, Jau-Nian; Hill, Kent L

    2009-01-08

    In teleosts, proper balance and hearing depend on mechanical sensors in the inner ear. These sensors include actin-based microvilli and microtubule-based cilia that extend from the surface of sensory hair cells and attach to biomineralized 'ear stones' (or otoliths). Otolith number, size and placement are under strict developmental control, but the mechanisms that ensure otolith assembly atop specific cells of the sensory epithelium are unclear. Here we demonstrate that cilia motility is required for normal otolith assembly and localization. Using in vivo video microscopy, we show that motile tether cilia at opposite poles of the otic vesicle create fluid vortices that attract otolith precursor particles, thereby biasing an otherwise random distribution to direct localized otolith seeding on tether cilia. Independent knockdown of subunits for the dynein regulatory complex and outer-arm dynein disrupt cilia motility, leading to defective otolith biogenesis. These results demonstrate a requirement for the dynein regulatory complex in vertebrates and show that cilia-driven flow is a key epigenetic factor in controlling otolith biomineralization.

  18. A Protein Complex Required for Polymerase V Transcripts and RNA- Directed DNA Methylation in Arabidopsis

    KAUST Repository

    Law, Julie A.

    2010-05-01

    DNA methylation is an epigenetic modification associated with gene silencing. In Arabidopsis, DNA methylation is established by DOMAINS REARRANGED METHYLTRANSFERASE 2 (DRM2), which is targeted by small interfering RNAs through a pathway termed RNA-directed DNA methylation (RdDM) [1, 2]. Recently, RdDM was shown to require intergenic noncoding (IGN) transcripts that are dependent on the Pol V polymerase. These transcripts are proposed to function as scaffolds for the recruitment of downstream RdDM proteins, including DRM2, to loci that produce both siRNAs and IGN transcripts [3]. However, the mechanism(s) through which Pol V is targeted to specific genomic loci remains largely unknown. Through affinity purification of two known RdDM components, DEFECTIVE IN RNA-DIRECTED DNA METHYLATION 1 (DRD1) [4] and DEFECTIVE IN MERISTEM SILENCING 3 (DMS3) [5, 6], we found that they copurify with each other and with a novel protein, RNA-DIRECTED DNA METHYLATION 1 (RDM1), forming a complex we term DDR. We also found that DRD1 copurified with Pol V subunits and that RDM1, like DRD1 [3] and DMS3 [7], is required for the production of Pol V-dependent transcripts. These results suggest that the DDR complex acts in RdDM at a step upstream of the recruitment or activation of Pol V. © 2010 Elsevier Ltd. All rights reserved.

  20. Elongator complex is required for long-term olfactory memory formation in Drosophila.

    Science.gov (United States)

    Yu, Dinghui; Tan, Ying; Chakraborty, Molee; Tomchik, Seth; Davis, Ronald L

    2018-04-01

    The evolutionarily conserved Elongator Complex associates with RNA polymerase II for transcriptional elongation. Elp3 is the catalytic subunit, contains histone acetyltransferase activity, and is associated with neurodegeneration in humans. Elp1 is a scaffolding subunit and, when mutated, causes familial dysautonomia. Here, we show that elp3 and elp1 are required for aversive long-term olfactory memory in Drosophila. RNAi knockdown of elp3 in adult mushroom bodies impairs long-term memory (LTM) without affecting earlier forms of memory. RNAi knockdown with coexpression of elp3 cDNA reverses the impairment. Similarly, RNAi knockdown of elp1 impairs LTM and coexpression of elp1 cDNA reverses this phenotype. The LTM deficit in elp3 and elp1 knockdown flies is accompanied by the abolishment of an LTM trace, which is registered as increased calcium influx in response to the CS+ odor in the α-branch of mushroom body neurons. Coexpression of elp1 or elp3 cDNA rescues the memory trace in parallel with LTM. These data show that the Elongator complex is required in adult mushroom body neurons for long-term behavioral memory and the associated long-term memory trace. © 2018 Yu et al.; Published by Cold Spring Harbor Laboratory Press.

  1. Thinking Inside the Box: Simple Methods to Evaluate Complex Treatments

    Directory of Open Access Journals (Sweden)

    J. Michael Menke

    2011-10-01

    Full Text Available We risk ignoring cheaper and safer medical treatments because they cannot be patented, lack profit potential, require too much patient-contact time, or do not have scientific results. Novel medical treatments may be difficult to evaluate for a variety of reasons, such as patient selection bias, the effect of the package of care, or failure to identify the active elements of treatment. Whole Systems Research (WSR) is an approach designed to assess the performance of complete packages of clinical management. While the WSR method is compelling, there is no standard procedure for WSR, and its implementation may be intimidating. The truth is that WSR methodological tools are neither new nor complicated. There are two sequential steps, or boxes, that guide WSR methodology: establishing system predictability, followed by an audit of system element effectiveness. We describe the implementation of WSR with particular attention to threats to validity (Shadish, Cook, & Campbell, 2002; Shadish & Heinsman, 1997). DOI: 10.2458/azu_jmmss.v2i1.12365

  2. Complex molecular orbital method: open-shell theory

    International Nuclear Information System (INIS)

    Hendekovic, J.

    1976-01-01

    A single-determinant open-shell formalism for complex molecular orbitals is developed. An iterative algorithm for solving the resulting secular equations is constructed. It is based on a sequence of similarity transformations and matrix triangularizations.

  3. Uranium complex recycling method of purifying uranium liquors

    International Nuclear Information System (INIS)

    Elikan, L.; Lyon, W.L.; Sundar, P.S.

    1976-01-01

    Uranium is separated from contaminating cations in an aqueous liquor containing uranyl ions. The liquor is mixed with sufficient recycled uranium complex to raise the weight ratio of uranium to said cations, preferably to at least about three. The liquor is then extracted with at least enough non-interfering, water-immiscible, organic solvent to theoretically extract about all of the uranium in the liquor. The organic solvent contains a reagent which reacts with the uranyl ions to form a complex soluble in the solvent. If the aqueous liquor is acidic, the organic solvent is then scrubbed with water. The organic solvent is stripped with a solution containing at least enough ammonium carbonate to precipitate the uranium complex. A portion of the uranium complex is recycled and the remainder can be collected and calcined to produce U₃O₈ or UO₂.

  4. Required doses for projection methods in X-ray diagnosis

    International Nuclear Information System (INIS)

    Hagemann, G.

    1992-01-01

    The ideal dose requirement was stated by Cohen et al. (1981) with a formula based on a parallel beam, maximum quantum yield and the Bucky grid effect, depending on the signal-to-noise ratio and object contrast. This was checked by means of contrast-detail diagrams measured on the hole phantom, and was additionally compared with measurement results obtained with acrylic glass phantoms. The optimal dose requirement is obtained by the maximum technically possible approach to the ideal requirement level. Examples are given for x-ray equipment with Gd₂O₂S screen-film systems, for grid-screen mammography, and for new thoracic examination systems for mass screenings. Finally, a few values concerning the dose requirement, or the analogous time required for fluorescent screening, are stated for angiography and interventional radiology, as well as for dentistry and paediatric x-ray diagnostics. (orig./HP) [de

  5. Developing integrated methods to address complex resource and environmental issues

    Science.gov (United States)

    Smith, Kathleen S.; Phillips, Jeffrey D.; McCafferty, Anne E.; Clark, Roger N.

    2016-02-08

    Introduction: This circular provides an overview of selected activities that were conducted within the U.S. Geological Survey (USGS) Integrated Methods Development Project, an interdisciplinary project designed to develop new tools and conduct innovative research requiring integration of geologic, geophysical, geochemical, and remote-sensing expertise. The project was supported by the USGS Mineral Resources Program, and its products and acquired capabilities have broad applications to missions throughout the USGS and beyond. In addressing challenges associated with understanding the location, quantity, and quality of mineral resources, and in investigating the potential environmental consequences of resource development, a number of field and laboratory capabilities and interpretative methodologies evolved from the project that have applications to traditional resource studies as well as to studies related to ecosystem health, human health, disaster and hazard assessment, and planetary science. New or improved tools and research findings developed within the project have been applied to other projects and activities. Specifically, geophysical equipment and techniques have been applied to a variety of traditional and nontraditional mineral- and energy-resource studies, military applications, environmental investigations, and applied research activities that involve climate change, mapping techniques, and monitoring capabilities. Diverse applied geochemistry activities provide a process-level understanding of the mobility, chemical speciation, and bioavailability of elements, particularly metals and metalloids, in a variety of environmental settings. Imaging spectroscopy capabilities maintained and developed within the project have been applied to traditional resource studies as well as to studies related to ecosystem health, human health, disaster assessment, and planetary science. Brief descriptions of capabilities and laboratory facilities and summaries of some

  6. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and to verify their applicability to early design stages. Several methods were evaluated for their support of synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected...

  7. A new entropy based method for computing software structural complexity

    CERN Document Server

    Roca, J L

    2002-01-01

    In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy evaluation of the random uniform response function associated with the so-called software characteristic function (SCF). The behavior of the SCF with different software structures, and its relationship with the number of inherent errors, is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters and algorithms that allow one to carry out this evaluation are also introduced. This analytic phase is followed by an experimental phase verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is in direct relation...
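As a hedged illustration of scoring structural complexity by entropy (the record does not define the SCF precisely, so the path-usage distribution below is an assumed stand-in), Shannon entropy cleanly separates a structure whose control-flow paths are exercised uniformly from one dominated by a single path:

```python
# Sketch: Shannon entropy of a (hypothetical) distribution over control-flow
# paths as a structural-complexity score. Higher entropy = less predictable
# structure.
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of an observed path-usage distribution."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

# Two hypothetical programs, each with four paths through the flow graph:
print(entropy([25, 25, 25, 25]))  # uniform usage -> 2.0 bits (maximal for 4 paths)
print(entropy([97, 1, 1, 1]))     # one dominant path -> much lower entropy
```

Any metric of this family is maximized by the uniform distribution, which matches the paper's use of a "random uniform response function" as the reference case.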

  8. Viability and resilience of complex systems concepts, methods and case studies from ecology and society

    CERN Document Server

    Deffuant, Guillaume

    2011-01-01

    One common characteristic of a complex system is its ability to withstand major disturbances and the capacity to rebuild itself. Understanding how such systems demonstrate resilience by absorbing or recovering from major external perturbations requires both quantitative foundations and a multidisciplinary view of the topic. This book demonstrates how new methods can be used to identify the actions favouring the recovery from perturbations on a variety of examples including the dynamics of bacterial biofilms, grassland savannahs, language competition and Internet social networking sites. The reader is taken through an introduction to the idea of resilience and viability and shown the mathematical basis of the techniques used to analyse systems. The idea of individual or agent-based modelling of complex systems is introduced and related to analytically tractable approximations of such models. A set of case studies illustrates the use of the techniques in real applications, and the final section describes how on...

  9. Benchmarking of London Dispersion-Accounting Density Functional Theory Methods on Very Large Molecular Complexes.

    Science.gov (United States)

    Risthaus, Tobias; Grimme, Stefan

    2013-03-12

    A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the largeness of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated in that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too small molecular cases.

  10. Defining Requirements and Related Methods for Designing Sensorized Garments

    Directory of Open Access Journals (Sweden)

    Giuseppe Andreoni

    2016-05-01

    Full Text Available Designing smart garments has strong interdisciplinary implications, specifically related to user and technical requirements, but also because of the very different applications they have: medicine, sport and fitness, lifestyle monitoring, workplace and job conditions analysis, etc. This paper aims to discuss some user, textile, and technical issues to be faced in sensorized clothes development. In relation to the user, the main requirements are anthropometric, gender-related, and aesthetical. In terms of these requirements, the user’s age, the target application, and fashion trends cannot be ignored, because they determine the compliance with the wearable system. Regarding textile requirements, functional factors—also influencing user comfort—are elasticity and washability, while more technical properties are the stability of the chemical agents’ effects for preserving the sensors’ efficacy and reliability, and assuring the proper duration of the product for the complete life cycle. From the technical side, the physiological issues are the most important: skin conductance, tolerance, irritation, and the effect of sweat and perspiration are key factors for reliable sensing. Other technical features such as battery size and duration, and the form factor of the sensor collector, should be considered, as they affect aesthetical requirements, which have proven to be crucial, as well as comfort and wearability.

  11. Robust design requirements specification: a quantitative method for requirements development using quality loss functions

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard; Christensen, Martin Ebro; Howard, Thomas J.

    2016-01-01

    Product requirements serve many purposes in the product development process. Most importantly, they are meant to capture and facilitate product goals and acceptance criteria, as defined by stakeholders. Accurately communicating stakeholder goals and acceptance criteria can be challenging and more...

  12. Efficacy of Two Different Instructional Methods Involving Complex Ecological Content

    Science.gov (United States)

    Randler, Christoph; Bogner, Franz X.

    2009-01-01

    Teaching and learning approaches in ecology very often follow linear conceptions of ecosystems. Empirical studies with an ecological focus consistent with existing syllabi and focusing on cognitive achievement are scarce. Consequently, we concentrated on a classroom unit that offers learning materials and highlights the existing complexity rather…

  13. Markov Renewal Methods in Restart Problems in Complex Systems

    DEFF Research Database (Denmark)

    Asmussen, Søren; Lipsky, Lester; Thompson, Stephen

    A task with ideal execution time L such as the execution of a computer program or the transmission of a file on a data link may fail, and the task then needs to be restarted. The task is handled by a complex system with features similar to the ones in classical reliability: failures may...

  14. Studies of lanthanide complexes by a combination of spectroscopic methods

    Czech Academy of Sciences Publication Activity Database

    Krupová, Monika; Bouř, Petr; Andrushchenko, Valery

    2015-01-01

    Roč. 22, č. 1 (2015), s. 44 ISSN 1211-5894. [Discussions in Structural Molecular Biology. Annual Meeting of the Czech Society for Structural Biology /13./. 19.03.2015-21.03.2015, Nové Hrady] Institutional support: RVO:61388963 Keywords : lanthanide complexes * chirality sensing * chirality amplification * spectroscopy Subject RIV: CF - Physical ; Theoretical Chemistry

  15. Generation of new solutions of the stationary axisymmetric Einstein equations by a double complex function method

    International Nuclear Information System (INIS)

    Zhong, Z.

    1985-01-01

    A new approach to the solution of certain differential equations, the double complex function method, is developed, combining ordinary complex numbers and hyperbolic complex numbers. This method is applied to the theory of stationary axisymmetric Einstein equations in general relativity. A family of exact double solutions, double transformation groups, and n-soliton double solutions are obtained
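A minimal sketch of the hyperbolic (split-complex) half of such a double-number algebra may help: the defining rule is j·j = +1 (in contrast to i·i = -1), and the associated indefinite modulus is multiplicative. The pair representation below is an illustrative choice, not the paper's notation.

```python
# Split-complex (hyperbolic) arithmetic sketch: (a, b) represents a + j*b
# with j*j = +1. Combined with ordinary complex numbers, these form the
# "double complex" numbers used in the method described above.

def h_mul(x, y):
    """Multiply split-complex numbers: (a + j*b)(c + j*d), using j*j = +1."""
    a, b = x
    c, d = y
    return (a * c + b * d, a * d + b * c)

def h_mod2(x):
    """Indefinite squared modulus a**2 - b**2 (can be zero or negative)."""
    a, b = x
    return a * a - b * b

u, v = (2.0, 1.0), (3.0, -1.0)
w = h_mul(u, v)
print(w)                                   # -> (5.0, 1.0)
print(h_mod2(w) == h_mod2(u) * h_mod2(v))  # modulus is multiplicative -> True
```

The indefinite modulus a² - b² mirrors the Lorentzian signature that makes split-complex variables natural for stationary axisymmetric field equations.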

  16. Efficient nuclear export of p65-IkappaBalpha complexes requires 14-3-3 proteins.

    Science.gov (United States)

    Aguilera, Cristina; Fernández-Majada, Vanessa; Inglés-Esteve, Julia; Rodilla, Verónica; Bigas, Anna; Espinosa, Lluís

    2006-09-01

    IkappaB are responsible for maintaining p65 in the cytoplasm under non-stimulating conditions and promoting the active export of p65 from the nucleus following NFkappaB activation to terminate the signal. We now show that 14-3-3 proteins regulate the NFkappaB signaling pathway by physically interacting with p65 and IkappaBalpha proteins. We identify two functional 14-3-3 binding domains in the p65 protein involving residues 38-44 and 278-283, and map the interaction region of IkappaBalpha in residues 60-65. Mutation of these 14-3-3 binding domains in p65 or IkappaBalpha results in a predominantly nuclear distribution of both proteins. TNFalpha treatment promotes recruitment of 14-3-3 and IkappaBalpha to NFkappaB-dependent promoters and enhances the binding of 14-3-3 to p65. Disrupting 14-3-3 activity by transfection with a dominant-negative 14-3-3 leads to the accumulation of nuclear p65-IkappaBalpha complexes and the constitutive association of p65 with the chromatin. In this situation, NFkappaB-dependent genes become unresponsive to TNFalpha stimulation. Together our results indicate that 14-3-3 proteins facilitate the nuclear export of IkappaBalpha-p65 complexes and are required for the appropriate regulation of NFkappaB signaling.

  17. 40 CFR 136.6 - Method modifications and analytical requirements.

    Science.gov (United States)

    2010-07-01

    ... modifications and analytical requirements. (a) Definitions of terms used in this section. (1) Analyst means the..., oil and grease, total suspended solids, total phenolics, turbidity, chemical oxygen demand, and.... Except as set forth in paragraph (b)(3) of this section, an analyst may modify an approved test procedure...

  18. A Survey of Requirements Engineering Methods for Pervasive Services

    NARCIS (Netherlands)

    Kolos, L.; van Eck, Pascal; Wieringa, Roelf J.

    Designing and deploying ubiquitous computing systems, such as those delivering large-scale mobile services, still requires large-scale investments in both development effort as well as infrastructure costs. Therefore, in order to develop the right system, the design process merits a thorough

  19. Review of analytical methods for the quantification of iodine in complex matrices

    Energy Technology Data Exchange (ETDEWEB)

    Shelor, C. Phillip [Department of Chemistry and Biochemistry, University of Texas at Arlington, Arlington, TX 76019-0065 (United States); Dasgupta, Purnendu K., E-mail: Dasgupta@uta.edu [Department of Chemistry and Biochemistry, University of Texas at Arlington, Arlington, TX 76019-0065 (United States)

    2011-09-19

    Highlights: → We focus on iodine in biological samples, notably urine and milk. → Sample preparation and the Sandell-Kolthoff method are extensively discussed. - Abstract: Iodine is an essential element of human nutrition. Nearly a third of the global population has insufficient iodine intake and is at risk of developing Iodine Deficiency Disorders (IDD). Most countries have iodine supplementation and monitoring programs. Urinary iodide (UI) is the biomarker used for epidemiological studies; only a few methods are currently used routinely for analysis. These methods either require expensive instrumentation with qualified personnel (inductively coupled plasma-mass spectrometry, instrumental nuclear activation analysis) or oxidative sample digestion to remove potential interferences prior to analysis by a kinetic colorimetric method originally introduced by Sandell and Kolthoff ≈75 years ago. The Sandell-Kolthoff (S-K) method is based on the catalytic effect of iodide on the reaction between Ce⁴⁺ and As³⁺. No available technique fully fits the needs of developing countries; research into inexpensive, reliable methods and instrumentation is needed. There have been multiple reviews of methods used for epidemiological studies and specific techniques. However, a general review of iodine determination in a wide-ranging set of complex matrices is not available. While this review is not comprehensive, we cover the principal developments since the original development of the S-K method.
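A sketch of how an S-K assay is typically quantified: the catalysed Ce⁴⁺/As³⁺ reaction rate grows roughly linearly with iodide concentration, so a set of standards defines a calibration line through which an unknown sample's measured rate is inverted. All numbers below are invented for illustration.

```python
# Illustrative Sandell-Kolthoff calibration: fit rate = a*conc + b to iodide
# standards, then invert an unknown's observed rate. Values are made up.

standards = [0.0, 25.0, 50.0, 100.0]      # iodide standards, ug/L
rates =     [0.010, 0.034, 0.059, 0.108]  # fitted rate constants, 1/min

# Ordinary least-squares line through the standards (stdlib only).
n = len(standards)
mx = sum(standards) / n
my = sum(rates) / n
a = sum((x - mx) * (y - my) for x, y in zip(standards, rates)) / \
    sum((x - mx) ** 2 for x in standards)
b = my - a * mx

unknown_rate = 0.083                      # measured rate for the unknown
conc = (unknown_rate - b) / a
print(round(conc, 1))                     # -> 74.6 (estimated iodide, ug/L)
```

In practice the rate constants themselves come from timed absorbance readings of the Ce⁴⁺ decolorization, and the calibration is rerun with each batch.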

  20. Sensitivity Analysis of Hydraulic Methods Regarding Hydromorphologic Data Derivation Methods to Determine Environmental Water Requirements

    Directory of Open Access Journals (Sweden)

    Alireza Shokoohi

    2015-07-01

    Full Text Available This paper studies the accuracy of hydraulic methods in determining environmental flow requirements. Despite the vital importance of deriving river cross sectional data for hydraulic methods, few studies have focused on the criteria for deriving this data. The present study shows that the depth of cross section has a meaningful effect on the results obtained from hydraulic methods and that, considering fish as the index species for river habitat analysis, an optimum depth of 1 m should be assumed for deriving information from cross sections. The second important parameter required for extracting the geometric and hydraulic properties of rivers is the selection of an appropriate depth increment; ∆y. In the present research, this parameter was found to be equal to 1 cm. The uncertainty of the environmental discharge evaluation, when allocating water in areas with water scarcity, should be kept as low as possible. The Manning friction coefficient (n is an important factor in river discharge calculation. Using a range of "n" equal to 3 times the standard deviation for the study area, it is shown that the influence of friction coefficient on the estimation of environmental flow is much less than that on the calculation of river discharge.
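The role of the Manning coefficient n can be made concrete with Manning's formula Q = (1/n) · A · R^(2/3) · S^(1/2). The sketch below, with an invented cross section and assumed roughness statistics, shows how an n-range of ±3 standard deviations (as used in the study area) propagates into the computed discharge:

```python
# Manning's formula and the sensitivity of discharge to the roughness n.
# Channel geometry and roughness statistics are illustrative, not the paper's.

def manning_q(n, area, radius, slope):
    """Discharge (m^3/s) for roughness n, flow area (m^2),
    hydraulic radius (m) and bed slope (-), SI units."""
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5

area, radius, slope = 12.0, 1.1, 0.002   # example cross section
n_mean, n_sd = 0.035, 0.004              # assumed roughness statistics

q_low = manning_q(n_mean + 3 * n_sd, area, radius, slope)   # rough channel
q_high = manning_q(n_mean - 3 * n_sd, area, radius, slope)  # smooth channel
spread = (q_high - q_low) / manning_q(n_mean, area, radius, slope)
print(round(spread, 2))  # -> 0.78 : relative discharge spread over the n range
```

Because Q scales as 1/n, a modest uncertainty in n produces a large relative spread in discharge, while (as the paper finds) its effect on the derived environmental flow fraction is much smaller.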

  1. Method for synthesizing metal bis(borano) hypophosphite complexes

    Science.gov (United States)

    Cordaro, Joseph G.

    2013-06-18

    The present invention describes the synthesis of a family of metal bis(borano) hypophosphite complexes. One procedure described in detail is the synthesis of complexes beginning from phosphorus trichloride and sodium borohydride. Temperature, solvent, concentration, and atmosphere are all critical to ensure product formation. In the case of sodium bis(borano) hypophosphite, hydrogen gas was evolved upon heating at temperatures above 150 °C. Included in this family of materials are the salts of the alkali metals Li, Na and K, and those of the alkaline earth metals Mg and Ca. Hydrogen storage materials are possible. In particular, the lithium salt, Li[PH₂(BH₃)₂], would theoretically contain nearly 12 wt % hydrogen. Analytical data for product characterization and thermal properties are given.

  2. Determinantal method for complex angular momenta in potential scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B. W. [University of Pennsylvania, Philadelphia, PA (United States)

    1963-01-15

    In this paper I would like to describe a formulation of the complex angular momenta in potential scattering based on the Lippmann-Schwinger integral equation rather than on the Schrödinger differential equation. This is intended as a preliminary to the paper by SAWYER on the Regge poles and high energy limits in field theory (Bethe-Salpeter amplitudes), where the integral formulation is definitely more advantageous than the differential formulation.

  3. Directed forgetting of complex pictures in an item method paradigm

    OpenAIRE

    Hauswald, Anne; Kissler, Johanna

    2008-01-01

    An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task where all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect, that is, better recogni...

  4. A new entropy based method for computing software structural complexity

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2002-01-01

    In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy evaluation of the random uniform response function associated with the so-called software characteristic function (SCF). The behavior of the SCF for different software structures and its relationship with the number of inherent errors is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters and algorithms that allow one to carry out this evaluation are also introduced. The analytic phase is followed by an experimental phase, verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is directly related to the number of inherent software errors and implies a basic hazard failure rate, so that a minimal structure assures a certain stability and maturity of the program. This metric can be used either to evaluate the product or the process of software development, as a development tool, or for monitoring the stability and the quality of the final product. (author)

  5. Investigation of complexing equilibrium of polyacrylate-anion with cadmium ions by polarographic method

    Energy Technology Data Exchange (ETDEWEB)

    Avlyanov, Zh K; Kabanov, N M; Zezin, A B

    1985-01-01

    Polarographic investigation of cadmium complex with polyacrylate-anion in aqueous KCl solution is carried out. It is shown that the polarographic method allows one to define equilibrium constants of polymer metallic complex (PMC) formation even in the case when current magnitudes are defined by PMC dissociation reaction kinetic characteristics. The obtained equilibrium constants of stepped complexing provide the values of mean coordination PAAxCd complex number of approximately 1.5, that coincides with the value obtained by the potentiometric method.

  6. Investigation of complexing equilibrium of polyacrylate-anion with cadmium ions by polarographic method

    International Nuclear Information System (INIS)

    Avlyanov, Zh.K.; Kabanov, N.M.; Zezin, A.B.

    1985-01-01

    Polarographic investigation of cadmium complex with polyacrylate-anion in aqueous KCl solution is carried out. It is shown that the polarographic method allows one to define equilibrium constants of polymer metallic complex (PMC) formation even in the case when current magnitudes are defined by PMC dissociation reaction kinetic characteristics. The obtained equilibrium constants of stepped complexing provide the values of mean coordination PAAxCd complex number of approximately 1.5, that coincides with the value obtained by the potentiometric method

  7. Technique of Substantiating Requirements for the Vision Systems of Industrial Robotic Complexes

    Directory of Open Access Journals (Sweden)

    V. Ya. Kolyuchkin

    2015-01-01

    Full Text Available The literature lacks approaches for substantiating the technical requirements for the vision systems (VS) of industrial robotic complexes (IRC). The objective of this work is therefore to develop a technique that allows substantiating requirements for the main quality indicators of a VS functioning as part of an IRC. The proposed technique uses a model representation of the VS, which, as part of the IRC information system, sorts the objects in the work area and measures their linear and angular coordinates. To state the problem, the target function of a designed IRC is defined as the dependence of the IRC efficiency indicator on the VS quality indicators. The paper proposes to use, as the indicator of IRC efficiency, the probability of manufacturing without faulty products. Based on the functions the VS performs within the IRC information system, the accepted VS quality indicators are: the probability of proper recognition of objects in the IRC working area, and the confidence probabilities of measuring the linear and angular orientation coordinates of objects within specified permissible errors. The specific values of these errors depend on the orientation errors of the working bodies of the manipulators that are part of the IRC. The paper presents mathematical expressions that relate the probability of manufacturing without faulty products to the VS quality indicators and to the probability of failure of the IRC technological equipment. The offered technique for substantiating engineering requirements for the VS of an IRC is novel. The results obtained in this work can be useful for professionals involved in IRC VS development and, in particular, in the development of VS algorithms and software.

  8. Level III Reliability methods feasible for complex structures

    NARCIS (Netherlands)

    Waarts, P.H.; Boer, A. de

    2001-01-01

    The paper describes the comparison between three types of reliability methods: code type level I used by a designer, full level I and a level III method. Two cases that are typical for civil engineering practice, a cable-stayed bridge subjected to traffic load and the installation of a soil retaining sheet

  9. Adaptive calibration method with on-line growing complexity

    Directory of Open Access Journals (Sweden)

    Šika Z.

    2011-12-01

    Full Text Available This paper describes a modified variant of a kinematical calibration algorithm. First, a brief review of the calibration algorithm and a simple modification of it are given. As the described calibration modification uses some ideas from the Lolimot algorithm, that algorithm is described and explained. The main topic of this paper is the synthesis of a Lolimot-based calibration that leads to an adaptive algorithm with on-line growing complexity. The paper contains a comparison of results from simple examples and a discussion. A note about future research topics is also included.

  10. Method and program for complex calculation of heterogeneous reactor

    International Nuclear Information System (INIS)

    Kalashnikov, A.G.; Glebov, A.P.; Elovskaya, L.F.; Kuznetsova, L.I.

    1988-01-01

    An algorithm and the GITA program for complex one-dimensional calculation of a heterogeneous reactor, which permit calculations for the reactor and its cell to be conducted simultaneously using the same algorithm, are described. Multigroup macro cross sections for reactor zones in the thermal energy range are determined according to the technique for calculating a cell with a complicated structure, and then the continuous multigroup calculation of the reactor in the thermal energy range and in the range of neutron thermalization is made. The kinetic equation is solved using the Pi- and DSn-approximations [fr

  11. Thermal test requirements and their verification by different test methods

    International Nuclear Information System (INIS)

    Droste, B.; Wieser, G.; Probst, U.

    1993-01-01

    The paper discusses the parameters influencing the thermal test conditions for type B-packages. Criteria for different test methods (by analytical as well as by experimental means) will be developed. A comparison of experimental results from fuel oil pool and LPG fire tests will be given. (J.P.N.)

  12. Complexity and accuracy of image registration methods in SPECT-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yin, L S; Duzenli, C; Moiseenko, V [Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC, V6T 1Z1 (Canada); Tang, L; Hamarneh, G [Computing Science, Simon Fraser University, 9400 TASC1, Burnaby, BC, V5A 1S6 (Canada); Gill, B [Medical Physics, Vancouver Cancer Centre, BC Cancer Agency, 600 West 10th Ave, Vancouver, BC, V5Z 4E6 (Canada); Celler, A; Shcherbinin, S [Department of Radiology, University of British Columbia, 828 West 10th Ave, Vancouver, BC, V5Z 1L8 (Canada); Fua, T F; Thompson, A; Sheehan, F [Radiation Oncology, Vancouver Cancer Centre, BC Cancer Agency, 600 West 10th Ave, Vancouver, BC, V5Z 4E6 (Canada); Liu, M [Radiation Oncology, Fraser Valley Cancer Centre, BC Cancer Agency, 13750 9th Ave, Surrey, BC, V3V 1Z2 (Canada)], E-mail: lyin@bccancer.bc.ca

    2010-01-07

    The use of functional imaging in radiotherapy treatment (RT) planning requires accurate co-registration of functional imaging scans to CT scans. We evaluated six methods of image registration for use in SPECT-guided radiotherapy treatment planning. Methods varied in complexity from a 3D affine transform based on control points to diffeomorphic demons and level set non-rigid registration. Ten lung cancer patients underwent perfusion SPECT scans prior to their radiotherapy. CT images from a hybrid SPECT/CT scanner were registered to a planning CT, and then the same transformation was applied to the SPECT images. According to registration evaluation measures computed from the intensity difference between the registered CT images or from target registration error, non-rigid registrations provided a higher degree of accuracy than rigid methods. However, due to irregularities in some of the obtained deformation fields, warping the SPECT using these fields may result in unacceptable changes to the SPECT intensity distribution that would preclude use in RT planning. Moreover, the differences between intensity histograms in the original and registered SPECT image sets were largest for the diffeomorphic demons and level set methods. In conclusion, the use of intensity-based validation measures alone is not sufficient for SPECT/CT registration for RT planning. It was also found that proper evaluation of image registration requires the use of several accuracy metrics.
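    The two families of evaluation measures mentioned, intensity-difference metrics and target registration error, can be sketched in a few lines. The flat-list image and landmark-pair representations below are simplifications for illustration, not the study's actual data structures.

```python
import math

def mean_squared_difference(image_a, image_b):
    """Intensity-based similarity: mean squared intensity difference
    between two images given as flat lists of voxel values."""
    assert len(image_a) == len(image_b)
    return sum((a - b) ** 2 for a, b in zip(image_a, image_b)) / len(image_a)

def target_registration_error(fixed_points, mapped_points):
    """Mean Euclidean distance between corresponding landmarks in the
    fixed image and their mapped positions after registration."""
    dists = [math.dist(p, q) for p, q in zip(fixed_points, mapped_points)]
    return sum(dists) / len(dists)
```

A low intensity difference with a high landmark error (or vice versa) is exactly the disagreement that motivates using several metrics at once.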

  13. Distributed Cooperation Solution Method of Complex System Based on MAS

    Science.gov (United States)

    Weijin, Jiang; Yuhui, Xu

    To adapt the fault-diagnosis model to dynamic environments and to fully meet the needs of solving the tasks of a complex system, this paper introduces multi-agent and related technology into complicated fault diagnosis, and an integrated intelligent control system is studied. Based on the idea of structuring diagnostic decisions hierarchically in modeling and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge-expression modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed; the organization and evolution of agents in the system are proposed; and the corresponding conflict-resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault-diagnosis problem of a complex plant, with a special advantage in the distributed domain.

  14. Kernel methods and flexible inference for complex stochastic dynamics

    Science.gov (United States)

    Capobianco, Enrico

    2008-07-01

    Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.

  15. Comparing methods of determining Legionella spp. in complex water matrices.

    Science.gov (United States)

    Díaz-Flores, Álvaro; Montero, Juan Carlos; Castro, Francisco Javier; Alejandres, Eva María; Bayón, Carmen; Solís, Inmaculada; Fernández-Lafuente, Roberto; Rodríguez, Guillermo

    2015-04-29

    Legionella testing conducted at environmental laboratories plays an essential role in assessing the risk of disease transmission associated with water systems. However, drawbacks of culture-based methodology used for Legionella enumeration can have great impact on the results and interpretation which together can lead to underestimation of the actual risk. Up to 20% of the samples analysed by these laboratories produced inconclusive results, making effective risk management impossible. Overgrowth of competing microbiota was reported as an important factor for culture failure. For quantitative polymerase chain reaction (qPCR), the interpretation of the results from the environmental samples still remains a challenge. Inhibitors may cause up to 10% of inconclusive results. This study compared a quantitative method based on immunomagnetic separation (IMS method) with culture and qPCR, as a new approach to routine monitoring of Legionella. First, pilot studies evaluated the recovery and detectability of Legionella spp using an IMS method, in the presence of microbiota and biocides. The IMS method results were not affected by microbiota while culture counts were significantly reduced (1.4 log) or negative in the same samples. Damage by biocides of viable Legionella was detected by the IMS method. Secondly, a total of 65 water samples were assayed by all three techniques (culture, qPCR and the IMS method). Of these, 27 (41.5%) were recorded as positive by at least one test. Legionella spp was detected by culture in 7 (25.9%) of the 27 samples. Eighteen (66.7%) of the 27 samples were positive by the IMS method, thirteen of them reporting counts below 10³ colony forming units per liter (CFU l⁻¹), six presented interfering microbiota and three presented PCR inhibition. Of the 65 water samples, 24 presented interfering microbiota by culture and 8 presented partial or complete inhibition of the PCR reaction. So the rate of inconclusive results of culture and PCR was 36

  16. An Investigation of the Variety and Complexity of Statistical Methods Used in Current Internal Medicine Literature.

    Science.gov (United States)

    Narayanan, Roshni; Nugent, Rebecca; Nugent, Kenneth

    2015-10-01

    Accreditation Council for Graduate Medical Education guidelines require internal medicine residents to develop skills in the interpretation of medical literature and to understand the principles of research. A necessary component is the ability to understand the statistical methods used and their results, material that is not an in-depth focus of most medical school curricula and residency programs. Given the breadth and depth of the current medical literature and an increasing emphasis on complex, sophisticated statistical analyses, the statistical foundation and education necessary for residents are uncertain. We reviewed the statistical methods and terms used in 49 articles discussed at the journal club in the Department of Internal Medicine residency program at Texas Tech University between January 1, 2013 and June 30, 2013. We collected information on the study type and on the statistical methods used for summarizing and comparing samples, determining the relations between independent variables and dependent variables, and estimating models. We then identified the typical statistics education level at which each term or method is learned. A total of 14 articles came from the Journal of the American Medical Association Internal Medicine, 11 from the New England Journal of Medicine, 6 from the Annals of Internal Medicine, 5 from the Journal of the American Medical Association, and 13 from other journals. Twenty reported randomized controlled trials. Summary statistics included mean values (39 articles), category counts (38), and medians (28). Group comparisons were based on t tests (14 articles), χ² tests (21), and nonparametric ranking tests (10). The relations between dependent and independent variables were analyzed with simple regression (6 articles), multivariate regression (11), and logistic regression (8). Nine studies reported odds ratios with 95% confidence intervals, and seven analyzed test performance using sensitivity and specificity calculations.

  17. Computational RNA secondary structure design: empirical complexity and improved methods

    Directory of Open Access Journals (Sweden)

    Condon Anne

    2007-01-01

    Full Text Available Abstract Background We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to understand better the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and the location of the primary structure constraints when designing structures and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.

  18. A method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix

    International Nuclear Information System (INIS)

    Godfrin, Elena

    1990-01-01

    This paper presents a method to compute the inverse of a complex n-block tridiagonal quasi-Hermitian matrix using adequate partitions of the complete matrix. This type of matrix is very common in quantum mechanics and, more specifically, in solid state physics (e.g., interfaces and superlattices), when the tight-binding approximation is used. The efficiency of the method is analyzed by comparing the required CPU time and work area for different usual techniques. (Author)
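    The abstract does not reproduce the recurrences, so as a scalar illustration of the same partitioning idea, the classical theta/phi determinant recurrences give every element of the inverse of an ordinary tridiagonal matrix; the block case replaces the scalar products below with matrix products and inverses.

```python
def tridiagonal_inverse(a, b, c):
    """Inverse of the tridiagonal matrix with main diagonal a (length n),
    superdiagonal b and subdiagonal c (each length n-1), built from the
    classical theta (leading principal minors) and phi (trailing
    principal minors) recurrences."""
    n = len(a)
    theta = [0.0] * (n + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    phi = [0.0] * (n + 2)
    phi[n + 1], phi[n] = 1.0, a[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]
    det = theta[n]  # determinant of the full matrix
    inv = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= j:
                prod = 1.0
                for k in range(i, j):
                    prod *= b[k - 1]
                inv[i - 1][j - 1] = (-1) ** (i + j) * prod * theta[i - 1] * phi[j + 1] / det
            else:
                prod = 1.0
                for k in range(j, i):
                    prod *= c[k - 1]
                inv[i - 1][j - 1] = (-1) ** (i + j) * prod * theta[j - 1] * phi[i + 1] / det
    return inv
```

For [[2, 1], [1, 2]] this returns [[2/3, -1/3], [-1/3, 2/3]], the expected inverse; the recurrences need only O(n) determinant evaluations rather than a full elimination per column.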

  19. Conducting organizational safety reviews - requirements, methods and experience

    International Nuclear Information System (INIS)

    Reiman, T.; Oedewald, P.; Wahlstroem, B.; Rollenhagen, C.; Kahlbom, U.

    2008-03-01

    Organizational safety reviews are part of the safety management process of power plants. They are typically performed after major reorganizations, significant incidents or according to specified review programs. Organizational reviews can also be a part of a benchmarking between organizations that aims to improve work practices. Thus, they are important instruments in proactive safety management and safety culture. Most methods that have been used for organizational reviews are based more on practical considerations than a sound scientific theory of how various organizational or technical issues influence safety. Review practices and methods also vary considerably. The objective of this research is to promote understanding on approaches used in organizational safety reviews as well as to initiate discussion on criteria and methods of organizational assessment. The research identified a set of issues that need to be taken into account when planning and conducting organizational safety reviews. Examples of the issues are definition of appropriate criteria for evaluation, the expertise needed in the assessment and the organizational motivation for conducting the assessment. The study indicates that organizational safety assessments involve plenty of issues and situations where choices have to be made regarding what is considered valid information and a balance has to be struck between focus on various organizational phenomena. It is very important that these choices are based on a sound theoretical framework and that these choices can later be evaluated together with the assessment findings. The research concludes that at its best, the organizational safety reviews can be utilised as a source of information concerning the changing vulnerabilities and the actual safety performance of the organization. In order to do this, certain basic organizational phenomena and assessment issues have to be acknowledged and considered. 
The research concludes with recommendations on

  20. Conducting organizational safety reviews - requirements, methods and experience

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, T.; Oedewald, P.; Wahlstroem, B. [Technical Research Centre of Finland, VTT (Finland); Rollenhagen, C. [Royal Institute of Technology, KTH, (Sweden); Kahlbom, U. [RiskPilot (Sweden)

    2008-03-15

    Organizational safety reviews are part of the safety management process of power plants. They are typically performed after major reorganizations, significant incidents or according to specified review programs. Organizational reviews can also be a part of a benchmarking between organizations that aims to improve work practices. Thus, they are important instruments in proactive safety management and safety culture. Most methods that have been used for organizational reviews are based more on practical considerations than a sound scientific theory of how various organizational or technical issues influence safety. Review practices and methods also vary considerably. The objective of this research is to promote understanding on approaches used in organizational safety reviews as well as to initiate discussion on criteria and methods of organizational assessment. The research identified a set of issues that need to be taken into account when planning and conducting organizational safety reviews. Examples of the issues are definition of appropriate criteria for evaluation, the expertise needed in the assessment and the organizational motivation for conducting the assessment. The study indicates that organizational safety assessments involve plenty of issues and situations where choices have to be made regarding what is considered valid information and a balance has to be struck between focus on various organizational phenomena. It is very important that these choices are based on a sound theoretical framework and that these choices can later be evaluated together with the assessment findings. The research concludes that at its best, the organizational safety reviews can be utilised as a source of information concerning the changing vulnerabilities and the actual safety performance of the organization. In order to do this, certain basic organizational phenomena and assessment issues have to be acknowledged and considered. 
The research concludes with recommendations on

  1. Predicting protein complexes using a supervised learning method combined with local structural information.

    Science.gov (United States)

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
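    The exact score function is not given in the abstract; the following is a minimal sketch assuming plain edge density as the local structural term and a convex combination with the trained classifier's output, which is the general shape such a combined score can take.

```python
def density(subgraph_nodes, edges):
    """Edge density of the induced subgraph: 2m / (n * (n - 1))."""
    nodes = set(subgraph_nodes)
    m = sum(1 for u, v in edges if u in nodes and v in nodes)
    n = len(nodes)
    return 0.0 if n < 2 else 2.0 * m / (n * (n - 1))

def combined_score(classifier_prob, subgraph_nodes, edges, weight=0.5):
    """Robust score mixing a supervised model's predicted probability
    that the subgraph is a complex with local structural information
    (here: edge density). `weight` balances the two terms."""
    return weight * classifier_prob + (1.0 - weight) * density(subgraph_nodes, edges)
```

A forward/backward search would then greedily add or remove the neighbour node that most improves this score.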

  2. Methods of Complex Data Processing from Technical Means of Monitoring

    Directory of Open Access Journals (Sweden)

    Serhii Tymchuk

    2017-03-01

    Full Text Available The problem of processing information from different types of monitoring equipment is examined. As a possible solution, the use of generalized information-processing methods is proposed, based on clustering techniques that combine territorially distributed monitoring sources and on a frame model of the knowledge base for identifying monitored objects. The clustering methods were formed on the basis of the Lance-Williams hierarchical agglomerative procedure using the Ward metric. The frame model of the knowledge base was built using the tools of object-oriented modeling.
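    The Lance-Williams recurrence with Ward coefficients named above can be sketched as follows; the squared-Euclidean input distances and the cluster-count stopping rule are illustrative choices, not details from the paper.

```python
def ward_cluster(dist, num_clusters):
    """Agglomerative clustering via the Lance-Williams recurrence with
    Ward coefficients. `dist` is a symmetric matrix of squared Euclidean
    distances; merging stops at `num_clusters` clusters."""
    n = len(dist)
    clusters = {i: [i] for i in range(n)}
    d = {(i, j): float(dist[i][j]) for i in range(n) for j in range(i + 1, n)}
    next_id = n
    while len(clusters) > num_clusters:
        i, j = min(d, key=d.get)          # closest pair of clusters
        dij = d[(i, j)]
        ni, nj = len(clusters[i]), len(clusters[j])
        merged = clusters.pop(i) + clusters.pop(j)
        # Ward update: d(i+j, k) = ((ni+nk) d_ik + (nj+nk) d_jk - nk d_ij)
        #                          / (ni + nj + nk)
        updates = {}
        for k in clusters:
            nk = len(clusters[k])
            dik = d[(min(i, k), max(i, k))]
            djk = d[(min(j, k), max(j, k))]
            updates[k] = ((ni + nk) * dik + (nj + nk) * djk - nk * dij) / (ni + nj + nk)
        d = {key: v for key, v in d.items() if i not in key and j not in key}
        for k, v in updates.items():
            d[(min(k, next_id), max(k, next_id))] = v
        clusters[next_id] = merged
        next_id += 1
    return sorted(sorted(members) for members in clusters.values())
```

Because distances are updated purely from previous distances, the recurrence never needs the original coordinates, which is convenient when the "distance" between monitoring sources is itself a derived quantity.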

  3. Endosomal sorting complexes required for ESCRTing cells toward death during neurogenesis, neurodevelopment and neurodegeneration.

    Science.gov (United States)

    Kaul, Zenia; Chakrabarti, Oishee

    2018-03-25

    The endosomal sorting complexes required for transport (ESCRT) proteins help in the recognition, sorting and degradation of ubiquitinated cargoes from the cell surface, long-lived proteins or aggregates, and aged organelles present in the cytosol. These proteins take part in the endo-lysosomal system of degradation. The ESCRT proteins also play an integral role in cytokinesis, viral budding and mRNA transport. Many neurodegenerative diseases are caused by toxic accumulation of cargo in the cell, which causes stress and ultimately leads to neuronal death. This accumulation of cargo occurs because of defects in the endo-lysosomal degradative pathway-loss of function of ESCRTs has been implicated in this mechanism. ESCRTs also take part in many survival processes, lack of which can culminate in neuronal cell death. While the role played by the ESCRT proteins in maintaining healthy neurons is known, their role in neurodegenerative diseases is still poorly understood. In this review, we highlight the importance of ESCRTs in maintaining healthy neurons and then suggest how perturbations in many of the survival mechanisms governed by these proteins could eventually lead to cell death; quite often these correlations are not so obviously laid out. Extensive neuronal death eventually culminates in neurodegeneration. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Power Analysis for Complex Mediational Designs Using Monte Carlo Methods

    Science.gov (United States)

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
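    A Monte Carlo power analysis for a simple mediation model (X → M → Y) can be sketched as below. This is a generic illustration using a joint-significance test and assuming no direct effect of X on Y, not the authors' framework for latent variable or growth curve models.

```python
import random

def ols_slope(x, y):
    """Slope and its standard error for simple regression y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid = [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]
    se = (sum(r ** 2 for r in resid) / (n - 2) / sxx) ** 0.5
    return b, se

def mediation_power(a, b, n, reps=500, z_crit=1.96, seed=1):
    """Monte Carlo power to detect the indirect effect a*b: simulate the
    model X -> M -> Y repeatedly and count the replications where both
    path estimates are significant (joint-significance test)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        m = [a * xi + rng.gauss(0, 1) for xi in x]
        y = [b * mi + rng.gauss(0, 1) for mi in m]
        ba, sa = ols_slope(x, m)   # X -> M path
        bb, sb = ols_slope(m, y)   # M -> Y path (no direct effect assumed)
        if abs(ba / sa) > z_crit and abs(bb / sb) > z_crit:
            hits += 1
    return hits / reps
```

The same simulate-fit-count loop generalizes to more complex models: only the data-generating step and the fitted model change.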

  5. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full Text Available implementations of the DLM are however not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  6. The γ-tubulin complex in Trypanosoma brucei: molecular composition, subunit interdependence and requirement for axonemal central pair protein assembly

    Science.gov (United States)

    Zhou, Qing; Li, Ziyin

    2015-01-01

    The γ-tubulin complex constitutes a key component of the microtubule-organizing center and nucleates microtubule assembly. This complex differs in complexity in different organisms: the budding yeast contains the γ-tubulin small complex (γTuSC) composed of γ-tubulin, GCP2 and GCP3, whereas animals contain the γ-tubulin ring complex (γTuRC) composed of γTuSC and three additional proteins, GCP4, GCP5 and GCP6. In Trypanosoma brucei, the composition of the γ-tubulin complex remains elusive, and it is not known whether it also regulates assembly of the subpellicular microtubules and the spindle microtubules. Here we report that the γ-tubulin complex in T. brucei is composed of γ-tubulin and three GCP proteins, GCP2-GCP4, and is primarily localized in the basal body throughout the cell cycle. Depletion of GCP2 and GCP3, but not GCP4, disrupted the axonemal central pair microtubules, but not the subpellicular microtubules and the spindle microtubules. Furthermore, we showed that the γTuSC is required for assembly of two central pair proteins and that γTuSC subunits are mutually required for stability. Together, these results identified an unusual γ-tubulin complex in T. brucei, uncovered an essential role of γTuSC in central pair protein assembly, and demonstrated the interdependence of individual γTuSC components for maintaining a stable complex. PMID:26224545

  7. Ergonomic requirements to control room design - evaluation method

    International Nuclear Information System (INIS)

    Hinz, W.

    1985-01-01

    The method of evaluation introduced here is the result of work carried out by the sub-committee 'Control Room Design' of the Engineering Standards Committee in DIN Standards, Ergonomy. This committee compiles standards for the design of control rooms (instrumentation and control) for the monitoring and operation of process engineering cycles. With the agreement of the committee - whom we wish to take the opportunity of thanking at this point for their constructive collaboration - a planned partial standard is introduced thematically in the following, so that knowledge gained from the discussion can be included in further work on the subject. The matter in question is a procedure for the qualitative evaluation of the duties performed by control room operators, so that an assessment can be made of existing control concepts or of concepts still in the draft phase. (orig./GL) [de

  8. Method for VAWT Placement on a Complex Building Structure

    Science.gov (United States)

    2013-06-01

    ... turbines used to power the cooling system. A simulation of Building 216, which is the planned site of the cooling system, was performed. A wind flow analysis found that optimum placement of the wind turbines is at the front of the south end of the building. The method for placing the wind turbines is...

  9. Laser absorption spectroscopy - Method for monitoring complex trace gas mixtures

    Science.gov (United States)

    Green, B. D.; Steinfeld, J. I.

    1976-01-01

    A frequency stabilized CO2 laser was used for accurate determinations of the absorption coefficients of various gases in the wavelength region from 9 to 11 microns. The gases investigated were representative of the types of contaminants expected to build up in recycled atmospheres. These absorption coefficients were then used in determining the presence and amount of the gases in prepared mixtures. The effect of interferences on the minimum detectable concentration of the gases was measured. The accuracies of various methods of solution were also evaluated.
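
The final step the abstract describes, using measured absorption coefficients to determine the presence and amount of gases in a mixture, is a linear inversion of the Beer-Lambert law. A minimal sketch with invented coefficients (the paper's measured values are not reproduced here):

```python
import numpy as np

# Beer-Lambert for a mixture: absorbance at laser line i is A[i] = sum_j K[i, j] * c[j],
# where K holds the per-gas absorption coefficients and c the concentrations.
K = np.array([[12.0, 0.5, 3.1],     # 4 CO2 laser lines x 3 gases
              [0.8,  9.4, 1.2],     # (illustrative coefficients, not measured values)
              [2.2,  1.1, 7.6],
              [4.0,  0.3, 0.9]])
c_true = np.array([0.02, 0.05, 0.01])
A = K @ c_true                       # noise-free "measured" absorbances

# Overdetermined system: recover the concentrations by linear least squares
c_est, *_ = np.linalg.lstsq(K, A, rcond=None)
print(np.allclose(c_est, c_true))    # True
```

With noisy absorbances the least-squares residual also indicates whether an unmodeled interfering gas is present, which is the interference effect the abstract mentions.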

  10. High-level inhibition of mitochondrial complexes III and IV is required to increase glutamate release from the nerve terminal

    Directory of Open Access Journals (Sweden)

    Kilbride Seán M

    2011-07-01

    Full Text Available. Background: The activities of mitochondrial complex III (ubiquinol-cytochrome c reductase, EC 1.10.2.2) and complex IV (cytochrome c oxidase, EC 1.9.3.1) are reduced by 30-70% in Huntington's disease and Alzheimer's disease, respectively, and are associated with excitotoxic cell death in these disorders. In this study, we investigated the control that complexes III and IV exert on glutamate release from the isolated nerve terminal. Results: Inhibition of complex III activity by 60-90% was necessary for a major increase in the rate of Ca2+-independent glutamate release to occur from isolated nerve terminals (synaptosomes) depolarized with 4-aminopyridine or KCl. Similarly, an 85-90% inhibition of complex IV activity was required before a major increase in the rate of Ca2+-independent glutamate release from depolarized synaptosomes was observed. Inhibition of complex III and IV activities by ~60% and above was required before rates of glutamate efflux from polarized synaptosomes were increased. Conclusions: These results suggest that nerve terminal mitochondria possess high reserves of complex III and IV activity and that high inhibition thresholds must be reached before excess glutamate is released from the nerve terminal. The implications of the results in the context of the relationship between electron transport chain enzyme deficiencies and excitotoxicity in neurodegenerative disorders are discussed.

  11. High-level inhibition of mitochondrial complexes III and IV is required to increase glutamate release from the nerve terminal

    LENUS (Irish Health Repository)

    Kilbride, Sean M

    2011-07-26

    Background: The activities of mitochondrial complex III (ubiquinol-cytochrome c reductase, EC 1.10.2.2) and complex IV (cytochrome c oxidase EC 1.9.3.1) are reduced by 30-70% in Huntington's disease and Alzheimer's disease, respectively, and are associated with excitotoxic cell death in these disorders. In this study, we investigated the control that complexes III and IV exert on glutamate release from the isolated nerve terminal. Results: Inhibition of complex III activity by 60-90% was necessary for a major increase in the rate of Ca2+-independent glutamate release to occur from isolated nerve terminals (synaptosomes) depolarized with 4-aminopyridine or KCl. Similarly, an 85-90% inhibition of complex IV activity was required before a major increase in the rate of Ca2+-independent glutamate release from depolarized synaptosomes was observed. Inhibition of complex III and IV activities by ~60% and above was required before rates of glutamate efflux from polarized synaptosomes were increased. Conclusions: These results suggest that nerve terminal mitochondria possess high reserves of complex III and IV activity and that high inhibition thresholds must be reached before excess glutamate is released from the nerve terminal. The implications of the results in the context of the relationship between electron transport chain enzyme deficiencies and excitotoxicity in neurodegenerative disorders are discussed.

  12. A direct algebraic method applied to obtain complex solutions of some nonlinear partial differential equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using some exact solutions of an auxiliary ordinary differential equation, a direct algebraic method is described to construct the exact complex solutions for nonlinear partial differential equations. The method is implemented for the NLS equation, a new Hamiltonian amplitude equation, the coupled Schrodinger-KdV equations and the Hirota-Maccari equations. New exact complex solutions are obtained.

  13. Requirements for construction of a functional hybrid complex of photosystem I and [NiFe]-hydrogenase.

    Science.gov (United States)

    Schwarze, Alexander; Kopczak, Marta J; Rögner, Matthias; Lenz, Oliver

    2010-04-01

    The development of cellular systems in which the enzyme hydrogenase is efficiently coupled to the oxygenic photosynthesis apparatus represents an attractive avenue to produce H2 sustainably from light and water. Here we describe the molecular design of the individual components required for the direct coupling of the O2-tolerant membrane-bound hydrogenase (MBH) from Ralstonia eutropha H16 to the acceptor site of photosystem I (PS I) from Synechocystis sp. PCC 6803. By genetic engineering, the peripheral subunit PsaE of PS I was fused to the MBH, and the resulting hybrid protein was purified from R. eutropha to apparent homogeneity via two independent affinity chromatographic steps. The catalytically active MBH-PsaE (MBH(PsaE)) hybrid protein could be isolated only from the cytoplasmic fraction. This was surprising, since the MBH is a substrate of the twin-arginine translocation system and was expected to reside in the periplasm. We conclude that the attachment of the additional PsaE domain to the small, electron-transferring subunit of the MBH completely abolished the export competence of the protein. Activity measurements revealed that the H2 production capacity of the purified MBH(PsaE) fusion protein was very similar to that of wild-type MBH. In order to analyze the specific interaction of MBH(PsaE) with PS I, His-tagged PS I lacking the PsaE subunit was purified via Ni-nitrilotriacetic acid affinity and subsequent hydrophobic interaction chromatography. Formation of PS I-hydrogenase supercomplexes was demonstrated by blue native gel electrophoresis. The results indicate a vital prerequisite for the quantitative analysis of MBH(PsaE)-PS I complex formation and its light-driven H2 production capacity by means of spectroelectrochemistry.

  14. Clueless, a protein required for mitochondrial function, interacts with the PINK1-Parkin complex in Drosophila

    Directory of Open Access Journals (Sweden)

    Aditya Sen

    2015-06-01

    Full Text Available. Loss of mitochondrial function often leads to neurodegeneration and is thought to be one of the underlying causes of neurodegenerative diseases such as Parkinson's disease (PD). However, the precise events linking mitochondrial dysfunction to neuronal death remain elusive. PTEN-induced putative kinase 1 (PINK1) and Parkin (Park), either of which, when mutated, is responsible for early-onset PD, mark individual mitochondria for destruction at the mitochondrial outer membrane. The specific molecular pathways that regulate signaling between the nucleus and mitochondria to sense mitochondrial dysfunction under normal physiological conditions are not well understood. Here, we show that Drosophila Clueless (Clu), a highly conserved protein required for normal mitochondrial function, can associate with Translocase of the outer membrane 20 (TOM20), Porin and PINK1, and is thus located at the mitochondrial outer membrane. Previously, we found that clu genetically interacts with park in Drosophila female germ cells. Here, we show that clu also genetically interacts with PINK1, and our epistasis analysis places clu downstream of PINK1 and upstream of park. In addition, Clu forms a complex with PINK1 and Park, further supporting that Clu links mitochondrial function with the PINK1-Park pathway. Lack of Clu causes PINK1 and Park to interact with each other, and clu mutants have decreased mitochondrial protein levels, suggesting that Clu can act as a negative regulator of the PINK1-Park pathway. Taken together, these results suggest that Clu directly modulates mitochondrial function, and that Clu's function contributes to the PINK1-Park pathway of mitochondrial quality control.

  15. Rybp, a polycomb complex-associated protein, is required for mouse eye development

    Directory of Open Access Journals (Sweden)

    Schreiber-Agus Nicole

    2007-04-01

    Full Text Available. Background: Rybp (Ring1 and YY1 binding protein) is a zinc finger protein which interacts with the members of the mammalian polycomb complexes. Previously we have shown that Rybp is critical for early embryogenesis and that haploinsufficiency of Rybp in a subset of embryos causes failure of neural tube closure. Here we investigated the requirement for Rybp in ocular development using four in vivo mouse models which resulted in either the ablation or overexpression of Rybp. Results: Our results demonstrate that loss of a single Rybp allele in conventional knockout mice often resulted in retinal coloboma, an incomplete closure of the optic fissure, characterized by perturbed localization of Pax6 but not of Pax2. In addition, about one half of Rybp-/- Rybp+/+ chimeric embryos also developed retinal colobomas and malformed lenses. Tissue-specific transgenic overexpression of Rybp in the lens resulted in abnormal fiber cell differentiation and severe lens opacification with increased levels of AP-2α and Sox2, and reduced levels of βA4-crystallin gene expression. Ubiquitous transgenic overexpression of Rybp in the entire eye caused abnormal retinal folds, corneal neovascularization, and lens opacification. Additional changes included defects in anterior eye development. Conclusion: These studies establish Rybp as a novel gene that has been associated with coloboma. Other genes linked to coloboma encode various classes of transcription factors such as BCOR, CBP, Chx10, Pax2, Pax6, Six3, Ski, Vax1 and Vax2. We propose that the multiple functions for Rybp in regulating mouse retinal and lens development are mediated by genetic, epigenetic and physical interactions between these genes and proteins.

  16. Modelling of complex heat transfer systems by the coupling method

    Energy Technology Data Exchange (ETDEWEB)

    Bacot, P.; Bonfils, R.; Neveu, A.; Ribuot, J. (Centre d' Energetique de l' Ecole des Mines de Paris, 75 (France))

    1985-04-01

    The coupling method proposed here is designed to reduce the size of the matrices which appear in the modelling of heat transfer systems. It consists in isolating the elements that can be modelled separately and, among the input variables of a component, identifying those which couple it to another component. By grouping these types of variable, one can identify a so-called coupling matrix of reduced size and relate it to the overall system. This matrix allows the calculation of the coupling temperatures as a function of the external stresses and of the state of the overall system at the previous instant. The internal temperatures of the components are then determined from the previous ones. Two examples of application are presented, one concerning a dwelling unit and the second a solar water heater.
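
The abstract does not give the matrices themselves, but the reduction it describes, eliminating each component's internal variables to leave a small system in the coupling temperatures only, reads as static condensation via a Schur complement. A generic sketch under that interpretation, with a random symmetric positive definite matrix standing in for a conduction matrix:

```python
import numpy as np

# Block system in internal (i) and coupling (c) temperatures:
#   [A_ii A_ic] [T_i]   [f_i]
#   [A_ci A_cc] [T_c] = [f_c]
# Eliminating the internal nodes gives the reduced "coupling matrix"
#   S = A_cc - A_ci A_ii^{-1} A_ic   (a Schur complement),
# so the coupling temperatures come from a much smaller system.
rng = np.random.default_rng(3)
n_i, n_c = 8, 2
A = rng.standard_normal((n_i + n_c, n_i + n_c))
A = A @ A.T + (n_i + n_c) * np.eye(n_i + n_c)    # SPD, like a conduction matrix
f = rng.standard_normal(n_i + n_c)               # external loads / stresses

Aii, Aic = A[:n_i, :n_i], A[:n_i, n_i:]
Aci, Acc = A[n_i:, :n_i], A[n_i:, n_i:]
S = Acc - Aci @ np.linalg.solve(Aii, Aic)        # reduced coupling matrix
g = f[n_i:] - Aci @ np.linalg.solve(Aii, f[:n_i])
T_c = np.linalg.solve(S, g)                      # coupling temperatures first
T_i = np.linalg.solve(Aii, f[:n_i] - Aic @ T_c)  # then internal temperatures

# Matches the direct solution of the full system
T_full = np.linalg.solve(A, f)
print(np.allclose(np.concatenate([T_i, T_c]), T_full))  # True
```

The payoff is that only S (here 2x2 instead of 10x10) must be assembled and solved at the system level; each component's internal solve can be done separately.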

  17. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    Science.gov (United States)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
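
The Lanczos-based iterations the abstract refers to exploit the symmetry A = A^T by replacing the Hermitian inner product with the unconjugated bilinear form. The sketch below implements COCG, a simple conjugate gradient variant for complex symmetric matrices, not the quasi-minimal residual method the paper proposes, and the test matrix is a random illustrative one:

```python
import numpy as np

def cocg(A, b, tol=1e-10, max_iter=200):
    """Conjugate Orthogonal CG for complex symmetric A (A == A.T, not Hermitian).
    Uses the unconjugated bilinear form r.T @ r where ordinary CG would use
    the Hermitian inner product."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                        # unconjugated: sum(r_i**2), complex in general
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

# Complex symmetric test system (loosely Helmholtz-like: symmetric, non-Hermitian)
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
A = M + M.T + 10 * np.eye(8)           # A == A.T, kept well conditioned by the shift
b = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x = cocg(A, b)
print(np.linalg.norm(A @ x - b))       # tiny residual
```

Note that numpy's `@` on complex vectors does not conjugate, which is exactly the bilinear form these methods require; COCG can break down where the QMR variant of the paper is more robust.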

  18. A fluorescence anisotropy method for measuring protein concentration in complex cell culture media.

    Science.gov (United States)

    Groza, Radu Constantin; Calvet, Amandine; Ryder, Alan G

    2014-04-22

    The rapid, quantitative analysis of the complex cell culture media used in biopharmaceutical manufacturing is of critical importance. Requirements for cell culture media composition profiling, or changes in specific analyte concentrations (e.g. amino acids in the media or product protein in the bioprocess broth) often necessitate the use of complicated analytical methods and extensive sample handling. Rapid spectroscopic methods like multi-dimensional fluorescence (MDF) spectroscopy have been successfully applied for the routine determination of compositional changes in cell culture media and bioprocess broths. Quantifying macromolecules in cell culture media is a specific challenge as there is a need to implement measurements rapidly on the prepared media. However, the use of standard fluorescence spectroscopy is complicated by the emission overlap from many media components. Here, we demonstrate how combining anisotropy measurements with standard total synchronous fluorescence spectroscopy (TSFS) provides a rapid, accurate quantitation method for cell culture media. Anisotropy provides emission resolution between large and small fluorophores while TSFS provides a robust measurement space. Model cell culture media was prepared using yeastolate (2.5 mg mL(-1)) spiked with bovine serum albumin (0 to 5 mg mL(-1)). Using this method, protein emission is clearly discriminated from background yeastolate emission, allowing for accurate bovine serum albumin (BSA) quantification over a 0.1 to 4.0 mg mL(-1) range with a limit of detection (LOD) of 13.8 μg mL(-1). Copyright © 2014. Published by Elsevier B.V.
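
The anisotropy measurement itself is computed from the parallel- and perpendicular-polarized emission intensities. A minimal helper, with intensity values and G factor invented for illustration:

```python
def anisotropy(I_vv, I_vh, G=1.0):
    """Steady-state fluorescence anisotropy r = (Ivv - G*Ivh) / (Ivv + 2*G*Ivh),
    where G corrects for the detection system's polarization bias."""
    return (I_vv - G * I_vh) / (I_vv + 2 * G * I_vh)

# Large, slowly tumbling proteins give high r; small free fluorophores give low r,
# which is the size-based emission resolution exploited above (numbers invented).
print(round(anisotropy(1200.0, 400.0, G=1.1), 3))  # 0.365
```

Combining this scalar per excitation/emission pair with the TSFS scan yields the anisotropy-resolved measurement space the abstract describes.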

  19. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    International Nuclear Information System (INIS)

    Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David

    2006-01-01

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
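
The surrogate logic can be sketched in a few lines. This is not the authors' pipeline: it uses shuffle surrogates (null: i.i.d. noise) and lag-1 autocorrelation as the discriminating statistic in place of algorithmic complexity, on a synthetic sinusoid-in-noise series standing in for GW data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048
t = np.arange(n)
# Weak deterministic component buried in white noise
signal = 0.5 * np.sin(2 * np.pi * t / 64) + rng.standard_normal(n)

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

stat_data = lag1_autocorr(signal)
# Null hypothesis: the series is i.i.d. noise. Shuffling destroys temporal
# structure while preserving the amplitude distribution.
surrogate_stats = [lag1_autocorr(rng.permutation(signal)) for _ in range(99)]
# One-sided rank test against 99 surrogates: rank 99 means p < 0.01
rank = sum(s < stat_data for s in surrogate_stats)
print(rank)  # the data should out-rank essentially all surrogates
```

The paper's stronger setup replaces the shuffle with surrogates that preserve linear correlations and the statistic with a complexity estimate, so that specifically *deterministic* (not merely correlated) structure is detected.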

  20. Arc Requires PSD95 for Assembly into Postsynaptic Complexes Involved with Neural Dysfunction and Intelligence

    Directory of Open Access Journals (Sweden)

    Esperanza Fernández

    2017-10-01

    Full Text Available. Arc is an activity-regulated neuronal protein, but little is known about its interactions, assembly into multiprotein complexes, and role in human disease and cognition. We applied an integrated proteomic and genetic strategy by targeting a tandem affinity purification (TAP) tag and Venus fluorescent protein into the endogenous Arc gene in mice. This allowed biochemical and proteomic characterization of native complexes in wild-type and knockout mice. We identified many Arc-interacting proteins, of which PSD95 was the most abundant. PSD95 was essential for Arc assembly into 1.5-MDa complexes and activity-dependent recruitment to excitatory synapses. Integrating human genetic data with proteomic data showed that Arc-PSD95 complexes are enriched in schizophrenia, intellectual disability, autism, and epilepsy mutations and normal variants in intelligence. We propose that Arc-PSD95 postsynaptic complexes potentially affect human cognitive function.

  1. A primary method for the complex calibration of a hydrophone from 1 Hz to 2 kHz

    Science.gov (United States)

    Slater, W. H.; E Crocker, S.; Baker, S. R.

    2018-02-01

    A primary calibration method is demonstrated to obtain the magnitude and phase of the complex sensitivity of a hydrophone at frequencies between 1 Hz and 2 kHz. The measurement is performed in a coupler reciprocity chamber (‘coupler’), a closed test chamber in which time-harmonic oscillations in pressure can be achieved and the reciprocity conditions required for a primary calibration can be realized. Relevant theory is reviewed and the reciprocity parameter updated for the complex measurement. Systematic errors and corrections for magnitude are reviewed and more are added for phase. The combined expanded uncertainties of the magnitude and phase of the complex sensitivity at 1 Hz were 0.1 dB re 1 V μPa-1 and ±1°, respectively. Complex sensitivity, sensitivity magnitude, and phase measurements are presented for an example primary reference hydrophone.
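
Once the complex sensitivity has been measured, reporting it as a magnitude in dB re 1 V/μPa plus a phase in degrees is a short conversion. The voltage and pressure phasors below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical single-frequency phasors from a coupler measurement
V = 2.0e-4 * np.exp(1j * np.deg2rad(-3.0))   # hydrophone output voltage, V
P = 1.0e2                                    # chamber pressure, Pa (phase reference)

M = V / P                                    # complex sensitivity, V/Pa
mag_db = 20 * np.log10(np.abs(M) / 1e6)      # 1 Pa = 1e6 uPa, so V/Pa -> V/uPa divides by 1e6
phase_deg = np.degrees(np.angle(M))

print(round(mag_db, 1), round(phase_deg, 1))  # -234.0 -3.0
```

The reciprocity procedure of the paper determines V and P absolutely from transfer measurements between transducers rather than from a reference sensor, which is what makes the calibration primary.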

  2. γ-Tubulin complex in Trypanosoma brucei: molecular composition, subunit interdependence and requirement for axonemal central pair protein assembly.

    Science.gov (United States)

    Zhou, Qing; Li, Ziyin

    2015-11-01

    γ-Tubulin complex constitutes a key component of the microtubule-organizing center and nucleates microtubule assembly. This complex differs in complexity in different organisms: the budding yeast contains the γ-tubulin small complex (γTuSC) composed of γ-tubulin, gamma-tubulin complex protein (GCP)2 and GCP3, whereas animals contain the γ-tubulin ring complex (γTuRC) composed of γTuSC and three additional proteins, GCP4, GCP5 and GCP6. In Trypanosoma brucei, the composition of the γ-tubulin complex remains elusive, and it is not known whether it also regulates assembly of the subpellicular microtubules and the spindle microtubules. Here we report that the γ-tubulin complex in T. brucei is composed of γ-tubulin and three GCP proteins, GCP2-GCP4, and is primarily localized in the basal body throughout the cell cycle. Depletion of GCP2 and GCP3, but not GCP4, disrupted the axonemal central pair microtubules, but not the subpellicular microtubules and the spindle microtubules. Furthermore, we showed that the γTuSC is required for assembly of two central pair proteins and that γTuSC subunits are mutually required for stability. Together, these results identified an unusual γ-tubulin complex in T. brucei, uncovered an essential role of γTuSC in central pair protein assembly, and demonstrated the interdependence of individual γTuSC components for maintaining a stable complex. © 2015 John Wiley & Sons Ltd.

  3. [Children's medically complex diseases unit. A model required in all our hospitals].

    Science.gov (United States)

    Climent Alcalá, Francisco José; García Fernández de Villalta, Marta; Escosa García, Luis; Rodríguez Alonso, Aroa; Albajara Velasco, Luis Adolfo

    2018-01-01

    The increase in survival of children with severe diseases has led to a rise in children with chronic diseases, sometimes with lifelong disabilities. In 2008, a unit for the specific care of medically complex children (MCC) was created in Hospital La Paz. Objective: To describe the work and care activities of this unit. Patients and methods: An analysis was performed on all discharge reports of the unit between January 2014 and July 2016. Results: The MCC unit has 6 beds and a daily outpatient clinic. A total of 1,027 patients have been treated since the creation of the unit, 243 of them since 2014. The median age was 24.2 months (IQR: 10.21-84.25). The large majority (92.59%) have multiple diseases; the most frequent chronic conditions observed were neurological (76.95%), gastrointestinal (63.78%), and respiratory diseases (61.72%). More than two-thirds (69.54%) of MCC are dependent on technology, 53.49% on respiratory support, and 35.80% on nutritional support. Hospital admission rates have increased annually. There have been 403 admissions since 2014, of which 8.93% were re-admissions within 30 days of hospital discharge. The median stay during 2014-2016 was 6 days (IQR: 3-14). The occupancy rate has been above 100% for this period. Currently, 210 patients remain on follow-up (86.42%), and 11 children (4.53%) were discharged to their referral hospitals. The mortality rate is 9.05% (22 deaths). The main condition of these 22 patients was neurological (9 patients). Infectious diseases were the leading cause of death. Conclusions: MCC should be treated in specialized units in tertiary or high-level hospitals. Copyright © 2016 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.

  4. Using mixed methods to develop and evaluate complex interventions in palliative care research.

    Science.gov (United States)

    Farquhar, Morag C; Ewing, Gail; Booth, Sara

    2011-12-01

    There is increasing interest in combining qualitative and quantitative research methods to provide comprehensiveness and greater knowledge yield. Mixed methods are valuable in the development and evaluation of complex interventions. They are therefore particularly valuable in palliative care research, where the majority of interventions are complex and the identification of outcomes particularly challenging. This paper aims to introduce the role of mixed methods in the development and evaluation of complex interventions in palliative care, and how they may be used in palliative care research. The paper defines mixed methods and outlines why and how mixed methods are used to develop and evaluate complex interventions, with a pragmatic focus on design and data collection issues and data analysis. Useful texts are signposted and illustrative examples provided of mixed method studies in palliative care, including a detailed worked example of the development and evaluation of a complex intervention in palliative care for breathlessness. Key challenges to conducting mixed methods in palliative care research are identified in relation to data collection, data integration in analysis, costs and dissemination, and how these might be addressed. The development and evaluation of complex interventions in palliative care benefit from the application of mixed methods. Mixed methods enable better understanding of whether and how an intervention works (or does not work) and inform the design of subsequent studies. However, they can be challenging: mixed method studies in palliative care will benefit from working with agreed protocols, multidisciplinary teams and engaging staff with appropriate skill sets.

  5. Using Project Complexity Determinations to Establish Required Levels of Project Rigor

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Thomas D.

    2015-10-01

    This presentation discusses the project complexity determination process that was developed by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office for implementation at the Nevada National Security Site (NNSS). The complexity determination process was developed to address the diversity of NNSS project types, sizes, and complexity; to fill the need for a single procedure with provision for tailoring the level of rigor to the project type, size, and complexity; to provide consistent, repeatable, effective application of project management processes across the enterprise; and to achieve higher levels of efficiency in project delivery. These needs are illustrated by the wide diversity of NNSS projects: Defense Experimentation, Global Security, weapons tests, military training areas, sensor development and testing, training in realistic environments, intelligence community support, environmental restoration/waste management, and disposal of radioactive waste, among others.

  6. The Videographic Requirements Gathering Method for Adolescent-Focused Interaction Design

    Directory of Open Access Journals (Sweden)

    Tamara Peyton

    2014-08-01

    Full Text Available. We present a novel method for conducting requirements gathering with adolescent populations. Called videographic requirements gathering, this technique makes use of mobile phone data capture and participant creation of media images. The videographic requirements gathering method can help researchers and designers gain intimate insight into adolescent lives while simultaneously reducing power imbalances. We provide a rationale for this approach, pragmatics of using the method, and advice on overcoming common challenges facing researchers and designers relying on this technique.

  7. Purohit's spectrophotometric method for determination of stability constants of complexes using Job's curves

    International Nuclear Information System (INIS)

    Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.

    1999-01-01

    A spectrophotometric method for the determination of stability constants making use of Job's curves has been developed. Using this method, stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For the sake of comparison, values of the stability constants were also determined using Harvey and Manning's method. The values obtained by the two methods compare well. This new method has been named Purohit's method. (author)
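
The continuous-variation (Job) curves such methods build on can be simulated directly for a 1:1 complex: the curve peaks at the stoichiometric mole fraction, and the stability constant can be read back from the curve's height. The total concentration and K below are invented for illustration:

```python
import numpy as np

def complex_conc(cM, cL, K):
    """Equilibrium [ML] for 1:1 binding M + L <-> ML, the physical root of
    y**2 - (cM + cL + 1/K)*y + cM*cL = 0."""
    s = cM + cL + 1.0 / K
    return (s - np.sqrt(s * s - 4.0 * cM * cL)) / 2.0

C_total = 1e-4                      # constant total concentration (continuous variations)
K_true = 5e4                        # illustrative stability constant
x = np.linspace(0.01, 0.99, 99)     # mole fraction of metal
ml = complex_conc(x * C_total, (1 - x) * C_total, K_true)  # Job's curve (∝ absorbance)

x_max = x[np.argmax(ml)]            # the curve peaks at the stoichiometric ratio
y = ml.max()
K_est = y / ((C_total / 2 - y) ** 2)  # K = [ML] / ([M][L]) at the 1:1 peak
print(x_max, K_est)  # peak at 0.5; K recovered to within rounding
```

In practice absorbance rather than [ML] is measured, so the molar absorptivity of the complex must be factored out before K is extracted from the curvature.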

  8. Development of an Evaluation Method for the Design Complexity of Computer-Based Displays

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Kang, Hyun Gook; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The importance of the design of human machine interfaces (HMIs) for human performance and the safety of process industries has long been recognized. Especially in the case of nuclear power plants (NPPs), HMIs have significant implications for safety because poor HMIs can impair the decision making ability of human operators. In order to support and increase the decision making ability of human operators, advanced HMIs based on up-to-date computer technology are provided. Human operators in an advanced main control room (MCR) acquire information through video display units (VDUs) and a large display panel (LDP), which is required for the operation of NPPs. These computer-based displays contain a huge amount of information and present it with a variety of formats compared to those of a conventional MCR. For example, these displays contain more display elements such as abbreviations, labels, icons, symbols, coding, etc. As computer-based displays contain more information, the complexity of advanced displays becomes greater due to less distinctiveness of each display element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. This study covers the early phase in the development of an evaluation method for the design complexity of computer-based displays. To this end, a series of existing studies were reviewed to suggest an appropriate concept that is serviceable to unravel this problem.

  9. A new spinning reserve requirement forecast method for deregulated electricity markets

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    Ancillary services are necessary for maintaining the security and reliability of power systems and constitute an important part of trade in competitive electricity markets. Spinning Reserve (SR) is one of the most important ancillary services for preserving power system stability and integrity in response to the contingencies and disturbances that continuously occur in power systems. Hence, an accurate day-ahead forecast of the SR requirement helps the Independent System Operator (ISO) to conduct a reliable and economic operation of the power system. However, the SR signal has complex, non-stationary and volatile behavior along the time domain and depends greatly on system load. In this paper, a new hybrid forecast engine is proposed for SR requirement prediction. The proposed forecast engine has an iterative training mechanism composed of the Levenberg-Marquardt (LM) learning algorithm and a Real Coded Genetic Algorithm (RCGA), implemented on a Multi-Layer Perceptron (MLP) neural network. The proposed forecast methodology is examined by means of real data of the Pennsylvania-New Jersey-Maryland (PJM) electricity market and the California ISO (CAISO) controlled grid. The obtained forecast results are presented and compared with those of other SR forecast methods. (author)
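
The full engine (an MLP trained by a Levenberg-Marquardt / real-coded GA hybrid) is beyond a short sketch, but the underlying setup, a neural network regressing the next SR value on lagged values, can be shown with plain gradient descent on a synthetic series. Everything below (series, architecture, learning rate) is an illustrative stand-in, not the authors' engine:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for a load-driven SR-requirement series: daily cycle plus noise
t = np.arange(400)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(400)

lag = 3                                    # forecast the next value from 3 lagged values
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

# One-hidden-layer MLP trained by full-batch gradient descent on mean squared error
W1 = 0.5 * rng.standard_normal((lag, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8); b2 = 0.0
lr = 0.05
for _ in range(300):
    h = np.tanh(X @ W1 + b1)               # forward pass
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(mse)  # falls well below the series variance (about 0.5)
```

The paper's hybrid replaces this plain gradient loop with LM steps for fast local convergence and RCGA moves to escape poor local minima, which matters for the non-stationary SR signal.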

  10. A new spinning reserve requirement forecast method for deregulated electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Amjady, Nima; Keynia, Farshid [Department of Electrical Engineering, Semnan University, Semnan (Iran)

    2010-06-15

    Ancillary services are necessary for maintaining the security and reliability of power systems and constitute an important part of trade in competitive electricity markets. Spinning Reserve (SR) is one of the most important ancillary services for preserving power system stability and integrity in response to the contingencies and disturbances that continuously occur in power systems. Hence, an accurate day-ahead forecast of the SR requirement helps the Independent System Operator (ISO) ensure reliable and economic operation of the power system. However, the SR signal exhibits complex, non-stationary and volatile behavior over time and depends greatly on system load. In this paper, a new hybrid forecast engine is proposed for SR requirement prediction. The proposed forecast engine has an iterative training mechanism composed of the Levenberg-Marquardt (LM) learning algorithm and a Real-Coded Genetic Algorithm (RCGA), implemented on a Multi-Layer Perceptron (MLP) neural network. The proposed forecast methodology is examined on real data from the Pennsylvania-New Jersey-Maryland (PJM) electricity market and the California ISO (CAISO) controlled grid. The obtained forecast results are presented and compared with those of other SR forecast methods. (author)

  11. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high-dimensional data and offers a wide variety of statistical models, computer-intensive methods and applications: network inference from the analysis of high-dimensional data; new developments for bootstrapping complex data; regression analysis for measuring downside reputational risk; statistical methods for research on the human genome dynamics; inference in non-Euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  12. Purification of 2-oxo acid dehydrogenase multienzyme complexes from ox heart by a new method.

    OpenAIRE

    Stanley, C J; Perham, R N

    1980-01-01

    A new method is described that allows the parallel purification of the pyruvate dehydrogenase and 2-oxoglutarate dehydrogenase multienzyme complexes from ox heart without the need for prior isolation of mitochondria. All the assayable activity of the 2-oxo acid dehydrogenase complexes in the disrupted tissue is made soluble by the inclusion of non-ionic detergents such as Triton X-100 or Tween-80 in the buffer used for the initial extraction of the enzyme complexes. The yields of the pyruvate...

  13. BRAND program complex for neutron-physical experiment simulation by the Monte-Carlo method

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.

    1984-01-01

    The capabilities of the BRAND program complex for simulating neutron and γ-radiation transport by the Monte Carlo method are briefly described. The complex includes the following modules: a geometry module, a source module, a detector module, and modules for sampling the particle direction after an interaction and the free path length. The complex is written in FORTRAN and implemented on the BESM-6 computer
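
    The modules listed above (source, free-path sampling, post-collision direction, detector) are the standard building blocks of any Monte Carlo transport code. The toy Python sketch below (not BRAND itself; the cross-sections and slab geometry are invented) shows how they combine to estimate transmission through a homogeneous slab.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma_t = 1.0     # total macroscopic cross-section (1/cm), assumed value
p_absorb = 0.3    # absorption probability per collision, assumed value
thickness = 2.0   # slab thickness (cm)

def simulate(n_particles):
    """Track particles through the slab; tally transmitted fraction."""
    transmitted = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                              # source: normal incidence
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)  # free-path sampling
            if x >= thickness:
                transmitted += 1                      # detector: transmission tally
                break
            if x < 0.0:
                break                                 # leaked back out of the slab
            if rng.random() < p_absorb:
                break                                 # absorbed at the collision
            mu = 2.0 * rng.random() - 1.0             # isotropic re-scatter direction
    return transmitted / n_particles

T = simulate(20_000)
```

    Each `if` branch plays the role of one BRAND-style module boundary: geometry decides whether the flight leaves the slab, the collision physics decides absorption versus scattering, and the detector accumulates the score.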

  14. Multiple domains of fission yeast Cdc19p (MCM2) are required for its association with the core MCM complex.

    Science.gov (United States)

    Sherman, D A; Pasion, S G; Forsburg, S L

    1998-07-01

    The members of the MCM protein family are essential eukaryotic DNA replication factors that form a six-member protein complex. In this study, we use antibodies to four MCM proteins to investigate the structure of and requirements for the formation of fission yeast MCM complexes in vivo, with particular regard to Cdc19p (MCM2). Gel filtration analysis shows that the MCM protein complexes are unstable and can be broken down to subcomplexes. Using coimmunoprecipitation, we find that Mis5p (MCM6) and Cdc21p (MCM4) are tightly associated with one another in a core complex with which Cdc19p loosely associates. Assembly of Cdc19p with the core depends upon Cdc21p. Interestingly, there is no obvious change in Cdc19p-containing MCM complexes through the cell cycle. Using a panel of Cdc19p mutants, we find that multiple domains of Cdc19p are required for MCM binding. These studies indicate that MCM complexes in fission yeast have distinct substructures, which may be relevant for function.

  15. The complex nature of mixed farming systems requires multidimensional actions supported by integrative research and development efforts

    DEFF Research Database (Denmark)

    González-García, E; Gourdine, J L; Alexandre, G

    2012-01-01

    the requirement for a change in research strategies and initiatives through the development of a complex but necessary multi-/inter-/trans-disciplinary teamwork spirit. We stress as essential the collaboration and active participation of local and regional actors, stakeholders and end-users in the identification...

  16. Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes

    KAUST Repository

    Piatek, Marek J.

    2013-07-12

    Background: Initiation of transcription is essential for most cellular responses to environmental conditions and for cell and tissue specificity. This process is regulated through numerous proteins, their ligands and mutual interactions, as well as interactions with DNA. The key regulatory proteins are transcription factors (TFs) and transcription co-factors (TcoFs). TcoFs are important because they modulate the transcription initiation process through interaction with TFs. In eukaryotes, transcription requires that TFs form different protein complexes with various nuclear proteins. To better understand transcription regulation, it is important to know the functional class of proteins interacting with TFs during transcription initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to the generally partial functional annotation of proteins. In this study we have developed a method to predict, using only the sequence composition of the interacting proteins, the functional class of human TF binding partners as (i) TF, (ii) TcoF, or (iii) other nuclear protein. This allows for complementing the annotation of the currently known pool of nuclear proteins. Since only knowledge of protein sequences is required in addition to protein interaction, the method should be easily applicable to many species. Results: Based on experimentally validated interactions between human TFs and different TFs, TcoFs and other nuclear proteins, our two classification systems (implemented as a web-based application) achieve high accuracies in distinguishing TFs and TcoFs from other nuclear proteins, and TFs from TcoFs, respectively. Conclusion: As demonstrated, given the fact that two proteins are capable of forming direct physical interactions, and using only information about their sequence composition, we have developed a completely new method for predicting the functional class of TF-interacting protein partners
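
    The method classifies interacting partners from sequence composition alone. A minimal sketch of that general idea (not the authors' trained classifier; the sequences and labels below are invented toy examples) is a nearest-centroid rule over 20-dimensional amino acid frequency vectors:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dim amino acid frequency vector for a protein sequence."""
    v = np.array([seq.count(a) for a in AA], dtype=float)
    return v / max(len(seq), 1)

# Toy labelled training sequences (hypothetical, not real annotations).
train = [
    ("MKKRRRSTAK", "TF"), ("MRKKQQNATK", "TF"),
    ("MLLVVAGGIL", "other"), ("MVVLLGGAAI", "other"),
]

# One composition centroid per functional class.
centroids = {}
for label in {"TF", "other"}:
    vecs = [composition(s) for s, l in train if l == label]
    centroids[label] = np.mean(vecs, axis=0)

def classify(seq):
    """Assign the class whose centroid is nearest in composition space."""
    v = composition(seq)
    return min(centroids, key=lambda l: np.linalg.norm(v - centroids[l]))
```

    A real system would use far richer composition features and a trained classifier, but the pipeline shape (sequence to fixed-length feature vector to class label) is the same.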

  17. Structural requirements and biological significance of interactions between peptides and the major histocompatibility complex

    DEFF Research Database (Denmark)

    Grey, H M; Buus, S; Colon, S

    1989-01-01

    Previous studies indicate that T cells recognize a complex between the major histocompatibility complex (MHC) restriction-element and peptide-antigen fragments. Two aspects of this complex formation are considered in this paper: (1) what is the nature of the specificity of the interactions that a...... of binding to Ia (i.e. determinant selection was operative), we found that about 40% of Ia-binding peptides were not immunogenic (i.e. there were also 'holes in the T-cell repertoire')....... responsiveness, we present data that suggest both mechanisms operate in concert with one another. Thus only about 30% of a collection of peptides that in sum represent the sequence of a protein molecule were found to bind to Ia. Although immunogenicity was restricted to those peptides that were capable...

  18. Multiple Stressors and Ecological Complexity Require A New Approach to Coral Reef Research

    Directory of Open Access Journals (Sweden)

    Linwood Hagan Pendleton

    2016-03-01

    Full Text Available Ocean acidification, climate change, and other environmental stressors threaten coral reef ecosystems and the people who depend upon them. New science reveals that these multiple stressors interact and may affect a multitude of physiological and ecological processes in complex ways. The interaction of multiple stressors and ecological complexity may mean that the negative effects on coral reef ecosystems will happen sooner and be more severe than previously thought. Yet, most research on the effects of global change on coral reefs focuses on one or a few stressors and pathways or outcomes (e.g., bleaching). Based on a critical review of the literature, we call for a regionally targeted strategy of mesocosm-level research that addresses this complexity and provides more realistic projections about coral reef impacts in the face of global environmental change. We believe similar approaches are needed for other ecosystems that face global environmental change.

  19. From Collective Knowledge to Intelligence : Pre-Requirements Analysis of Large and Complex Systems

    NARCIS (Netherlands)

    Liang, Peng; Avgeriou, Paris; He, Keqing; Xu, Lai

    2010-01-01

    Requirements engineering is essentially a social collaborative activity in which involved stakeholders have to closely work together to communicate, elicit, negotiate, define, confirm, and finally come up with the requirements for the system to be implemented or upgraded. In the development of large

  20. Membranes linked by trans-SNARE complexes require lipids prone to non-bilayer structure for progression to fusion.

    Science.gov (United States)

    Zick, Michael; Stroupe, Christopher; Orr, Amy; Douville, Deborah; Wickner, William T

    2014-01-01

    Like other intracellular fusion events, the homotypic fusion of yeast vacuoles requires a Rab GTPase, a large Rab effector complex, SNARE proteins which can form a 4-helical bundle, and the SNARE disassembly chaperones Sec17p and Sec18p. In addition to these proteins, specific vacuole lipids are required for efficient fusion in vivo and with the purified organelle. Reconstitution of vacuole fusion with all purified components reveals that high SNARE levels can mask the requirement for a complex mixture of vacuole lipids. At lower, more physiological SNARE levels, neutral lipids with small headgroups that tend to form non-bilayer structures (phosphatidylethanolamine, diacylglycerol, and ergosterol) are essential. Membranes without these three lipids can dock and complete trans-SNARE pairing but cannot rearrange their lipids for fusion. DOI: http://dx.doi.org/10.7554/eLife.01879.001.

  1. A novel method of complex evaluation of meibomian glands morphological and functional state

    Directory of Open Access Journals (Sweden)

    V. N. Trubilin

    2014-01-01

    Full Text Available A novel method that provides complex assessment of the morphological and functional state of meibomian glands (biometry of meibomian glands) was developed. The results of complex examination (including meibomian gland biometry), correlation analysis data and clinical findings demonstrate a direct association between the objective signs of meibomian gland dysfunction (i.e., dysfunction by biomicroscopy, tear film break-up time / TBUT, symptomatic TBUT, compression testing), the subjective signs (patient's complaints) and the parameters of meibomian gland biometry. A high direct correlation between the biometric index and the compression test result was revealed (p = 0.002, Spearman's rank correlation coefficient = 0.6644). Meibomian gland dysfunction is characterized by biometric parameter abnormalities, i.e., dilatation of meibomian gland orifices, decreased distance between meibomian gland orifices, and partial or total atrophy of meibomian glands (even up to gland collapse with visual reduction of the gland and increased distance between the glands). Suppression of the inflammatory process and recovery of meibomian gland secretion improve the biometric parameters and result in the opening of meibomian gland orifices, liquefaction of clogs, evacuation of meibomian gland secretion, narrowing of meibomian gland orifices and an increased distance between them. The proposed method expands the armamentarium for diagnosing meibomian gland dysfunction and lipid-deficient dry eye. Meibomian gland biometry can be applied in specialized ophthalmological hospitals and outpatient departments. It is a simple procedure of short duration that does not require special equipment or professional skills. Meibomian gland biometry makes it possible to prescribe pathogenically targeted therapy and to improve quality of life.

  2. Handbook of methods for risk-based analysis of technical specification requirements

    International Nuclear Information System (INIS)

    Samanta, P.K.; Vesely, W.E.

    1994-01-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements were based on deterministic analysis and engineering judgment. Experience with plant operation indicates that some elements of the requirements are unnecessarily restrictive, while others may not be conducive to safety. Improvements in these requirements are facilitated by the availability of plant-specific Probabilistic Safety Assessments (PSAs). The use of risk- and reliability-based methods to improve TS requirements has gained wide interest because these methods can: quantitatively evaluate the risk and justify changes based on objective risk arguments; and provide a defensible basis for these requirements for regulatory applications. The US NRC Office of Research is sponsoring research to develop systematic risk-based methods to improve various aspects of TS requirements. The handbook of methods, which is being prepared, summarizes such risk-based methods. The scope of the handbook includes reliability- and risk-based methods for evaluating allowed outage times (AOTs), action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), defenses against common-cause failures, managing plant configurations, and scheduling maintenance. For each topic, the handbook summarizes methods of analysis and data needs, outlines the insights to be gained, lists additional references, and presents examples of evaluations
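
    For AOT evaluations, a commonly used risk measure is the incremental core damage probability accrued while the component is out of service, roughly (R1 - R0) * d for a single outage of duration d. A back-of-the-envelope sketch of that arithmetic (all rate values are illustrative assumptions, not taken from the handbook):

```python
# Hedged illustration of a risk-based allowed outage time (AOT) check.
# The rates below are invented for illustration, not plant data.

R0 = 5.0e-5   # baseline core damage frequency (per year)
R1 = 2.0e-4   # conditional frequency with the component out of service (per year)
d_hours = 72  # proposed AOT
d_years = d_hours / 8760.0

# Incremental risk of one outage of full duration d (single-downtime risk).
r_single = (R1 - R0) * d_years

# Yearly contribution, assuming f such outages per year on average.
f = 2.0
r_yearly = f * r_single

print(f"single-downtime risk: {r_single:.2e}")
print(f"yearly AOT risk contribution: {r_yearly:.2e}")
```

    Comparing `r_yearly` against an acceptance guideline is the kind of quantitative, defensible argument the handbook's AOT methods formalize.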

  3. Handbook of methods for risk-based analysis of Technical Specification requirements

    International Nuclear Information System (INIS)

    Samanta, P.K.; Vesely, W.E.

    1993-01-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements were based on deterministic analysis and engineering judgment. Experience with plant operation indicates that some elements of the requirements are unnecessarily restrictive, while others may not be conducive to safety. Improvements in these requirements are facilitated by the availability of plant-specific Probabilistic Safety Assessments (PSAs). The use of risk- and reliability-based methods to improve TS requirements has gained wide interest because these methods can: quantitatively evaluate the risk impact and justify changes based on objective risk arguments; and provide a defensible basis for these requirements for regulatory applications. The United States Nuclear Regulatory Commission (USNRC) Office of Research is sponsoring research to develop systematic risk-based methods to improve various aspects of TS requirements. The handbook of methods, which is being prepared, summarizes such risk-based methods. The scope of the handbook includes reliability- and risk-based methods for evaluating allowed outage times (AOTs), action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), defenses against common-cause failures, managing plant configurations, and scheduling maintenance. For each topic, the handbook summarizes methods of analysis and data needs, outlines the insights to be gained, lists additional references, and presents examples of evaluations

  4. Studies on the complexation of diclofenac sodium with β-cyclodextrin: Influence of method of preparation

    Science.gov (United States)

    Das, Subhraseema; Subuddhi, Usharani

    2015-11-01

    Inclusion complexes of diclofenac sodium (DS) with β-cyclodextrin (β-CD) were prepared in order to improve the solubility, dissolution and oral bioavailability of the poorly water-soluble drug. The effect of the method of preparation of the DS/β-CD inclusion complexes (ICs) was investigated. The ICs were prepared by microwave irradiation and also by the conventional methods of kneading, co-precipitation and freeze drying. Though the freeze-drying method is usually referred to as the gold standard among the conventional methods, its long processing time limits its utility. Microwave irradiation accomplishes the process in a very short span of time and is a more environmentally benign method. Better efficacy of the microwaved inclusion product (MW) was observed in terms of dissolution, antimicrobial activity and antibiofilm properties of the drug. Thus microwave irradiation can be utilized as an improved, time-saving and cost-effective method for the generation of DS/β-CD inclusion complexes.
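
    Dissolution profiles of the different inclusion products are typically compared via dissolution efficiency (area under the release curve relative to 100% release over the same interval). A small sketch of that comparison with invented profile data (not the paper's measurements):

```python
import numpy as np

# Hypothetical dissolution profiles (% drug released vs. time in minutes).
t = np.array([0.0, 10, 20, 30, 45, 60])
freeze_dried = np.array([0.0, 35, 55, 68, 78, 84])
microwaved = np.array([0.0, 42, 63, 75, 85, 90])

def dissolution_efficiency(t, y):
    """DE (%) = AUC of the release profile / area of 100% release."""
    auc = np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t))   # trapezoidal rule
    return auc / (100.0 * (t[-1] - t[0])) * 100.0

de_fd = dissolution_efficiency(t, freeze_dried)
de_mw = dissolution_efficiency(t, microwaved)
```

    A higher DE for the microwaved product is what "better efficacy in terms of dissolution" would look like quantitatively.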

  5. Structural requirements for the assembly of LINC complexes and their function in cellular mechanical stiffness

    International Nuclear Information System (INIS)

    Stewart-Hutchinson, P.J.; Hale, Christopher M.; Wirtz, Denis; Hodzic, Didier

    2008-01-01

    The evolutionarily conserved interactions between KASH and SUN domain-containing proteins within the perinuclear space establish physical connections, called LINC complexes, between the nucleus and the cytoskeleton. Here, we show that the KASH domains of Nesprins 1, 2 and 3 interact promiscuously with the luminal domains of Sun1 and Sun2. These constructs disrupt endogenous LINC complexes, as indicated by the displacement of endogenous Nesprins from the nuclear envelope. We also provide evidence that KASH domains most probably fit a pocket provided by SUN domains and that post-translational modifications are dispensable for that interaction. We demonstrate that disruption of endogenous LINC complexes affects cellular mechanical stiffness to an extent comparable to the loss of mechanical stiffness previously reported in embryonic fibroblasts derived from mice lacking A-type lamins, a mouse model of muscular dystrophies and cardiomyopathies. These findings support a model whereby physical connections between the nucleus and the cytoskeleton are mediated by interactions between diverse combinations of Sun proteins and Nesprins through their respective evolutionarily conserved domains. Furthermore, they emphasize, for the first time, the relevance of LINC complexes to cellular mechanical stiffness, suggesting a possible involvement of their disruption in various laminopathies, a group of human diseases linked to mutations of A-type lamins

  6. The TIP30 protein complex, arachidonic acid and coenzyme A are required for vesicle membrane fusion.

    Directory of Open Access Journals (Sweden)

    Chengliang Zhang

    Full Text Available Efficient membrane fusion has been successfully mimicked in vitro using artificial membranes and a number of cellular proteins that are currently known to participate in membrane fusion. However, these proteins are not sufficient to promote efficient fusion between biological membranes, indicating that critical fusogenic factors remain unidentified. We have recently identified a TIP30 protein complex containing TIP30, acyl-CoA synthetase long-chain family member 4 (ACSL4) and Endophilin B1 (Endo B1) that promotes the fusion of endocytic vesicles with Rab5a vesicles, which transport the endosomal acidification enzymes, vacuolar (H⁺)-ATPases (V-ATPases), to the early endosomes in vivo. Here, we demonstrate that the TIP30 protein complex facilitates the fusion of endocytic vesicles with Rab5a vesicles in vitro. Fusion of the two vesicles also depends on arachidonic acid, coenzyme A and the synthesis of arachidonyl-CoA by ACSL4. Moreover, the TIP30 complex is able to transfer arachidonyl groups onto phosphatidic acid (PA), producing a new lipid species that is capable of inducing close contact between membranes. Together, our data suggest that the TIP30 complex facilitates biological membrane fusion through modification of PA on membranes.

  7. The application of HP-GFC chromatographic method for the analysis of oligosaccharides in bioactive complexes

    Directory of Open Access Journals (Sweden)

    Savić Ivan

    2009-01-01

    Full Text Available The aim of this work was to optimize a GFC method for the analysis of bioactive metal (Cu, Co and Fe) complexes with oligosaccharides (dextran and pullulan). The bioactive metal complexes with oligosaccharides were synthesized by an original procedure. GFC was used to study the molecular weight distribution and degree of polymerization of the oligosaccharides and bioactive metal complexes. The metal binding in the complexes depends on the degree of polymerization of the ligand and the presence of OH groups in the coordination sphere of the central metal ion. The interactions between oligosaccharides and metal ions are very important in veterinary medicine, agriculture, pharmacy and medicine.
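
    In GFC, molecular weight is read off a calibration of log(MW) against elution volume built from standards of known size. A minimal sketch of that calibration step with invented data points (not the paper's measurements):

```python
import numpy as np

# Hypothetical calibration: elution volumes (mL) of dextran standards
# of known molecular weight; GFC calibration is linear in log(MW) vs. Ve.
ve_std = np.array([10.0, 12.0, 14.0, 16.0])
mw_std = np.array([1.0e6, 1.0e5, 1.0e4, 1.0e3])

slope, intercept = np.polyfit(ve_std, np.log10(mw_std), 1)

def mw_from_ve(ve):
    """Estimate molecular weight from an observed elution volume."""
    return 10 ** (slope * ve + intercept)

# Peak elution volume of an unknown complex fraction (assumed value).
mw_est = mw_from_ve(13.0)
```

    Applying `mw_from_ve` across an elution chromatogram yields the molecular weight distribution the abstract refers to.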

  8. Biocoordination chemistry. pH-metry titration method during study of biometal complexing with bioligands

    International Nuclear Information System (INIS)

    Dobrynina, N.A.

    1992-01-01

    The position of bioinorganic chemistry within the natural sciences, as well as the relations between bioinorganic and biocoordination chemistry, are considered. The content of chemical elements in the geosphere and biosphere is analyzed. Characteristic features of biometal complexation with bioligands are pointed out. By way of example, complex equilibria in solution are studied by the method of pH-metric titration using mathematical simulation. The advantages of combining these methods when studying biosystems are emphasized

  9. The relationship between the Wigner-Weyl kinetic formalism and the complex geometrical optics method

    OpenAIRE

    Maj, Omar

    2004-01-01

    The relationship between two different asymptotic techniques developed in order to describe the propagation of waves beyond the standard geometrical optics approximation, namely, the Wigner-Weyl kinetic formalism and the complex geometrical optics method, is addressed. More specifically, a solution of the wave kinetic equation, relevant to the Wigner-Weyl formalism, is obtained which yields the same wavefield intensity as the complex geometrical optics method. Such a relationship is also disc...

  10. 42 CFR 84.146 - Method of measuring the power and torque required to operate blowers.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Method of measuring the power and torque required... RESPIRATORY PROTECTIVE DEVICES Supplied-Air Respirators § 84.146 Method of measuring the power and torque.... These are used to facilitate timing. To determine the torque or horsepower required to operate the...

  11. KidReporter : a method for engaging children in making a newspaper to gather user requirements

    NARCIS (Netherlands)

    Bekker, M.M.; Beusmans, J.; Keyson, D.V.; Lloyd, P.A.; Bekker, M.M.; Markopoulos, P.; Tsikalkina, M.

    2002-01-01

    We describe a design method, called the KidReporter method, for gathering user requirements from children. Two school classes participated in making a newspaper about a zoo, to gather requirements for the design process of an interactive educational game. The educational game was developed to

  12. A numerical method for solving the 3D unsteady incompressible Navier Stokes equations in curvilinear domains with complex immersed boundaries

    Science.gov (United States)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow
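
    The momentum equations above are solved with a Jacobian-free Newton-Krylov method. As a minimal illustration of that solver class (not the paper's flow solver), the sketch below applies SciPy's `newton_krylov` to a 1D steady viscous Burgers problem, a toy stand-in for the nonlinear momentum equations; the grid size and viscosity are arbitrary choices.

```python
import numpy as np
from scipy.optimize import newton_krylov

# 1D steady viscous Burgers: u u_x = nu u_xx, with u(0) = 1, u(1) = -1.
n = 64
h = 1.0 / (n + 1)
nu = 0.1

def residual(u):
    """Nonlinear residual on the interior nodes (central differences)."""
    ub = np.concatenate(([1.0], u, [-1.0]))            # apply boundary values
    ux = (ub[2:] - ub[:-2]) / (2 * h)                  # first derivative
    uxx = (ub[2:] - 2 * ub[1:-1] + ub[:-2]) / h**2     # second derivative
    return ub[1:-1] * ux - nu * uxx

u0 = np.linspace(1.0, -1.0, n + 2)[1:-1]               # linear initial guess
# Jacobian-free Newton-Krylov: the Jacobian is never formed; matrix-vector
# products are approximated by finite differences of the residual.
u = newton_krylov(residual, u0, f_tol=1e-9)
```

    The same pattern scales to the 3D momentum equations: only a residual function is needed, which is exactly why Newton-Krylov suits complex curvilinear/immersed-boundary discretizations.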

  13. Requirement of the Mre11 complex and exonuclease 1 for activation of the Mec1 signaling pathway.

    Science.gov (United States)

    Nakada, Daisuke; Hirano, Yukinori; Sugimoto, Katsunori

    2004-11-01

    The large protein kinases, ataxia-telangiectasia mutated (ATM) and ATM-Rad3-related (ATR), orchestrate DNA damage checkpoint pathways. In budding yeast, ATM and ATR homologs are encoded by TEL1 and MEC1, respectively. The Mre11 complex consists of two highly related proteins, Mre11 and Rad50, and a third protein, Xrs2 in budding yeast or Nbs1 in mammals. The Mre11 complex controls the ATM/Tel1 signaling pathway in response to double-strand break (DSB) induction. We show here that the Mre11 complex functions together with exonuclease 1 (Exo1) in activation of the Mec1 signaling pathway after DNA damage and replication block. Mec1 controls the checkpoint responses following UV irradiation as well as DSB induction. Correspondingly, the Mre11 complex and Exo1 play an overlapping role in activation of DSB- and UV-induced checkpoints. The Mre11 complex and Exo1 collaborate in producing long single-stranded DNA (ssDNA) tails at DSB ends and promote Mec1 association with the DSBs. The Ddc1-Mec3-Rad17 complex associates with sites of DNA damage and modulates the Mec1 signaling pathway. However, Ddc1 association with DSBs does not require the function of the Mre11 complex and Exo1. Mec1 controls checkpoint responses to stalled DNA replication as well. Accordingly, the Mre11 complex and Exo1 contribute to activation of the replication checkpoint pathway. Our results provide a model in which the Mre11 complex and Exo1 cooperate in generating long ssDNA tracts and thereby facilitate Mec1 association with sites of DNA damage or replication block.

  14. Statistical methods for anomaly detection in the complex process; Methodes statistiques de detection d'anomalies de fonctionnement dans les processus complexes

    Energy Technology Data Exchange (ETDEWEB)

    Al Mouhamed, Mayez

    1977-09-15

    In a number of complex physical systems the accessible signals are often characterized by random fluctuations about a mean value. The fluctuations (signature) often transmit information about the state of the system that the mean value cannot predict. This study was undertaken to elaborate statistical methods of anomaly detection on the basis of signature analysis of the noise inherent in the process. The algorithm presented first learns the characteristics of normal operation of a complex process. It then detects small deviations from the normal behavior. The algorithm can be implemented in a medium-sized computer for on-line application. (author)
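
    The two phases described (learn a normal-operation noise signature, then flag small deviations from it) can be sketched with simple statistical features and a Mahalanobis-distance threshold. The features, data and threshold choice below are invented for illustration and are not the author's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def features(x):
    """Two simple noise-signature features of a signal segment."""
    return np.array([x.std(), np.abs(np.diff(x)).mean()])

# Phase 1: learn the signature of normal operation.
normal = [rng.standard_normal(256) for _ in range(200)]
F = np.array([features(x) for x in normal])
mu, cov = F.mean(axis=0), np.cov(F.T)
cov_inv = np.linalg.inv(cov)

def mahalanobis2(x):
    """Squared Mahalanobis distance of a segment from the normal signature."""
    d = features(x) - mu
    return d @ cov_inv @ d

# Threshold set from the empirical distribution of normal segments.
threshold = np.quantile([mahalanobis2(x) for x in normal], 0.99)

# Phase 2: flag a segment whose fluctuation level has shifted.
bad = 2.5 * rng.standard_normal(256)   # variance shift = anomaly
is_anomaly = mahalanobis2(bad) > threshold
```

    Both phases use only cheap statistics of the raw signal, which is consistent with the abstract's point that the method can run on-line on modest hardware.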

  15. Critical evaluation of the JDO API for the persistence and portability requirements of complex biological databases

    Directory of Open Access Journals (Sweden)

    Schwieger Michael

    2005-01-01

    Full Text Available Abstract Background: Complex biological database systems have become key computational tools used daily by scientists and researchers. Many of these systems must be capable of executing on multiple different hardware and software configurations, and are also often made available to users via the Internet. We have used the Java Data Objects (JDO) persistence technology to develop the database layer of such a system, known as the SigPath information management system. SigPath is an example of a complex biological database that needs to store various types of information connected by many relationships. Results: Using this system as an example, we perform a critical evaluation of current JDO technology and discuss the suitability of the JDO standard for achieving portability, scalability and performance. We show that JDO supports portability of the SigPath system from a relational database backend to an object database backend and achieves acceptable scalability. To answer the performance question, we have created the SigPath JDO application benchmark, which we distribute under the GNU General Public License. This benchmark can be used as an example of using JDO technology to create a complex biological database, and makes it possible for vendors and users of the technology to evaluate the performance of other JDO implementations for similar applications. Conclusions: The SigPath JDO benchmark and our discussion of JDO technology in the context of biological databases will be useful to bioinformaticians who design new complex biological databases and aim to create systems that can be ported easily to a variety of database backends.

  16. Complex interventions required to comprehensively educate allied health practitioners on evidence-based practice

    Directory of Open Access Journals (Sweden)

    Dizon JM

    2011-05-01

    Full Text Available Janine Margarita Dizon1,2, Karen Grimmer-Somers1 (1International Centre for Allied Health Evidence, University of South Australia, Adelaide, SA, Australia; 2University of Santo Tomas, Manila, Philippines). Abstract: There is currently no strong evidence regarding the most effective training approach for allied health professionals that will support them to consistently apply the best research evidence in daily practice. Current evidence-based practice training tends to be 'one size fits all', and is unlikely to be appropriate for all allied health disciplines because of the variability in their tasks and scope of practice. The scant body of evidence regarding the effectiveness of evidence-based practice training for allied health practitioners provides some support for improving knowledge and skills, but equivocal evidence about influencing behaviors and attitudes. We propose a new model of evidence-based practice training, based on the concept of complex interventions reported in the literature. We believe that by offering training in evidence-based practice based on complex interventions relevant to the needs of the attendees, using fixed and variable components, there may be greater success in significantly influencing knowledge, skills, attitudes, and behaviors. Keywords: complex interventions, evidence-based practice training, allied health

  17. Random walk-based similarity measure method for patterns in complex object

    Directory of Open Access Journals (Sweden)

    Liu Shihu

    2017-04-01

    Full Text Available This paper discusses the similarity of patterns in complex objects. A complex object is composed both of attribute information about patterns and of relational information between patterns. Bearing in mind the specificity of complex objects, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed; Algorithms 1 and 2 show the calculation procedure. This method makes full use of both the attribute information and the relational information. Finally, a synthetic example validates the proposed similarity measurement method.
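As a rough sketch of the idea, reachability under the relational information can be captured with a restart random walk over the adjacency matrix and then blended with an attribute-based similarity. The function names, restart probability, and weighting parameter below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def random_walk_similarity(adj, steps=3, restart=0.3):
    """Similarity of patterns from reachability under a restart random walk.
    A sketch of the idea only; parameter choices are illustrative."""
    P = adj / adj.sum(axis=1, keepdims=True)   # one-step transition probabilities
    n = adj.shape[0]
    R = np.eye(n)                              # each walk starts at its own pattern
    S = np.zeros((n, n))
    for _ in range(steps):
        R = (1 - restart) * (R @ P) + restart * np.eye(n)
        S += R                                 # accumulate multi-step reachability
    S /= steps
    return (S + S.T) / 2                       # similarity should be mutual

def combined_similarity(attr_sim, rel_sim, alpha=0.5):
    """Integrate attribute-based and relation-based similarity."""
    return alpha * attr_sim + (1 - alpha) * rel_sim
```

On a small star graph, two leaves connected only through the hub come out less similar to each other than either is to the hub, which matches the reachability intuition.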

  18. A path method for finding energy barriers and minimum energy paths in complex micromagnetic systems

    International Nuclear Information System (INIS)

    Dittrich, R.; Schrefl, T.; Suess, D.; Scholz, W.; Forster, H.; Fidler, J.

    2002-01-01

    Minimum energy paths and energy barriers are calculated for complex micromagnetic systems. The method is based on the nudged elastic band method and uses finite-element techniques to represent granular structures. The method was found to be robust and fast for both simple test problems as well as for large systems such as patterned granular media. The method is used to estimate the energy barriers in CoCr-based perpendicular recording media
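As a toy illustration of the nudged elastic band idea (not the finite-element micromagnetic implementation described here), the band relaxation and barrier estimate can be sketched on a 2D double-well potential; the function names, spring constant, and step size are illustrative assumptions:

```python
import numpy as np

def V(p):
    """Toy 2D double-well: minima at (-1, 0) and (1, 0), saddle at the origin."""
    x, y = p
    return (x**2 - 1)**2 + y**2

def grad_V(p):
    x, y = p
    return np.array([4 * x * (x**2 - 1), 2 * y])

def neb_barrier(start, end, n_images=9, k=1.0, step=0.01, iters=2000):
    """Relax an elastic band between two minima; return (path, barrier)."""
    path = np.linspace(start, end, n_images)
    path[1:-1, 1] += 0.2                             # start the band off the true path
    for _ in range(iters):
        new_path = path.copy()
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]          # local tangent estimate
            tau /= np.linalg.norm(tau)
            g = grad_V(path[i])
            f_perp = -(g - np.dot(g, tau) * tau)     # true force, perpendicular part
            f_par = k * (np.linalg.norm(path[i + 1] - path[i])
                         - np.linalg.norm(path[i] - path[i - 1])) * tau  # spring, parallel part
            new_path[i] = path[i] + step * (f_perp + f_par)
        path = new_path
    energies = np.array([V(p) for p in path])
    return path, energies.max() - V(path[0])
```

For this potential the relaxed band settles onto the y = 0 valley and the highest image sits near the saddle, so the estimated barrier approaches the exact value of 1.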

  19. Curvilinear immersed boundary method for simulating fluid structure interaction with complex 3D rigid bodies

    Science.gov (United States)

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2008-08-01

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A numerical method for solving the 3D unsteady incompressible Navier-Stokes equations in curvilinear domains with complex immersed boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions the FSI algorithm is unconditionally unstable even when strong coupling FSI is employed. For such cases, however, combining the strong coupling iteration with under-relaxation in conjunction with the Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the FSI
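The Aitken-accelerated under-relaxation used above to stabilize strongly coupled partitioned iterations can be sketched for a scalar stand-in coupling map; the function name and default parameters are illustrative, not the CURVIB implementation:

```python
def aitken_fsi(f, x0, omega0=0.5, tol=1e-10, max_iter=100):
    """Fixed-point coupling iteration with Aitken dynamic under-relaxation.
    A scalar sketch of the stabilization idea; names are illustrative."""
    x, r_prev, omega = x0, None, omega0
    for k in range(max_iter):
        r = f(x) - x                                 # coupling residual
        if abs(r) < tol:
            return x, k
        if r_prev is not None:
            omega = -omega * r_prev / (r - r_prev)   # Aitken update of the factor
        x += omega * r                               # relaxed interface update
        r_prev = r
    return x, max_iter
```

For the divergent stand-in map f(x) = -1.5x + 5 (fixed point x = 2, |f'| > 1, so plain fixed-point iteration diverges), the Aitken update recovers the solution in a few iterations.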

  20. A simple method for determining polymeric IgA-containing immune complexes.

    Science.gov (United States)

    Sancho, J; Egido, J; González, E

    1983-06-10

    A simplified assay to measure polymeric IgA-immune complexes in biological fluids is described. The assay is based upon the specific binding of a secretory component for polymeric IgA. In the first step, multimeric IgA (monomeric and polymeric) immune complexes are determined by the standard Raji cell assay. Secondly, labeled secretory component added to the assay is bound to polymeric IgA-immune complexes previously fixed to Raji cells, but not to monomeric IgA immune complexes. To avoid false positives due to possible complement-fixing IgM immune complexes, prior IgM immunoadsorption is performed. Using anti-IgM antiserum coupled to CNBr-activated Sepharose 4B this step is not time-consuming. Polymeric IgA has a low affinity constant and binds weakly to Raji cells, as Scatchard analysis of the data shows. Thus, polymeric IgA immune complexes do not bind to Raji cells directly through Fc receptors, but through complement breakdown products, as with IgG-immune complexes. Using this method, we have been successful in detecting specific polymeric-IgA immune complexes in patients with IgA nephropathy (Berger's disease) and alcoholic liver disease, as well as in normal subjects after meals of high protein content. This new, simple, rapid and reproducible assay might help to study the physiopathological role of polymeric IgA immune complexes in humans and animals.

  1. A new high-throughput LC-MS method for the analysis of complex fructan mixtures

    DEFF Research Database (Denmark)

    Verspreet, Joran; Hansen, Anders Holmgaard; Dornez, Emmie

    2014-01-01

    In this paper, a new liquid chromatography-mass spectrometry (LC-MS) method for the analysis of complex fructan mixtures is presented. In this method, columns with a trifunctional C18 alkyl stationary phase (T3) were used and their performance compared with that of a porous graphitized carbon (PGC...

  2. Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks

    Science.gov (United States)

    Renkewitz, Frank; Jahn, Georg

    2012-01-01

    We validate an eye-tracking method applicable for studying memory processes in complex cognitive tasks. The method is tested with a task on probabilistic inferences from memory. It provides valuable data on the time course of processing, thus clarifying previous results on heuristic probabilistic inference. Participants learned cue values of…

  3. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    Science.gov (United States)

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  4. An Improved Conceptually-Based Method for Analysis of Communication Network Structure of Large Complex Organizations.

    Science.gov (United States)

    Richards, William D., Jr.

    Previous methods for determining the communication structure of organizations work well for small or simple organizations, but are either inadequate or unwieldy for use with large complex organizations. An improved method uses a number of different measures and a series of successive approximations to order the communication matrix such that…

  5. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximum degree. (author). 1 ref

  6. Cognitive Task Complexity Effects on L2 Writing Performance: An Application of Mixed-Methods Approaches

    Science.gov (United States)

    Abdi Tabari, Mahmoud; Ivey, Toni A.

    2015-01-01

    This paper provides a methodological review of previous research on cognitive task complexity, since the term emerged in 1995, and investigates why much research was more quantitative rather than qualitative. Moreover, it sheds light onto the studies which used the mixed-methods approach and determines which version of the mixed-methods designs…

  7. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information correlated with the source is available at the receiver as side information. Complex processes in video encoding, such as motion vector estimation, can therefore be moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation of a low-density parity check (LDPC) coding method in the AWGN channel.
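The principle of shifting complexity to the receiver can be illustrated with a toy syndrome (coset) code: the transmitter sends only a short syndrome, and the receiver combines it with its own correlated side information to recover the frame. This sketch uses a (7,4) Hamming code rather than the LDPC construction evaluated in the paper:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is j in binary.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):
    """Transmitter: send only the 3-bit syndrome of a 7-bit frame."""
    return H @ x % 2

def decode(syndrome, y):
    """Receiver: correct side information y (at most 1 bit differs from x)."""
    s = (H @ y + syndrome) % 2                      # syndrome of the difference pattern
    pos = int("".join(str(b) for b in s[::-1]), 2)  # error position (0 = none)
    x_hat = y.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat
```

The transmitter does no motion estimation at all: it sends 3 bits instead of 7, and the receiver's side information absorbs the remaining uncertainty, exactly the asymmetry the Wyner-Ziv setup exploits.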

  8. Human-Chromatin-Related Protein Interactions Identify a Demethylase Complex Required for Chromosome Segregation

    Directory of Open Access Journals (Sweden)

    Edyta Marcon

    2014-07-01

    Full Text Available Chromatin regulation is driven by multicomponent protein complexes, which form functional modules. Deciphering the components of these modules and their interactions is central to understanding the molecular pathways these proteins are regulating, their functions, and their relation to both normal development and disease. We describe the use of affinity purifications of tagged human proteins coupled with mass spectrometry to generate a protein-protein interaction map encompassing known and predicted chromatin-related proteins. On the basis of 1,394 successful purifications of 293 proteins, we report a high-confidence (85% precision) network involving 11,464 protein-protein interactions among 1,738 different human proteins, grouped into 164 often overlapping protein complexes, with a particular focus on the family of JmjC-containing lysine demethylases, their partners, and their roles in chromatin remodeling. We show that RCCD1 is a partner of histone H3K36 demethylase KDM8 and demonstrate that both are important for cell-cycle-regulated transcriptional repression in centromeric regions and accurate mitotic division.

  9. A method for work modeling at complex systems: towards applying information systems in family health care units.

    Science.gov (United States)

    Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques

    2012-01-01

    Work in organizations requires a minimum level of consensus on the understanding of the practices performed. To adopt technological devices that support activities in environments where work is complex, characterized by interdependence among a large number of variables, understanding how work is done not only takes on even greater importance but also becomes a more difficult task. This study therefore aims to present a method for modeling work in complex systems, which improves knowledge about the way activities are performed where those activities do not simply consist of following procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to provide a method capable of giving a detailed and accurate view of how people perform their tasks, in order to apply information systems for supporting work in organizations.

  10. SEPT12–NDC1 Complexes Are Required for Mammalian Spermiogenesis

    Directory of Open Access Journals (Sweden)

    Tsung-Hsuan Lai

    2016-11-01

    Full Text Available Male factor infertility accounts for approximately 50 percent of infertile couples. The male factor-related causes of intracytoplasmic sperm injection failure include the absence of sperm, immotile sperm, immature sperm, abnormally structured sperm, and sperm with nuclear damage. Our knockout and knock-in mouse models demonstrated that SEPTIN12 (SEPT12) is vital for the formation of sperm morphological characteristics during spermiogenesis. In the clinical aspect, mutated SEPT12 in men results in oligozoospermia or teratozoospermia or both. Sperm with mutated SEPT12 revealed abnormal head and tail structures, decreased chromosomal condensation, and nuclear damage. Furthermore, several nuclear or nuclear membrane-related proteins have been identified as SEPT12 interactors through the yeast two-hybrid system, including the NDC1 transmembrane nucleoporin (NDC1). NDC1 is a major nuclear pore protein, and is critical for nuclear pore complex assembly and nuclear morphology maintenance in mammalian cells. Mutated NDC1 causes gametogenesis defects and skeletal malformations in mice, which were detected spontaneously in the A/J strain. In this study, we characterized the functional effects of SEPT12–NDC1 complexes during mammalian spermiogenesis. In mature human spermatozoa, SEPT12 and NDC1 mainly colocalize in the centrosome regions; NDC1 is only slightly co-expressed with SEPT12 at the annulus of the sperm tail. In addition, SEPT12 interacts with NDC1 in the male germ cell line through coimmunoprecipitation. During murine spermiogenesis, we observed that NDC1 was located at the nuclear membrane of spermatids and at the necks of mature spermatozoa. In male germ cell lines, NDC1 overexpression restricted the localization of SEPT12 to the nucleus and repressed the filament formation of SEPT12. In mouse sperm with mutated SEPT12, NDC1 dispersed around the manchette region of the sperm head and annulus, compared with concentrating at the sperm neck

  11. Requirement of histidine 217 for ubiquinone reductase activity (Qi site) in the cytochrome bc1 complex.

    Science.gov (United States)

    Gray, K A; Dutton, P L; Daldal, F

    1994-01-25

    Folding models suggest that the highly conserved histidine 217 of the cytochrome b subunit from the cytochrome bc1 complex is close to the quinone reductase (Qi) site. This histidine (bH217) in the cytochrome b polypeptide of the photosynthetic bacterium Rhodobacter capsulatus has been replaced with three other residues, aspartate (D), arginine (R), and leucine (L). bH217D and bH217R are able to grow photoheterotrophically and contain active cytochrome bc1 complexes (60% of wild-type activity), whereas the bH217L mutant is photosynthetically incompetent and contains a cytochrome bc1 complex that has only 10% of the wild-type activity. Single-turnover flash-activated electron transfer experiments show that cytochrome bH is reduced via the Qo site with near native rates in the mutant strains but that electron transfer between cytochrome bH and quinone bound at the Qi site is greatly slowed. These results are consistent with redox midpoint potential (Em) measurements of the cytochrome b subunit hemes and the Qi site quinone. The Em values of cyt bL and bH are approximately the same in the mutants and wild type, although the mutant strains have a larger relative concentration of what may be the high-potential form of cytochrome bH, called cytochrome b150. However, the redox properties of the semiquinone at the Qi site are altered significantly. The Qi site semiquinone stability constant of bH217R is 10 times higher than in the wild type, while in the other two strains (bH217D and bH217L) the stability constant is much lower than in the wild type. Thus H217 appears to have major effects on the redox properties of the quinone bound at the Qi site. These data are incorporated into a suggestion that H217 forms part of the binding pocket of the Qi site in a manner reminiscent of the interaction between quinone bound at the Qb site and H190 of the L subunit of the bacterial photosynthetic reaction center.

  12. What model resolution is required in climatological downscaling over complex terrain?

    Science.gov (United States)

    El-Samra, Renalda; Bou-Zeid, Elie; El-Fadel, Mutasem

    2018-05-01

    This study presents results from the Weather Research and Forecasting (WRF) model applied in climatological downscaling simulations over highly complex terrain along the Eastern Mediterranean. We sequentially downscale general circulation model results, for a mild and wet year (2003) and a hot and dry year (2010), to three local horizontal resolutions of 9, 3 and 1 km. Simulated near-surface hydrometeorological variables are compared at different time scales against data from an observational network over the study area comprising rain gauges, anemometers, and thermometers. The overall performance of WRF at 1 and 3 km horizontal resolution was satisfactory, with significant improvement over the 9 km downscaling simulation. The total yearly precipitation from WRF's 1 km and 3 km domains provided a quantitative measure of the potential errors for various hydrometeorological variables.

  13. On the required complexity of vehicle dynamic models for use in simulation-based highway design.

    Science.gov (United States)

    Brown, Alexander; Brennan, Sean

    2014-06-01

    This paper presents the results of a comprehensive project whose goal is to identify roadway design practices that maximize the margin of safety between the friction supply and friction demand. This study is motivated by the concern for increased accident rates on curves with steep downgrades, geometries that contain features that interact in all three dimensions - planar curves, grade, and superelevation. This complexity makes the prediction of vehicle skidding quite difficult, particularly for simple simulation models that have historically been used for road geometry design guidance. To obtain estimates of friction margin, this study considers a range of vehicle models, including: a point-mass model used by the American Association of State Highway Transportation Officials (AASHTO) design policy, a steady-state "bicycle model" formulation that considers only per-axle forces, a transient formulation of the bicycle model commonly used in vehicle stability control systems, and finally, a full multi-body simulation (CarSim and TruckSim) regularly used in the automotive industry for high-fidelity vehicle behavior prediction. The presence of skidding--the friction demand exceeding supply--was calculated for each model considering a wide range of vehicles and road situations. The results indicate that the most complicated vehicle models are generally unnecessary for predicting skidding events. However, there are specific maneuvers, namely braking events within lane changes and curves, which consistently predict the worst-case friction margins across all models. This suggests that any vehicle model used for roadway safety analysis should include the effects of combined cornering and braking. The point-mass model typically used by highway design professionals may not be appropriate to predict vehicle behavior on high-speed curves during braking in low-friction situations. However, engineers can use the results of this study to help select the appropriate vehicle dynamic
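The gap between the point-mass lateral demand and the demand under combined cornering and braking can be sketched with the usual vector-sum formulas; these are illustrative textbook expressions, not the project's simulation models, and design policy adds further factors such as grade:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def lateral_demand(v, radius, superelevation=0.0):
    """Point-mass lateral friction demand on a curve: v^2/(g*R) - e."""
    return v**2 / (G * radius) - superelevation

def combined_demand(v, radius, brake_decel, superelevation=0.0):
    """Friction demand when braking while cornering (vector sum of demands)."""
    return math.hypot(lateral_demand(v, radius, superelevation), brake_decel / G)
```

At 30 m/s on a 300 m radius curve, the lateral demand alone is about 0.31; adding a modest 3 m/s^2 braking raises the combined demand to about 0.43, which can exceed the friction supply on a wet road even though the cornering-only check passes.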

  14. Papillae formation on trichome cell walls requires the function of the mediator complex subunit Med25.

    Science.gov (United States)

    Fornero, Christy; Suo, Bangxia; Zahde, Mais; Juveland, Katelyn; Kirik, Viktor

    2017-11-01

    Glassy Hair 1 (GLH1) gene that promotes papillae formation on trichome cell walls was identified as a subunit of the transcriptional mediator complex MED25. The MED25 gene is shown to be expressed in trichomes. The expression of the trichome development marker genes GLABRA2 (GL2) and Ethylene Receptor2 (ETR2) is not affected in the glh1 mutant. Presented data suggest that Arabidopsis MED25 mediator component is likely involved in the transcription of genes promoting papillae deposition in trichomes. The plant cell wall plays an important role in communication, defense, organization and support. The importance of each of these functions varies by cell type. Specialized cells, such as Arabidopsis trichomes, exhibit distinct cell wall characteristics including papillae. To better understand the molecular processes important for papillae deposition on the cell wall surface, we identified the GLASSY HAIR 1 (GLH1) gene, which is necessary for papillae formation. We found that a splice-site mutation in the component of the transcriptional mediator complex MED25 gene is responsible for the near papillae-less phenotype of the glh1 mutant. The MED25 gene is expressed in trichomes. Reporters for trichome developmental marker genes GLABRA2 (GL2) and Ethylene Receptor2 (ETR2) were not affected in the glh1 mutant. Collectively, the presented results show that MED25 is necessary for papillae formation on the cell wall surface of leaf trichomes and suggest that the Arabidopsis MED25 mediator component is likely involved in the transcription of a subset of genes that promote papillae deposition in trichomes.

  15. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples

  16. Determining Complex Structures using Docking Method with Single Particle Scattering Data

    Directory of Open Access Journals (Sweden)

    Haiguang Liu

    2017-04-01

    Full Text Available Protein complexes are critical for many molecular functions. Due to the intrinsic flexibility and dynamics of complexes, their structures are more difficult to determine using conventional experimental methods than those of individual subunits. One of the major challenges is the crystallization of protein complexes. Using X-ray free electron lasers (XFELs), it is possible to collect scattering signals from non-crystalline protein complexes, but data interpretation is more difficult because of unknown orientations. Here, we propose a hybrid approach to determine protein complex structures by combining XFEL single particle scattering data with computational docking methods. Using simulated data, we demonstrate that a small set of single particle scattering data collected at random orientations can be used to distinguish the native complex structure from decoys generated using docking algorithms. The results also indicate that a small set of single particle scattering data is superior to a spherically averaged intensity profile in distinguishing complex structures. Given that XFEL experimental data are difficult to acquire and of low abundance, this hybrid approach should find wide application in data interpretation.

  17. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    Science.gov (United States)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.

  18. Modern methods of surveyor observations in opencast mining under complex hydrogeological conditions.

    Science.gov (United States)

    Usoltseva, L. A.; Lushpei, V. P.; Mursin, VA

    2017-10-01

    The article considers the possibility of linking the modern methods of surveying security of open mining works to improve industrial safety in the Primorsky Territory, as well as their use in the educational process. Industrial Safety in the management of Surface Mining depends largely on the applied assessment methods and methods of stability of pit walls and slopes of dumps in the complex mining and hydro-geological conditions.

  19. 5 CFR 610.404 - Requirement for time-accounting method.

    Science.gov (United States)

    2010-01-01

    ... REGULATIONS HOURS OF DUTY Flexible and Compressed Work Schedules § 610.404 Requirement for time-accounting method. An agency that authorizes a flexible work schedule or a compressed work schedule under this...

  20. [Precautions of physical performance requirements and test methods during product standard drafting process of medical devices].

    Science.gov (United States)

    Song, Jin-Zi; Wan, Min; Xu, Hui; Yao, Xiu-Jun; Zhang, Bo; Wang, Jin-Hong

    2009-09-01

    The major idea of this article is to discuss standardization and normalization for the product standard of medical devices. Analyze the problem related to the physical performance requirements and test methods during product standard drafting process and make corresponding suggestions.

  1. Complex Domains Call for Automation but Automation Requires More Knowledge and Learning

    DEFF Research Database (Denmark)

    Madsen, Erik Skov; Mikkelsen, Lars Lindegaard

    studies investigate operation and automation of oil and gas production in the North Sea. Semi-structured interviews, surveys, and observations are the main methods used. The paper provides a novel conceptual framework around which management may generate discussions about productivity and the need...

  2. A complex RARE is required for the majority of Nedd9 embryonic expression.

    Science.gov (United States)

    Knutson, Danielle C; Clagett-Dame, Margaret

    2015-02-01

    Neural precursor cell expressed, developmentally down-regulated 9 (Nedd9, Casl, Hef1, p105cas, Ef1) is a scaffolding protein that assembles complexes involved in regulating cell adhesion, migration, division, and survival. Nedd9 is found very early in the developing embryonic nervous system. A highly conserved complex retinoic acid response element (RARE) is located 485 base pairs (bp) upstream of exon 2B in the promoter of the Nedd9 gene. Mice transgenic for a 5.2 kilobase (kb) region of the 2B Nedd9 promoter containing the RARE upstream of a lacZ reporter gene [Nedd9(RARE)-lacZ] show a large subset of the normal endogenous Nedd9 expression including that in the caudal hindbrain neuroepithelium, spinal cord, dorsal root ganglia (drg) and migrating neural crest (ncc). However, the transgenic mice do not recapitulate the native Nedd9 expression pattern in presumptive rhombomeres (pr) 3 and 5 of the early hindbrain, the base of the neuroepithelium in the midbrain, nor the forebrain telencephalon. Thus, the 5.2 kb region containing the intact RARE drives a large subset of Nedd9 expression, with additional sequences outside of this region needed to define the full complement of expression. When the 5.2 kb construct is modified (eight point mutations) to eliminate responsiveness of the RARE to all-trans retinoic acid (atRA) [Nedd9(mutRARE)-lacZ], virtually all β-galactosidase (β-gal, lacZ) expression is lost. Exposure of Nedd9(RARE)-lacZ transgenic embryos to excess atRA at embryonic day 8.0 (E8.0) leads to rostral ectopic transgene expression within 6 h whereas the Nedd9(mutRARE)-lacZ mutant does not show this effect. Thus the RARE upstream of the Nedd9 2B promoter is necessary for much of the endogenous gene expression during early development as well as ectopic expression in response to atRA.

  3. Simultaneous analysis of qualitative parameters of solid fuel using complex neutron gamma method

    International Nuclear Information System (INIS)

    Dombrovskij, V.P.; Ajtsev, N.I.; Ryashchikov, V.I.; Frolov, V.K.

    1983-01-01

    A study was made on complex neutron gamma method for simultaneous analysis of carbon content, ash content and humidity of solid fuel according to gamma radiation of inelastic fast neutron scattering and radiation capture of thermal neutrons. Metrological characteristics of pulse and stationary neutron gamma methods for determination of qualitative solid fuel parameters were analyzed, taking coke breeze as an example. Optimal energy ranges of gamma radiation detection (2-8 MeV) were determined. The advantages of using pulse neutron generator for complex analysis of qualitative parameters of solid fuel in large masses were shown

  4. Developments based on stochastic and determinist methods for studying complex nuclear systems; Developpements utilisant des methodes stochastiques et deterministes pour l'analyse de systemes nucleaires complexes

    Energy Technology Data Exchange (ETDEWEB)

    Giffard, F.X

    2000-05-19

    In the field of reactor and fuel cycle physics, particle transport plays and important role. Neutronic design, operation and evaluation calculations of nuclear system make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)

  6. Software Safety Analysis of Digital Protection System Requirements Using a Qualitative Formal Method

    International Nuclear Information System (INIS)

    Lee, Jang-Soo; Kwon, Kee-Choon; Cha, Sung-Deok

    2004-01-01

    The safety analysis of requirements is a key problem area in the development of software for the digital protection systems of a nuclear power plant. When specifying requirements for software of the digital protection systems and conducting safety analysis, engineers find that requirements are often known only in qualitative terms and that existing fault-tree analysis techniques provide little guidance on formulating and evaluating potential failure modes. A framework for the requirements engineering process is proposed that consists of a qualitative method for requirements specification, called the qualitative formal method (QFM), and a safety analysis method for the requirements based on causality information, called the causal requirements safety analysis (CRSA). CRSA is a technique that qualitatively evaluates causal relationships between software faults and physical hazards. This technique, extending the qualitative formal method process and utilizing information captured in the state trajectory, provides specific guidelines on how to identify failure modes and the relationship among them. The QFM and CRSA processes are described using shutdown system 2 of the Wolsong nuclear power plants as the digital protection system example

  7. IMPACT OF MATRIX INVERSION ON THE COMPLEXITY OF THE FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    M. Sybis

    2016-04-01

Full Text Available Purpose. The development of a wide construction market and a desire to design innovative architectural building constructions has resulted in the need to create complex numerical models of objects with increasingly high computational complexity. The purpose of this work is to show that choosing a proper method for solving the set of equations can improve the calculation time (reduce the complexity) by a few orders of magnitude. Methodology. The article presents an analysis of the impact of the matrix inversion algorithm on the deflection calculation in a beam, using the finite element method (FEM). Based on a literature analysis, common methods of solving sets of equations were identified. From these, the Gaussian elimination, LU and Cholesky decomposition methods have been implemented to determine the effect of the algorithm used for solving the equation set on the number of computational operations performed. In addition, each of the implemented methods has been further optimized, thereby reducing the number of necessary arithmetic operations. Findings. These optimizations exploit certain properties of the matrix, such as symmetry or a significant number of zero elements. The results of the analysis are presented for divisions of the beam into 5, 50, 100 and 200 nodes, for which the deflection has been calculated. Originality. The main achievement of this work is that it shows the impact of the chosen methodology on the complexity of solving the problem (or equivalently, the time needed to obtain results). Practical value. The difference between the best (least complex) and the worst (most complex) methods amounts to a few orders of magnitude. This result shows that choosing the wrong methodology may significantly enlarge the time needed to perform the calculation.
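The kind of saving quantified above can be sketched on the classic one-dimensional model problem. This is a generic banded solver (the Thomas algorithm, i.e. LU specialized to a tridiagonal matrix, running in O(n) instead of the O(n³) of dense elimination), not the authors' implementation; the beam is simplified to the 1-D problem -u'' = 1 with fixed ends, whose linear-FEM system is tridiagonal.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d in O(n) operations --
    the banded analogue of the LU/Cholesky eliminations discussed above."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Model problem -u'' = 1, u(0) = u(1) = 0, linear finite elements:
# stiffness matrix (1/h) * tridiag(-1, 2, -1), load h per interior node.
n = 99                       # interior nodes
h = 1.0 / (n + 1)
u = thomas_solve([-1.0 / h] * n, [2.0 / h] * n, [-1.0 / h] * n, [h] * n)
print(u[49])                 # node at x = 0.5; exact solution x(1-x)/2 gives 0.125
```

For this model problem the nodal FEM values coincide with the exact deflection, so the banded solve reproduces 0.125 at the midpoint to machine precision while touching only O(n) entries.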

  8. Three-dimensional Cross-Platform Planning for Complex Spinal Procedures: A New Method Adaptive to Different Navigation Systems.

    Science.gov (United States)

    Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf

    2017-08-01

A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions but only few on how to transfer planning data generated in external software. Patients' computerized tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported into Amira software, reconstructed into 3D images and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or surgical procedures, such as trajectories, implants or surgical instruments, were either digitally constructed or computerized tomography scanned and virtually positioned within the 3D model as required. As a crucial step of this method, these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system. The presented method not only allows virtual planning of complex surgical procedures, but to export objects and surgical plans to any navigation or

  9. 40 CFR 63.344 - Performance test requirements and test methods.

    Science.gov (United States)

    2010-07-01

    ... electroplating tanks or chromium anodizing tanks. The sampling time and sample volume for each run of Methods 306... Chromium Anodizing Tanks § 63.344 Performance test requirements and test methods. (a) Performance test... Emissions From Decorative and Hard Chromium Electroplating and Anodizing Operations,” appendix A of this...

  10. Integrating complex functions: coordination of nuclear pore complex assembly and membrane expansion of the nuclear envelope requires a family of integral membrane proteins.

    Science.gov (United States)

    Schneiter, Roger; Cole, Charles N

    2010-01-01

    The nuclear envelope harbors numerous large proteinaceous channels, the nuclear pore complexes (NPCs), through which macromolecular exchange between the cytosol and the nucleoplasm occurs. This double-membrane nuclear envelope is continuous with the endoplasmic reticulum and thus functionally connected to such diverse processes as vesicular transport, protein maturation and lipid synthesis. Recent results obtained from studies in Saccharomyces cerevisiae indicate that assembly of the nuclear pore complex is functionally dependent upon maintenance of lipid homeostasis of the ER membrane. Previous work from one of our laboratories has revealed that an integral membrane protein Apq12 is important for the assembly of functional nuclear pores. Cells lacking APQ12 are viable but cannot grow at low temperatures, have aberrant NPCs and a defect in mRNA export. Remarkably, these defects in NPC assembly can be overcome by supplementing cells with a membrane fluidizing agent, benzyl alcohol, suggesting that Apq12 impacts the flexibility of the nuclear membrane, possibly by adjusting its lipid composition when cells are shifted to a reduced temperature. Our new study now expands these findings and reveals that an essential membrane protein, Brr6, shares at least partially overlapping functions with Apq12 and is also required for assembly of functional NPCs. A third nuclear envelope membrane protein, Brl1, is related to Brr6, and is also required for NPC assembly. Because maintenance of membrane homeostasis is essential for cellular survival, the fact that these three proteins are conserved in fungi that undergo closed mitoses, but are not found in metazoans or plants, may indicate that their functions are performed by proteins unrelated at the primary sequence level to Brr6, Brl1 and Apq12 in cells that disassemble their nuclear envelopes during mitosis.

  11. An image overall complexity evaluation method based on LSD line detection

    Science.gov (United States)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

In the artificial world, the city's traffic roads and engineering buildings alike contain many linear features. Therefore, research on the image complexity of linear information has become an important direction in the digital image processing field. In this paper, by detecting the straight-line information in the image and using the straight lines as parameter indices, we establish a quantitative and accurate mathematical relationship. We use the LSD line detection algorithm, which has a good straight-line detection effect, to detect the lines, and divide the detected lines according to an expert consultation strategy. Then we use a neural network to carry out weight training and obtain the weight coefficients of the indices. The image complexity is calculated by the complexity calculation model. The experimental results show that the proposed method is effective. The number of straight lines in the image, their degree of dispersion, uniformity and so on all affect the complexity of the image.
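A weighted line-based complexity score of the kind the abstract describes can be sketched as follows. This is an illustrative model only: the LSD detector itself is not reimplemented (segments are given as coordinate tuples), and the feature normalizations and the weights `w` are hypothetical placeholders for the coefficients the paper trains with a neural network.

```python
import math
import random

def line_features(segments):
    """Per-image indices from detected line segments, given as
    (x1, y1, x2, y2) tuples: count, mean length, orientation dispersion."""
    lengths = [math.hypot(x2 - x1, y2 - y1) for x1, y1, x2, y2 in segments]
    angles = [math.atan2(y2 - y1, x2 - x1) % math.pi for x1, y1, x2, y2 in segments]
    mean_a = sum(angles) / len(angles)
    disp = math.sqrt(sum((a - mean_a) ** 2 for a in angles) / len(angles))
    return len(segments), sum(lengths) / len(lengths), disp

def complexity(segments, w=(0.5, 0.2, 0.3)):
    """Weighted linear combination of the indices, each squashed into [0, 1]
    with soft caps (the caps and weights are illustrative)."""
    count, mean_len, disp = line_features(segments)
    return (w[0] * min(count / 100.0, 1.0)
            + w[1] * min(mean_len / 100.0, 1.0)
            + w[2] * disp / (math.pi / 2))

rng = random.Random(1)
aligned = [(0, 10 * i, 100, 10 * i) for i in range(5)]                    # few parallel lines
cluttered = [tuple(rng.uniform(0, 100) for _ in range(4)) for _ in range(80)]  # dense clutter
print(complexity(aligned) < complexity(cluttered))  # expect True: clutter scores higher
```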

  12. Computational study of formamide-water complexes using the SAPT and AIM methods

    International Nuclear Information System (INIS)

    Parreira, Renato L.T.; Valdes, Haydee; Galembeck, Sergio E.

    2006-01-01

    In this work, the complexes formed between formamide and water were studied by means of the SAPT and AIM methods. Complexation leads to significant alterations in the geometries and electronic structure of formamide. Intermolecular interactions in the complexes are intense, especially in the cases where the solvent interacts with the carbonyl and amide groups simultaneously. In the transition states, the interaction between the water molecule and the lone pair on the amide nitrogen is also important. In all the complexes studied herein, the electrostatic interactions between formamide and water are the main attractive force, and their contribution may be five times as large as the corresponding contribution from dispersion, and twice as large as the contribution from induction. However, an increase in the resonance of planar formamide with the successive addition of water molecules may suggest that the hydrogen bonds taking place between formamide and water have some covalent character

  13. Decomposition of overlapping protein complexes: A graph theoretical method for analyzing static and dynamic protein associations

    Directory of Open Access Journals (Sweden)

    Guimarães Katia S

    2006-04-01

    Full Text Available Abstract Background Most cellular processes are carried out by multi-protein complexes, groups of proteins that bind together to perform a specific task. Some proteins form stable complexes, while other proteins form transient associations and are part of several complexes at different stages of a cellular process. A better understanding of this higher-order organization of proteins into overlapping complexes is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Results We propose a new method for identifying and representing overlapping protein complexes (or larger units called functional groups within a protein interaction network. We develop a graph-theoretical framework that enables automatic construction of such representation. We illustrate the effectiveness of our method by applying it to TNFα/NF-κB and pheromone signaling pathways. Conclusion The proposed representation helps in understanding the transitions between functional groups and allows for tracking a protein's path through a cascade of functional groups. Therefore, depending on the nature of the network, our representation is capable of elucidating temporal relations between functional groups. Our results show that the proposed method opens a new avenue for the analysis of protein interaction networks.

  14. Protein complex detection in PPI networks based on data integration and supervised learning method.

    Science.gov (United States)

    Yu, Feng; Yang, Zhi; Hu, Xiao; Sun, Yuan; Lin, Hong; Wang, Jian

    2015-01-01

Revealing protein complexes is important for understanding the principles of cellular organization and function. High-throughput experimental techniques have produced a large amount of protein interactions, which makes it possible to predict protein complexes from protein-protein interaction (PPI) networks. However, the small amount of known physical interactions may limit protein complex detection. New PPI networks are constructed by integrating PPI datasets with the large and readily available PPI data from the biomedical literature, and the less reliable PPIs between two proteins are then filtered out based on the semantic similarity and topological similarity of the two proteins. Finally, supervised learning protein complex detection (SLPC), which can make full use of the information in available known complexes, is applied to detect protein complexes on the new PPI networks. The experimental results of SLPC on two different categories of yeast PPI networks demonstrate the effectiveness of the approach: compared with the original PPI networks, the best average improvements of 4.76, 6.81 and 15.75 percentage units in the F-score, accuracy and maximum matching ratio (MMR) are achieved respectively; compared with the denoising PPI networks, the best average improvements of 3.91, 4.61 and 12.10 percentage units in the F-score, accuracy and MMR are achieved respectively; compared with ClusterONE, the state-of-the-art complex detection method, on the denoising extended PPI networks, average improvements of 26.02 and 22.40 percentage units in the F-score and MMR are achieved respectively. The experimental results show that the performance of SLPC improves substantially through the integration of new PPI data from the biomedical literature into the original and denoising PPI networks. In addition, our protein complex detection method can achieve better performance than ClusterONE.
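The edge-filtering step can be sketched with a toy network. This shows only the topological half of the filter (a Jaccard similarity of the two proteins' neighbourhoods); the paper additionally uses semantic similarity, and the threshold, protein names, and function names here are illustrative.

```python
from collections import defaultdict

def jaccard(net, u, v):
    """Topological similarity of two proteins: Jaccard index of their
    neighbourhoods, counting each protein in its own neighbourhood."""
    nu, nv = net[u] | {u}, net[v] | {v}
    return len(nu & nv) / len(nu | nv)

def filter_edges(edges, threshold=0.6):
    """Drop less reliable interactions, keeping an edge only when the two
    endpoints' topological similarity clears `threshold` (illustrative value)."""
    net = defaultdict(set)
    for u, v in edges:
        net[u].add(v)
        net[v].add(u)
    return [(u, v) for u, v in edges if jaccard(net, u, v) >= threshold]

edges = [("A", "B"), ("A", "C"), ("B", "C"),   # a dense triangle: mutually supported
         ("C", "X")]                            # a pendant edge with no shared partners
kept = filter_edges(edges)
print(kept)  # the triangle survives; the weakly supported edge C-X is filtered out
```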

  15. Comparison of Ho and Y complexation data obtained by electromigration methods, potentiometry and spectrophotometry

    International Nuclear Information System (INIS)

    Vinsova, H.; Koudelkova, M.; Ernestova, M.; Jedinakova-Krizova, V.

    2003-01-01

Many holmium and yttrium complex compounds of both organic and inorganic origin have been studied recently from the point of view of their radiopharmaceutical behavior. Complexes with Ho-166 and Y-90 can be either directly used as pharmaceutical preparations or they can be applied in a conjugate form with a selected monoclonal antibody. Appropriate bifunctional chelation agents are necessary in the latter case for indirect binding of the monoclonal antibody and selected radionuclide. Our present study has been focused on the characterization of radionuclide (metal) - ligand interaction using various analytical methods. Electromigration methods (capillary electrophoresis, capillary isotachophoresis), potentiometric titration and spectrophotometry have been tested from the point of view of their potential to determine conditional stability constants of holmium and yttrium complexes. A principle of an isotachophoretic determination of stability constants is based on the linear relation between the logarithm of the stability constant and the reduction of the zone of the complex. For the calculation of thermodynamic constants using potentiometry it was necessary at first to determine the protonation constants of the acid. Those were calculated using the computer program LETAGROP Etitr from data obtained by potentiometric acid-base titration. Consequently, the titration curves of holmium and yttrium with the studied ligands and the protonation constants of the corresponding acid were applied for the calculation of metal-ligand stability constants. Spectrophotometric determination of stability constants of selected systems was based on the titration of holmium and yttrium nitrate solutions by Arsenazo III, followed by the titration of the metal-Arsenazo III complex by the selected ligand. Data obtained have been evaluated using the computation program OPIUM. Results obtained by all analytical methods tested in this study have been compared. It was found that direct potentiometric titration technique could not be

  16. Stress Intensity Factor for Interface Cracks in Bimaterials Using Complex Variable Meshless Manifold Method

    Directory of Open Access Journals (Sweden)

    Hongfen Gao

    2014-01-01

    Full Text Available This paper describes the application of the complex variable meshless manifold method (CVMMM to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.

  17. Dual chromatin recognition by the histone deacetylase complex HCHC is required for proper DNA methylation in Neurospora crassa

    Science.gov (United States)

    Honda, Shinji; Bicocca, Vincent T.; Gessaman, Jordan D.; Rountree, Michael R.; Yokoyama, Ayumi; Yu, Eun Y.; Selker, Jeanne M. L.; Selker, Eric U.

    2016-01-01

    DNA methylation, heterochromatin protein 1 (HP1), histone H3 lysine 9 (H3K9) methylation, histone deacetylation, and highly repeated sequences are prototypical heterochromatic features, but their interrelationships are not fully understood. Prior work showed that H3K9 methylation directs DNA methylation and histone deacetylation via HP1 in Neurospora crassa and that the histone deacetylase complex HCHC is required for proper DNA methylation. The complex consists of the chromodomain proteins HP1 and chromodomain protein 2 (CDP-2), the histone deacetylase HDA-1, and the AT-hook motif protein CDP-2/HDA-1–associated protein (CHAP). We show that the complex is required for proper chromosome segregation, dissect its function, and characterize interactions among its components. Our analyses revealed the existence of an HP1-based DNA methylation pathway independent of its chromodomain. The pathway partially depends on CHAP but not on the CDP-2 chromodomain. CDP-2 serves as a bridge between the recognition of H3K9 trimethylation (H3K9me3) by HP1 and the histone deacetylase activity of HDA-1. CHAP is also critical for HDA-1 localization to heterochromatin. Specifically, the CHAP zinc finger interacts directly with the HDA-1 argonaute-binding protein 2 (Arb2) domain, and the CHAP AT-hook motifs recognize heterochromatic regions by binding to AT-rich DNA. Our data shed light on the interrelationships among the prototypical heterochromatic features and support a model in which dual recognition by the HP1 chromodomain and the CHAP AT-hooks are required for proper heterochromatin formation. PMID:27681634

  18. Computational Experiment Study on Selection Mechanism of Project Delivery Method Based on Complex Factors

    Directory of Open Access Journals (Sweden)

    Xiang Ding

    2014-01-01

Full Text Available Project delivery planning is a key stage used by the project owner (or project investor) for organizing design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method. In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on the PDM. Experimental results show that project owners usually choose the Design-Build method when the project is very complex, within a certain range. Besides, this paper points out that the Design-Build method will be the preferred choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.

  19. Hybrid RANS/LES method for wind flow over complex terrain

    DEFF Research Database (Denmark)

    Bechmann, Andreas; Sørensen, Niels N.

    2010-01-01

for flows at high Reynolds numbers. To reduce the computational cost of traditional LES, a hybrid method is proposed in which the near-wall eddies are modelled in a Reynolds-averaged sense. Close to walls, the flow is treated with the Reynolds-averaged Navier-Stokes (RANS) equations (unsteady RANS...... rough walls. Previous attempts at combining RANS and LES have resulted in unphysical transition regions between the two layers, but the present work improves this region by using a stochastic backscatter model. To demonstrate the ability of the proposed hybrid method, simulations are presented for wind...... the turbulent kinetic energy, whereas the new method captures the high turbulence levels well but underestimates the mean velocity. The presented results are for a relatively mild configuration of complex terrain, but the proposed method can also be used for highly complex terrain where the benefits of the new

  20. Adjust the method of the FMEA to the requirements of the aviation industry

    Directory of Open Access Journals (Sweden)

    Andrzej FELLNER

    2015-12-01

Full Text Available The article presents a summary of current methods used in aviation and rail transport. It also contains a proposal to adjust the FMEA method to the latest requirements of the airline industry. The authors suggest tables of the indicators Zn, Pr and Dt necessary to implement the FMEA method of risk analysis, taking into account current achievements in aerospace and rail safety. Acceptable limits of the RPN number are also proposed, which allow threats to be classified.

  1. Fractional Complex Transform and exp-Function Methods for Fractional Differential Equations

    Directory of Open Access Journals (Sweden)

    Ahmet Bekir

    2013-01-01

    Full Text Available The exp-function method is presented for finding the exact solutions of nonlinear fractional equations. New solutions are constructed in fractional complex transform to convert fractional differential equations into ordinary differential equations. The fractional derivatives are described in Jumarie's modified Riemann-Liouville sense. We apply the exp-function method to both the nonlinear time and space fractional differential equations. As a result, some new exact solutions for them are successfully established.
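The transform mentioned above can be written out explicitly. The following is the standard form of the fractional complex transform used with Jumarie's modified Riemann-Liouville derivative; the wave parameters k and ω are generic, not taken from this particular paper:

```latex
% Fractional complex transform: a wave variable built from fractional powers
\xi = \frac{k\,x^{\beta}}{\Gamma(1+\beta)} + \frac{\omega\,t^{\alpha}}{\Gamma(1+\alpha)},
\qquad u(x,t) = U(\xi),
% so that, with Jumarie's modified Riemann--Liouville derivatives,
D_t^{\alpha} u = \omega\,U'(\xi), \qquad D_x^{\beta} u = k\,U'(\xi),
% which reduces the fractional PDE to an ODE in U(\xi); the exp-function ansatz
U(\xi) = \frac{\sum_{n=-c}^{d} a_n\, e^{n\xi}}{\sum_{m=-p}^{q} b_m\, e^{m\xi}}
% is then substituted and the coefficients a_n, b_m balanced.
```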

  2. NetMHCcons: a consensus method for the major histocompatibility complex class I predictions

    DEFF Research Database (Denmark)

    Karosiene, Edita; Lundegaard, Claus; Lund, Ole

    2012-01-01

    A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depe...... at www.cbs.dtu.dk/services/NetMHCcons, and allows the user in an automatic manner to obtain the most accurate predictions for any given MHC molecule....

  3. A general method for computing the total solar radiation force on complex spacecraft structures

    Science.gov (United States)

    Chan, F. K.

    1981-01-01

    The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.

  4. A method for evaluating the problem complex of choosing the ventilation system for a new building

    DEFF Research Database (Denmark)

    Hviid, Christian Anker; Svendsen, Svend

    2007-01-01

The application of a ventilation system in a new building is a multidimensional complex problem that involves quantifiable and non-quantifiable data like energy consumption, indoor environment, building integration and architectural expression. This paper presents a structured method for evaluat...

  5. Simulation As a Method To Support Complex Organizational Transformations in Healthcare

    NARCIS (Netherlands)

    Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos

    2010-01-01

    In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that

  6. Functional analytic methods in complex analysis and applications to partial differential equations

    International Nuclear Information System (INIS)

    Mshimba, A.S.A.; Tutschke, W.

    1990-01-01

    The volume contains 24 lectures given at the Workshop on Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations held in Trieste, Italy, between 8-19 February 1988, at the ICTP. A separate abstract was prepared for each of these lectures. Refs and figs

  7. Structure of the automated educational-methodical complex on technical disciplines

    Directory of Open Access Journals (Sweden)

    Вячеслав Михайлович Дмитриев

    2010-12-01

Full Text Available The article poses and solves the problem of automating and informatizing the process of training students on the basis of the introduced system-organizational forms, which have collectively received the name of educational-methodical complexes for a discipline.

  8. Global Learning in a Geography Course Using the Mystery Method as an Approach to Complex Issues

    Science.gov (United States)

    Applis, Stefan

    2014-01-01

    In the study which is the foundation of this essay, the question is examined of whether the complexity of global issues can be solved at the level of teaching methodology. In this context, the first qualitative and constructive study was carried out which researches the Mystery Method using the Thinking-Through-Geography approach (David Leat,…

  9. The Visual Orientation Memory of "Drosophila" Requires Foraging (PKG) Upstream of Ignorant (RSK2) in Ring Neurons of the Central Complex

    Science.gov (United States)

    Kuntz, Sara; Poeck, Burkhard; Sokolowski, Marla B.; Strauss, Roland

    2012-01-01

    Orientation and navigation in a complex environment requires path planning and recall to exert goal-driven behavior. Walking "Drosophila" flies possess a visual orientation memory for attractive targets which is localized in the central complex of the adult brain. Here we show that this type of working memory requires the cGMP-dependent protein…

  10. Energy conserving numerical methods for the computation of complex vortical flows

    Science.gov (United States)

    Allaneau, Yves

One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some people consider that as their size becomes smaller and smaller, it becomes increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists took inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, biomimetic design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study precisely the motion of its wings. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to be able to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0 when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to needing an adequate numerical scheme, a high fidelity solution requires many degrees of freedom in the computations to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogoroff scale. Capturing these eddies requires a mesh with on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our

  11. The problem of complex eigensystems in the semianalytical solution for advancement of time in solute transport simulations: a new method using real arithmetic

    Science.gov (United States)

    Umari, Amjad M.J.; Gorelick, Steven M.

    1986-01-01

    In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
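
    The core idea, representing a complex eigensystem entirely in real arithmetic, can be illustrated with a generic sketch (this is not the authors' algorithm, and the matrix below is invented): for a real matrix, complex eigenvalues come in conjugate pairs a ± bi with eigenvectors u ± iv, and the pair can be replaced by a real basis [u | v] together with a real 2×2 block.

```python
import numpy as np

# Illustration of representing a complex eigenpair of a real matrix in
# real arithmetic (a generic sketch, not the paper's algorithm): if
# A (u + i v) = (a + i b)(u + i v), then A [u | v] = [u | v] @ B with
# the real 2x2 block B = [[a, b], [-b, a]].

A = np.array([[0.0, -2.0],
              [1.0,  0.0]])          # real matrix with eigenvalues +/- i*sqrt(2)

w, V = np.linalg.eig(A)
k = np.argmax(w.imag)                 # pick the eigenvalue with b > 0
a, b = w[k].real, w[k].imag
u, v = V[:, k].real, V[:, k].imag

U = np.column_stack([u, v])           # real basis spanning the complex pair
B = np.array([[a, b],
              [-b, a]])               # real 2x2 block replacing a +/- bi

print(np.allclose(A @ U, U @ B))      # the pair is captured in real arithmetic
```

    Applied block by block across a full eigendecomposition, this kind of substitution lets the semianalytical time advancement proceed without complex arithmetic.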

  12. Evaluation of Two Statistical Methods Provides Insights into the Complex Patterns of Alternative Polyadenylation Site Switching

    Science.gov (United States)

    Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng

    2015-01-01

    Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
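
    The discrepancy between the two tests can be reproduced with a toy example (the read counts below are invented, and the trend statistic uses the standard Mantel–Haenszel form M² = (N−1)r², which may differ in detail from the paper's implementation): a shift toward a middle poly(A) site changes usage without changing mean 3'-UTR length, so the independence test fires while the linear trend test does not.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Toy read counts at three poly(A) sites (proximal -> distal) in two
# samples -- invented numbers chosen so that usage shifts toward the
# middle site without changing the average 3'-UTR length, i.e. a
# "complex" switching pattern.
table = np.array([[100,  50, 100],    # sample 1
                  [ 50, 150,  50]])   # sample 2
scores = np.array([1.0, 2.0, 3.0])    # site positions along the 3' UTR

# Independence (chi-square) test: sensitive to any usage change.
chi2_stat, p_indep, _, _ = chi2_contingency(table)

# Linear trend test (Mantel-Haenszel M^2 = (N-1) r^2): sensitive only
# to a monotone shift in mean site position between the samples.
site = np.repeat(np.tile(scores, 2), table.ravel())
group = np.repeat([0, 1], table.sum(axis=1))
r = np.corrcoef(site, group)[0, 1]
N = table.sum()
p_trend = chi2.sf((N - 1) * r ** 2, df=1)

print(f"independence p = {p_indep:.2e}, trend p = {p_trend:.2f}")
# The independence test flags the switch; the trend test misses it.
```
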

  13. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on operator experience is still applied at present, because it is difficult to determine, in an integrated way, the relational data for the forming shape, processing path, and process parameters needed to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and efficient numerical computation method is crucial to the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, based on the characteristics of the strain distributions, a simplified deformation simulation method, in which the deformation is obtained by applying the computed strains, was presented. Compared to the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy with a substantial reduction in calculation time. The application of the simplified deformation simulation method was then further explored for the case of multiple rolling loading paths, and it was also used to calculate the local line rolling forming of a typical complex curvature plate of a ship. The research findings indicate that the simplified deformation simulation method is an effective tool for rapidly obtaining the relationships between forming shape, processing path, and process parameters.

  14. Antioxidant study of quercetin and their metal complex and determination of stability constant by spectrophotometry method.

    Science.gov (United States)

    Ravichandran, R; Rajendran, M; Devapiriam, D

    2014-03-01

    Quercetin has been found to chelate cadmium ions and to scavenge the free radicals produced by cadmium. Hence a new complex of quercetin with cadmium was synthesised, and its structure was characterised by UV-vis spectrophotometry, infrared spectroscopy, thermogravimetry and differential thermal analysis (UV-vis, IR, TGA and DTA). The equilibrium stability constants of the quercetin-cadmium complex were determined by Job's method. The stability constant of the quercetin-cadmium complex is 2.27×10⁶ at pH 4.4 and 7.80×10⁶ at pH 7.4. It was found that quercetin and the cadmium ion form a 1:1 complex at both pH 4.4 and pH 7.4. The structure of the compounds was elucidated on the basis of the results obtained. Furthermore, the antioxidant activities of free quercetin and the quercetin-cadmium complex were determined by DPPH and ABTS assays. Copyright © 2013 Elsevier Ltd. All rights reserved.
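
    How Job's method of continuous variation reads off the 1:1 stoichiometry can be sketched with synthetic data (the idealized absorbance curve below is an assumption for illustration, not the paper's measurements): the total concentration [M] + [L] is held fixed, the metal mole fraction x is varied, and the complex absorbance peaks at the stoichiometric ratio, x = 0.5 for a 1:1 complex.

```python
import numpy as np

# Job's method of continuous variation, sketched with synthetic data:
# for a strong 1:1 complex the complex concentration, and hence the
# corrected absorbance, is proportional to min(x, 1 - x) and peaks at
# the metal mole fraction x = 0.5.

x = np.linspace(0.05, 0.95, 19)       # mole fraction of metal
absorbance = np.minimum(x, 1.0 - x)   # idealized Job curve (no noise)

x_peak = x[np.argmax(absorbance)]
metal_to_ligand = x_peak / (1.0 - x_peak)
print(f"peak at x = {x_peak:.2f} -> M:L = {metal_to_ligand:.1f}:1")
```
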

  15. Optimization of a method for preparing solid complexes of essential clove oil with β-cyclodextrins.

    Science.gov (United States)

    Hernández-Sánchez, Pilar; López-Miranda, Santiago; Guardiola, Lucía; Serrano-Martínez, Ana; Gabaldón, José Antonio; Nuñez-Delicado, Estrella

    2017-01-01

    Clove oil (CO) is an aromatic oily liquid used in the food, cosmetics and pharmaceutical industries for its functional properties. However, its disadvantages of pungent taste, volatility, light sensitivity and poor water solubility can be solved by applying microencapsulation or complexation techniques. Essential CO was successfully solubilized in aqueous solution by forming inclusion complexes with β-cyclodextrins (β-CDs). Moreover, phase solubility studies demonstrated that essential CO also forms insoluble complexes with β-CDs. Based on these results, essential CO-β-CD solid complexes were prepared by the novel approach of microwave irradiation (MWI), followed by three different drying methods: vacuum oven drying (VO), freeze-drying (FD) or spray-drying (SD). FD was the best option for drying the CO-β-CD solid complexes, followed by VO and SD. MWI can be used efficiently to prepare essential CO-β-CD complexes with good yield on an industrial scale. © 2016 Society of Chemical Industry.

  16. MODELS AND METHODS OF SAFETY-ORIENTED PROJECT MANAGEMENT OF DEVELOPMENT OF COMPLEX SYSTEMS: METHODOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Олег Богданович ЗАЧКО

    2016-03-01

    Methods and models of safety-oriented project management for the development of complex systems are proposed, resulting from the convergence of existing project management approaches, in contrast to the mechanism of value-oriented management. A cognitive model of safety-oriented project management of the development of complex systems is developed, which provides a synergistic effect: moving the system from its original (pre-project) condition to one that is optimal from the viewpoint of life safety (the post-project state). An approach to assessing project complexity is proposed, which takes into account the seasonal component of the time characteristics of the life cycles of complex organizational and technical systems with occupancy. This makes it possible to account for the seasonal component in simulation models of the product operation life cycle in complex organizational and technical systems, and to model the critical points of operation of systems with occupancy, forming a new methodology for safety-oriented management of projects, programs and portfolios of projects with formalization of the elements of complexity.

  17. Functional Mitochondrial Complex I Is Required by Tobacco Leaves for Optimal Photosynthetic Performance in Photorespiratory Conditions and during Transients

    Science.gov (United States)

    Dutilleul, Christelle; Driscoll, Simon; Cornic, Gabriel; De Paepe, Rosine; Foyer, Christine H.; Noctor, Graham

    2003-01-01

    The importance of the mitochondrial electron transport chain in photosynthesis was studied using the tobacco (Nicotiana sylvestris) mutant CMSII, which lacks functional complex I. Rubisco activities and oxygen evolution at saturating CO2 showed that photosynthetic capacity in the mutant was at least as high as in wild-type (WT) leaves. Despite this, steady-state photosynthesis in the mutant was reduced by 20% to 30% at atmospheric CO2 levels. The inhibition of photosynthesis was alleviated by high CO2 or low O2. The mutant showed a prolonged induction of photosynthesis, which was exacerbated in conditions favoring photorespiration and which was accompanied by increased extractable NADP-malate dehydrogenase activity. Feeding experiments with leaf discs demonstrated that CMSII had a lower capacity than the WT for glycine (Gly) oxidation in the dark. Analysis of the postillumination burst in CO2 evolution showed that this was not because of insufficient Gly decarboxylase capacity. Despite the lower rate of Gly metabolism in CMSII leaves in the dark, the Gly to Ser ratio in the light displayed a similar dependence on photosynthesis to the WT. It is concluded that: (a) Mitochondrial complex I is required for optimal photosynthetic performance, despite the operation of alternative dehydrogenases in CMSII; and (b) complex I is necessary to avoid redox disruption of photosynthesis in conditions where leaf mitochondria must oxidize both respiratory and photorespiratory substrates simultaneously. PMID:12529534

  18. Recruitment of a SAP18-HDAC1 complex into HIV-1 virions and its requirement for viral replication.

    Directory of Open Access Journals (Sweden)

    Masha Sorin

    2009-06-01

    HIV-1 integrase (IN) is a virally encoded protein required for integration of viral cDNA into host chromosomes. INI1/hSNF5 is a component of the SWI/SNF complex that interacts with HIV-1 IN, is selectively incorporated into HIV-1 (but not other retroviral) virions, and modulates multiple steps, including particle production and infectivity. To gain further insight into the role of INI1 in HIV-1 replication, we screened for INI1-interacting proteins using the yeast two-hybrid system. We found that SAP18 (Sin3a-associated protein, 18 kDa), a component of the Sin3a-HDAC1 complex, directly binds to INI1 in yeast, in vitro and in vivo. Interestingly, we found that IN also binds to SAP18 in vitro and in vivo. SAP18 and components of the Sin3A-HDAC1 complex were specifically incorporated into HIV-1 (but not SIV and HTLV-1) virions in an HIV-1 IN-dependent manner. Using a fluorescence-based assay, we found that HIV-1 (but not SIV) virion preparations harbour significant deacetylase activity, indicating the specific recruitment of catalytically active HDAC into the virions. To determine the contribution of virion-associated HDAC1 to HIV-1 replication, an inactive, transdominant-negative mutant of HDAC1 (HDAC1-H141A) was utilized. Incorporation of HDAC1-H141A decreased the virion-associated histone deacetylase activity. Furthermore, incorporation of HDAC1-H141A decreased the infectivity of HIV-1 (but not SIV) virions. The block in infectivity due to virion-associated HDAC1-H141A occurred specifically at the early reverse transcription stage, while entry of the virions was unaffected. RNA-interference-mediated knock-down of HDAC1 in producer cells resulted in decreased virion-associated HDAC1 activity and a reduction in the infectivity of these virions. These studies indicate that HIV-1 IN and INI1/hSNF5 bind SAP18 and selectively recruit components of the Sin3a-HDAC1 complex into HIV-1 virions. Furthermore, HIV-1 virion-associated HDAC1 is required for efficient early post-entry steps of viral replication.

  19. A complex method of equipment replacement planning. An advanced plan for the replacement of medical equipment.

    Science.gov (United States)

    Dondelinger, Robert M

    2004-01-01

    This complex method of equipment replacement planning is a methodology; it is a means to an end, a process that focuses on the equipment most in need of replacement, rather than an end in itself. It uses data available from the maintenance management database and attempts to quantify the subjective items that are important in making equipment replacement decisions. Like the simple method of the last issue, it is a starting point, albeit an advanced one, which users can modify to fit their particular organization, but the complex method leaves room for expansion. It is based on sound logic and documented facts, is fully defensible during the decision-making process, and will serve your organization well by providing a structure for your equipment replacement planning decisions.

  20. Methods Dealing with Complexity in Selecting Joint Venture Contractors for Large-Scale Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-01-01

    The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to “go it alone.” As a consequence, contractors with diverse strengths and weaknesses form joint ventures to bid cooperatively. Understanding project complexity and deciding on the optimal joint venture contractor is challenging. This paper studies how to select joint venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one based on ideal points and the other on balanced ideal advantages. Both methods consider individual differences in expert judgment as well as contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China demonstrates how to apply the two methods and their advantages.
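
    The ideal-point idea can be sketched with a generic TOPSIS-style ranking (the candidates, attributes, scores and weights below are all invented for illustration; the paper's actual model also aggregates individual expert judgments and a balanced-ideal-advantage variant).

```python
import numpy as np

# Ideal-point (TOPSIS-style) ranking sketch for joint-venture
# contractor selection. All numbers are hypothetical.

scores = np.array([[7.0, 8.0, 6.0, 9.0],    # candidate A
                   [9.0, 6.0, 7.0, 7.0],    # candidate B
                   [6.0, 7.0, 9.0, 6.0]])   # candidate C
weights = np.array([0.4, 0.2, 0.2, 0.2])    # attribute importance (sums to 1)

# Normalize columns, weight them, then measure distance to the ideal
# (best value per attribute) and the anti-ideal (worst value).
V = weights * scores / np.linalg.norm(scores, axis=0)
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)    # 1 = at the ideal point

ranking = np.argsort(-closeness)
print("ranking (best first):", ranking, "closeness:", closeness.round(3))
```
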

  1. Estimation of very low concentrations of Ruthenium by spectrophotometric method using barbituric acid as complexing agent

    International Nuclear Information System (INIS)

    Ramakrishna Reddy, S.; Srinivasan, R.; Mallika, C.; Kamachi Mudali, U.; Natarajan, R.

    2012-01-01

    Spectrophotometric methods employing numerous chromogenic reagents such as thiourea, 1,10-phenanthroline, thiocyanate and tropolone have been reported in the literature for the estimation of very low concentrations of Ru. In the present work, a sensitive spectrophotometric method has been developed for the determination of ruthenium in the concentration range 1.5 to 6.5 ppm. The method is based on the reaction of ruthenium with barbituric acid to produce ruthenium(II) tris-violurate, [Ru(H₂Va)₃]⁻, a complex which gives a stable deep-red coloured solution. The maximum absorption of the complex is at 491 nm, due to the inverted t₂g → π(L-L ligand) electron-transfer transition. The molar absorptivity of the coloured species is 9,851 dm³ mol⁻¹ cm⁻¹.
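
    The reported molar absorptivity lets a Ru concentration be read directly from a measured absorbance via the Beer-Lambert law. In the sketch below only ε = 9851 dm³ mol⁻¹ cm⁻¹ comes from the abstract; the 1 cm cuvette path and the sample absorbance are assumptions for illustration.

```python
# Beer-Lambert sketch using the molar absorptivity reported above
# (epsilon = 9851 dm^3 mol^-1 cm^-1 at 491 nm); the 1 cm path length
# and the sample absorbance are assumed for illustration.

EPSILON = 9851.0        # dm^3 mol^-1 cm^-1, from the abstract
PATH_CM = 1.0           # assumed cuvette path length
M_RU = 101.07           # g/mol, molar mass of ruthenium

def ru_ppm_from_absorbance(A):
    """Convert absorbance of the violurate complex to Ru ppm (mg/dm^3)."""
    molar = A / (EPSILON * PATH_CM)   # mol/dm^3 via A = epsilon * l * c
    return molar * M_RU * 1000.0      # mg/dm^3 ~= ppm in dilute solution

print(f"{ru_ppm_from_absorbance(0.20):.2f} ppm")  # mid-range absorbance
```

    An absorbance of 0.20 corresponds to roughly 2 ppm Ru, which sits inside the method's stated 1.5-6.5 ppm working range.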

  2. Complex Hand Dexterity: A Review of Biomechanical Methods for Measuring Musical Performance

    Directory of Open Access Journals (Sweden)

    Cheryl Diane Metcalf

    2014-05-01

    Complex hand dexterity is fundamental to our interactions with the physical, social and cultural environment. Dexterity can be an expression of creativity and precision in a range of activities, including musical performance. Little is understood about complex hand dexterity or how virtuoso expertise is acquired, due to the versatility of movement combinations available to complete any given task. This has historically limited progress in the field because of the difficulty of measuring movements of the hand. Recent developments in methods of motion capture and analysis mean it is now possible to explore the intricate movements of the hand and fingers. These methods give us insights into the neurophysiological mechanisms underpinning complex hand dexterity and motor learning. They also allow investigation into the key factors that contribute to injury, recovery and functional compensation. The application of such analytical techniques within musical performance provides a multidisciplinary framework for purposeful investigation into the process of learning and skill acquisition in instrumental performance. These highly skilled manual and cognitive tasks represent the ultimate achievement in complex hand dexterity. This paper reviews methods of assessing instrumental performance in music, focusing specifically on biomechanical measurement and the associated technical challenges faced when measuring highly dexterous activities.

  3. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
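
    The root-mapping step can be illustrated with a toy sketch (the polynomial z³ − 1 and the root-to-value table below are invented; the actual RGDF/EDC scheme searches for the polynomial and value map that reproduce a target file): a point in the complex plane is driven to one of the roots by Newton iteration, and the value tied to that root is emitted.

```python
import numpy as np

# Toy version of the root-mapping step: Newton iteration for
# p(z) = z^3 - 1 sends a starting point to one of the cube roots of
# unity, and the data value assigned to that root is looked up.

roots = np.exp(2j * np.pi * np.arange(3) / 3)   # cube roots of unity
values = {0: 17, 1: 42, 2: 99}                  # hypothetical root -> value map

def map_to_root(z, iters=50):
    """Return the index of the root that Newton iteration converges to."""
    for _ in range(iters):
        z = z - (z ** 3 - 1.0) / (3.0 * z ** 2)
    return int(np.argmin(abs(roots - z)))

# Points seeded near each root converge back to it:
for z0 in (1.1 + 0.1j, -0.6 + 0.9j, -0.6 - 0.9j):
    k = map_to_root(z0)
    print(z0, "->", values[k])
```

    Because Newton basins are fractal away from the roots, the real scheme's accuracy depends on where in the plane each file entry is mapped, which is why an error file accompanies the compressed representation.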

  4. Workshop on Recent Trends in Complex Methods for Partial Differential Equations

    CERN Document Server

    Celebi, A; Tutschke, Wolfgang

    1999-01-01

    This volume is a collection of manuscripts mainly originating from talks and lectures given at the Workshop on Recent Trends in Complex Methods for Partial Differential Equations held from July 6 to 10, 1998 at the Middle East Technical University in Ankara, Turkey, sponsored by The Scientific and Technical Research Council of Turkey and the Middle East Technical University. This workshop is a continuation of two workshops from 1988 and 1993 at the International Centre for Theoretical Physics in Trieste, Italy, entitled Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations. Since classical complex analysis of one and several variables has a long tradition, it has reached a high level of development. But most of its basic problems are solved nowadays, so that within the last few decades it has received less and less attention. The area of complex and functional analytic methods in partial differential equations, however, is still a growing and flourishing field, in particular as these ...

  5. Cork-resin ablative insulation for complex surfaces and method for applying the same

    Science.gov (United States)

    Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)

    1980-01-01

    A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.

  6. Complex-valued derivative propagation method with approximate Bohmian trajectories: Application to electronic nonadiabatic dynamics

    Science.gov (United States)

    Wang, Yu; Chou, Chia-Chun

    2018-05-01

    The coupled complex quantum Hamilton-Jacobi equations for electronic nonadiabatic transitions are approximately solved by propagating individual quantum trajectories in real space. Equations of motion are derived through use of the derivative propagation method for the complex actions and their spatial derivatives for wave packets moving on each of the coupled electronic potential surfaces. These equations for two surfaces are converted into the moving frame with the same grid point velocities. Excellent wave functions can be obtained by making use of the superposition principle even when nodes develop in wave packet scattering.

  7. Functional Mobility Testing: A Novel Method to Create Suit Design Requirements

    Science.gov (United States)

    England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar L.

    2008-01-01

    This study was performed to aid in the creation of design requirements for the next generation of space suits that more accurately describe the level of mobility necessary for a suited crewmember, through an innovative methodology utilizing functional mobility. A novel method was used involving the collection of kinematic data while 20 subjects (10 male, 10 female) performed pertinent functional tasks that will be required of a suited crewmember during various phases of a lunar mission. These tasks were selected, based on relevance and criticality, from a larger list of tasks that may be carried out by the crew. Kinematic data were processed through Vicon BodyBuilder software to calculate joint angles for the ankle, knee, hip, torso, shoulder, elbow, and wrist. Maximum functional mobility was consistently lower than maximum isolated mobility. This study suggests that conventional methods for establishing design requirements for human-systems interfaces based on maximal isolated joint capabilities may overestimate the required mobility. Additionally, this method provides a valuable means of evaluating systems created from these requirements by comparing the mobility available in a new spacesuit, or the mobility required to use a new piece of hardware, to this newly established database of functional mobility.

  8. Comparison of different methods to extract the required coefficient of friction for level walking.

    Science.gov (United States)

    Chang, Wen-Ruey; Chang, Chien-Chi; Matz, Simon

    2012-01-01

    The required coefficient of friction (RCOF) is an important predictor for slip incidents. Despite the wide use of the RCOF there is no standardised method for identifying the RCOF from ground reaction forces. This article presents a comparison of the outcomes from seven different methods, derived from those reported in the literature, for identifying the RCOF from the same data. While commonly used methods are based on a normal force threshold, percentage of stance phase or time from heel contact, a newly introduced hybrid method is based on a combination of normal force, time and direction of increase in coefficient of friction. Although no major differences were found with these methods in more than half the strikes, significant differences were found in a significant portion of strikes. Potential problems with some of these methods were identified and discussed and they appear to be overcome by the hybrid method. No standard method exists for determining the required coefficient of friction (RCOF), an important predictor for slipping. In this study, RCOF values from a single data set, using various methods from the literature, differed considerably for a significant portion of strikes. A hybrid method may yield improved results.
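
    One family of the methods compared, thresholding on the normal force before taking the peak coefficient of friction, can be sketched as follows. The force traces and the 100 N threshold below are synthetic assumptions for illustration, not the paper's data or its hybrid method.

```python
import numpy as np

# Minimal sketch of a normal-force-threshold method for extracting the
# RCOF from ground reaction forces (one family of methods compared in
# the paper; the traces and the threshold are synthetic).

t = np.linspace(0.0, 0.6, 601)                       # one stance phase, s
Fn = 800.0 * np.sin(np.pi * t / 0.6)                 # vertical (normal) force, N
Fs = 0.18 * Fn * np.exp(-((t - 0.1) / 0.08) ** 2)    # shear force, N

THRESHOLD_N = 100.0                                  # ignore low-load samples
valid = Fn > THRESHOLD_N
cof = Fs[valid] / Fn[valid]                          # instantaneous COF
rcof = cof.max()                                     # peak COF after contact

print(f"RCOF = {rcof:.3f}")
```

    The threshold matters because the COF ratio is numerically unstable when the normal force is near zero at heel contact and toe-off, which is one source of the between-method disagreements the paper documents.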

  9. A novel method for preparation of HAMLET-like protein complexes.

    Science.gov (United States)

    Permyakov, Sergei E; Knyazeva, Ekaterina L; Leonteva, Marina V; Fadeev, Roman S; Chekanov, Aleksei V; Zhadan, Andrei P; Håkansson, Anders P; Akatov, Vladimir S; Permyakov, Eugene A

    2011-09-01

    Some natural proteins induce tumor-selective apoptosis. α-Lactalbumin (α-LA), a milk calcium-binding protein, is converted into an antitumor form, called HAMLET/BAMLET, via partial unfolding and association with oleic acid (OA). Besides triggering multiple cell death mechanisms in tumor cells, HAMLET exhibits bactericidal activity against Streptococcus pneumoniae. The existing methods for preparation of active complexes of α-LA with OA employ neutral pH solutions, which greatly limit water solubility of OA. Therefore these methods suffer from low scalability and/or heterogeneity of the resulting α-LA–OA samples. In this study we present a novel method for preparation of α-LA–OA complexes using alkaline conditions that favor aqueous solubility of OA. The unbound OA is removed by precipitation under acidic conditions. The resulting sample, bLA-OA-45, bears 11 OA molecules and exhibits physico-chemical properties similar to those of BAMLET. Cytotoxic activities of bLA-OA-45 against human epidermoid larynx carcinoma and S. pneumoniae D39 cells are close to those of HAMLET. Treatment of S. pneumoniae with bLA-OA-45 or HAMLET induces depolarization and rupture of the membrane. The cells are markedly rescued from death upon pretreatment with an inhibitor of Ca²⁺ transport. Hence, the activation mechanisms of S. pneumoniae death are analogous for these two complexes. The developed express method for preparation of active α-LA–OA complex is high-throughput and suited for development of other protein complexes with low-molecular-weight amphiphilic substances possessing valuable cytotoxic properties. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  10. A low complexity method for the optimization of network path length in spatially embedded networks

    International Nuclear Information System (INIS)

    Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li; Ming, Yong; Chen, Sheng-Yong; Wang, Wan-Liang

    2014-01-01

    The average path length of a network is an important index reflecting the network transmission efficiency. In this paper, we propose a new method of decreasing the average path length by adding edges. A new indicator is presented, incorporating traffic flow demand, to assess the decrease in the average path length when a new edge is added during the optimization process. With the help of the indicator, edges are selected and added into the network one by one. The new method has a relatively small time computational complexity in comparison with some traditional methods. In numerical simulations, the new method is applied to some synthetic spatially embedded networks. The result shows that the method can perform competitively in decreasing the average path length. Then, as an example of an application of this new method, it is applied to the road network of Hangzhou, China. (paper)
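
    The quantity being optimized can be shown with a toy example. The brute-force scan below evaluates every candidate edge on a small ring graph, assuming uniform traffic demand between all node pairs; it deliberately ignores the paper's low-complexity indicator and is only meant to show the effect of adding one well-chosen edge.

```python
from collections import deque
from itertools import combinations

# Toy version of shortening the average path length by adding one edge
# (brute-force candidate scan, NOT the paper's indicator method).

def avg_path_length(n, edges):
    """Average shortest-path length over all node pairs, via BFS."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total = pairs = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for nb in adj[u]:
                if nb not in dist:
                    dist[nb] = dist[u] + 1
                    q.append(nb)
        total += sum(d for v, d in dist.items() if v > s)
        pairs += sum(1 for v in dist if v > s)
    return total / pairs

n = 8
ring = [(i, (i + 1) % n) for i in range(n)]          # 8-node ring network
existing = {frozenset(e) for e in ring}
candidates = [e for e in combinations(range(n), 2)
              if frozenset(e) not in existing]
best = min(candidates, key=lambda e: avg_path_length(n, ring + [e]))

print("before:", avg_path_length(n, ring))
print("add", best, "->", avg_path_length(n, ring + [best]))
```

    The scan costs one all-pairs BFS per candidate edge, which is exactly the expense the paper's flow-aware indicator is designed to avoid.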

  11. Investigation of rare earth complexes with pyridoxalydenamino acids by optical methods. Structure of complexes on basis of hydrophobic amino acids

    International Nuclear Information System (INIS)

    Zolin, V.F.; Koreneva, L.G.; Serbinova, T.A.; Tsaryuk, V.I.

    1975-01-01

    The structure of pyridoxalidene amino acid complexes was studied by circular dichroism, magnetic circular dichroism and luminescence spectroscopy. It was shown that these are two-ligand complexes, whereby in the case of those based on valine, leucine and isoleucine the chromophores are almost perpendicular to one another. In the case of complexes based on glycine and alanine the co-ordination sphere is strongly deformed. (author)

  12. Mission from Mars - a method for exploring user requirements for children in a narrative space

    DEFF Research Database (Denmark)

    Dindler, Christian; Ludvigsen, Martin; Lykke-Olesen, Andreas

    2005-01-01

    In this paper a particular design method is proposed as a supplement to existing descriptive approaches to current practice studies, especially suitable for gathering requirements for the design of children's technology. The Mission from Mars method was applied during the design of an electronic school bag (eBag). The three-hour collaborative session provides a first-hand insight into children's practice in a fun and intriguing way. The method is proposed as a supplement to existing descriptive design methods for interaction design and children.

  13. Detection of circulating immune complexes in hepatitis by means of a new method employing ¹²⁵I-antibody

    Energy Technology Data Exchange (ETDEWEB)

    Fresco, G F [Genoa Univ. (Italy). Dept. of Internal Medicine

    1978-06-01

    A new RIA method for the detection of circulating immune complexes and antibodies arising in the course of viral hepatitis is described. It involves the use of ¹²⁵I-labeled antibodies and allows the use of immune complex-coated polypropylene tubes. This simple and sensitive procedure takes into account the possibility that the immune complexes may be adsorbed onto the surface of the polypropylene tubes during the period in which the serum remains there.

  14. Spectroscopic methods for aqueous cyclodextrin inclusion complex binding measurement for 1,4-dioxane, chlorinated co-contaminants, and ozone

    Science.gov (United States)

    Khan, Naima A.; Johnson, Michael D.; Carroll, Kenneth C.

    2018-03-01

    Recalcitrant organic contaminants, such as 1,4-dioxane, typically require advanced oxidation process (AOP) oxidants, such as ozone (O3), for their complete mineralization during water treatment. Unfortunately, the use of AOPs can be limited by these oxidants' relatively high reactivities and short half-lives. These drawbacks can be minimized by partial encapsulation of the oxidants within a cyclodextrin cavity to form inclusion complexes. We determined the inclusion complexes of O3 and three common co-contaminants (trichloroethene, 1,1,1-trichloroethane, and 1,4-dioxane) as guest compounds within hydroxypropyl-β-cyclodextrin. Both direct (ultraviolet or UV) and competitive (fluorescence changes with 6-p-toluidine-2-naphthalenesulfonic acid as the probe) methods were used, which gave comparable results for the inclusion constants of these species. Impacts of changing pH and NaCl concentrations were also assessed. Binding constants increased with pH and with ionic strength, which was attributed to variations in guest compound solubility. The results illustrate the versatility of cyclodextrins for inclusion complexation with various types of compounds, binding measurement methods are applicable to a wide range of applications, and have implications for both extraction of contaminants and delivery of reagents for treatment of contaminants in wastewater or contaminated groundwater.
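
    For the 1:1 inclusion complexes discussed, a binding constant can be extracted from a titration by the classical Benesi-Hildebrand double-reciprocal plot. The sketch below uses synthetic data with an invented "true" K to show the recovery; it is a generic illustration, not the paper's direct-UV or competitive-fluorescence procedure.

```python
import numpy as np

# Benesi-Hildebrand estimate of a 1:1 host-guest binding constant from
# a titration (synthetic data, invented K). For
#   dA = dA_inf * K * [CD] / (1 + K * [CD]),
# plotting 1/dA against 1/[CD] gives a line with K = intercept / slope.

K_TRUE, DA_INF = 250.0, 0.80                  # hypothetical ground truth
cd = np.array([1, 2, 4, 8, 16, 32]) * 1e-3    # cyclodextrin conc., mol/L
dA = DA_INF * K_TRUE * cd / (1.0 + K_TRUE * cd)

slope, intercept = np.polyfit(1.0 / cd, 1.0 / dA, 1)
K_fit = intercept / slope
print(f"fitted K = {K_fit:.1f} L/mol (true {K_TRUE})")
```

    With real (noisy) data a direct nonlinear fit of the isotherm is usually preferred, since the double-reciprocal transform amplifies error at low concentrations.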

  15. Energy-based method for near-real time modeling of sound field in complex urban environments.

    Science.gov (United States)

    Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A

    2012-12-01

    Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no real description of the phenomena at issue has been given. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustics (GA) solutions, showing good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the feasibility of accurately predicting the sound field in complex urban environments in near-real-time simulations.
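    As a minimal sketch of the modeling approach (not the authors' code), the acoustic diffusion equation ∂w/∂t = D∇²w − σw + q can be stepped explicitly on a small 3-D grid. The coefficients, grid and source below are illustrative values, and boundaries are treated as periodic for brevity:

```python
import numpy as np

# Explicit finite-difference sketch of dw/dt = D*laplacian(w) - sigma*w + q
# for the acoustic energy density w. D, sigma, grid size and the point
# source are illustrative, not taken from the paper. The time step
# satisfies the stability bound D*dt/dx**2 <= 1/6 for 3-D diffusion.

n, dx, dt = 16, 1.0, 0.002         # grid points per axis, spacing (m), step (s)
D, sigma = 50.0, 0.01              # diffusion coeff. (m^2/s), absorption (1/s)
w = np.zeros((n, n, n))            # acoustic energy density
src = (n // 2, n // 2, n // 2)     # point source in the room centre

for _ in range(200):
    # 7-point Laplacian with periodic wrap-around (np.roll) for simplicity
    lap = (-6 * w
           + np.roll(w, 1, 0) + np.roll(w, -1, 0)
           + np.roll(w, 1, 1) + np.roll(w, -1, 1)
           + np.roll(w, 1, 2) + np.roll(w, -1, 2)) / dx**2
    w += dt * (D * lap - sigma * w)
    w[src] += dt * 1.0             # constant source injection

# Energy is highest at the source and decays away from it, as in a diffuse field.
print(w[src] > w[0, 0, 0])
```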

  16. Proposal for Requirement Validation Criteria and Method Based on Actor Interaction

    Science.gov (United States)

    Hattori, Noboru; Yamamoto, Shuichiro; Ajisaka, Tsuneo; Kitani, Tsuyoshi

    We propose requirement validation criteria and a method based on the interaction between actors in an information system. We focus on the cyclical transitions of one actor's situation against another and clarify the observable stimuli and responses based on these transitions. Both actors' situations can be listed in a state transition table, which describes the observable stimuli or responses they send or receive. Examining the interaction between both actors in the state transition tables enables us to detect missing or defective observable stimuli or responses. Typically, this method can be applied to examining the interaction between a resource managed by the information system and its user. As a case study, we analyzed 332 requirement defect reports from an actual system development project in Japan. We found a substantial number of defects involving missing or defective stimuli and responses, which our proposed method can detect when applied in the requirements definition phase. This means that our proposed method leads to a more complete requirements definition.
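    The cross-checking idea can be sketched in code. The following toy example (actor names, states and events are invented, not taken from the paper) flags events that one actor's table sends but the other actor's table never handles:

```python
# Hypothetical sketch of cross-checking two actors' state-transition
# tables: every stimulus one actor can emit should have a matching
# transition in the other actor's table; unmatched events are candidate
# requirement defects. All names below are invented for illustration.

user = {            # (state, event) -> next state, events the user exchanges
    ("idle", "request_book"): "waiting",
    ("waiting", "receive_book"): "reading",
}
library = {         # (state, event) -> next state, events the library handles
    ("ready", "request_book"): "lending",
    # NOTE: no transition delivering "receive_book" back to the user
}

events_user = {event for (_, event) in user}
events_library = {event for (_, event) in library}
missing = events_user - events_library
print(missing)   # events with no counterpart -> candidate requirement defects
```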

  17. Microreactor and method for preparing a radiolabeled complex or a biomolecule conjugate

    Energy Technology Data Exchange (ETDEWEB)

    Reichert, David E; Kenis, Paul J. A.; Wheeler, Tobias D; Desai, Amit V; Zeng, Dexing; Onal, Birce C

    2015-03-17

    A microreactor for preparing a radiolabeled complex or a biomolecule conjugate comprises a microchannel for fluid flow, where the microchannel comprises a mixing portion comprising one or more passive mixing elements, and a reservoir for incubating a mixed fluid. The reservoir is in fluid communication with the microchannel and is disposed downstream of the mixing portion. A method of preparing a radiolabeled complex includes flowing a radiometal solution comprising a metallic radionuclide through a downstream mixing portion of a microchannel, where the downstream mixing portion includes one or more passive mixing elements, and flowing a ligand solution comprising a bifunctional chelator through the downstream mixing portion. The ligand solution and the radiometal solution are passively mixed while in the downstream mixing portion to initiate a chelation reaction between the metallic radionuclide and the bifunctional chelator. The chelation reaction is completed to form a radiolabeled complex.

  18. An Evaluation of Fractal Surface Measurement Methods for Characterizing Landscape Complexity from Remote-Sensing Imagery

    Science.gov (United States)

    Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The rapid increase in digital data volumes from new and existing sensors necessitates efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods (isarithm, variogram, and triangular prism), along with the spatial autocorrelation measurement methods Moran's I and Geary's C, that have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces with higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Like the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as tools for change detection and data mining.
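    The spatial autocorrelation measures mentioned above are easy to state concretely. As an illustrative from-scratch sketch (not ICAMS code), Moran's I for an image under rook adjacency is:

```python
import numpy as np

# Moran's I for a 2-D image with rook (4-neighbour) adjacency:
# I = (n/W) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2
# Positive I: neighbouring pixels are similar; negative I: they alternate.

def morans_i(img):
    x = np.asarray(img, dtype=float)
    n = x.size
    dev = x - x.mean()
    num = 0.0   # sum over neighbour pairs of dev_i * dev_j
    w = 0       # total number of neighbour links (each pair counted twice)
    rows, cols = x.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += dev[i, j] * dev[ni, nj]
                    w += 1
    return (n / w) * num / (dev ** 2).sum()

smooth = np.array([[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]])
checker = np.indices((4, 4)).sum(axis=0) % 2   # alternating 0/1 pattern
print(morans_i(smooth) > 0)    # smooth blocks: positive autocorrelation
print(morans_i(checker) < 0)   # checkerboard: negative autocorrelation
```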

  19. Guidance and methods for satisfying low specific activity material and surface contaminated object regulatory requirements

    International Nuclear Information System (INIS)

    Pope, R.B.; Shappert, L.B.; Michelhaugh, R.D.; Boyle, R.W.; Easton, E.P.; Cook, J.R.

    1998-01-01

    The U.S. Department of Transportation (DOT) and the U.S. Nuclear Regulatory Commission (NRC) have prepared a comprehensive set of draft guidance for shippers and inspectors to use when applying the newly imposed regulatory requirements for low specific activity (LSA) material and surface contaminated objects (SCOs). These requirements represent, in some areas, significant departures from the manner in which these materials and objects were regulated by earlier versions of the regulations. Proper interpretation and application of the regulatory criteria can require a fairly complex set of decisions. To assist those applying these regulatory requirements, a detailed set of logic-flow diagrams representing decisions related to multiple factors was prepared and included in the draft report for comment, Categorizing and Transporting Low Specific Activity Materials and Surface Contaminated Objects (DOT/NRC, 1997). These logic-flow diagrams, as developed, are specific to the U.S. regulations, but were readily adaptable to the IAEA regulations. The diagrams have been modified accordingly and tied directly to specific paragraphs in IAEA Safety Series No. 6. This paper provides the logic-flow diagrams adapted to the IAEA regulations, and demonstrates how these diagrams can be used to assist consignors and inspectors in assessing compliance of shipments with the LSA material and SCO regulatory requirements. (authors)

  20. EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST

    Directory of Open Access Journals (Sweden)

    C. H. Tu

    2012-07-01

    Full Text Available The prediction of species distribution has become a focus in ecology. To obtain more effective and accurate predictions, some novel methods have been proposed recently, such as support vector machines (SVM) and maximum entropy (MAXENT). However, the high complexity of forests such as those in Taiwan makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in a complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), which grows widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLC. We evaluated these models using two sets of independent samples from different sites and examined the effect of forest complexity by changing the background sample size (BSZ). In a forest with low complexity (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limited the predicted habitat to areas close to the samples. Therefore, the MAXENT model was more applicable for predicting a species' potential habitat in a complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLC.
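    For reference, the kappa statistic used above to score the models measures agreement between predicted and observed presence/absence beyond chance. A from-scratch sketch for a binary confusion matrix (the counts below are invented, not the study's data):

```python
import numpy as np

# Cohen's kappa from a confusion matrix:
# kappa = (p_obs - p_exp) / (1 - p_exp), where p_obs is observed agreement
# and p_exp is the agreement expected by chance from the marginals.

def cohens_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_obs = np.trace(cm) / n                       # observed agreement
    p_exp = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

#            predicted: absent  present
cm = [[40, 10],    # observed absent
      [ 5, 45]]    # observed present
print(round(cohens_kappa(cm), 2))   # -> 0.7
```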

  1. Colorimetric method for enzymatic screening assay of ATP using Fe(III)-xylenol orange complex formation.

    Science.gov (United States)

    Ishida, Akihiko; Yamada, Yasuko; Kamidate, Tamio

    2008-11-01

    In hygiene management, there has recently been a significant need for methods to screen for microbial contamination by visual observation or with commonly used colorimetric apparatus. The amount of adenosine triphosphate (ATP) can serve as an index of microorganisms. This paper describes the development of a colorimetric method for the assay of ATP, using enzymatic cycling and Fe(III)-xylenol orange (XO) complex formation. The color characteristics of the Fe(III)-XO complexes, which show a distinct color change from yellow to purple, assist visual observation in screening work. In this method, a trace amount of ATP is converted to pyruvate, which is further amplified exponentially by coupled enzymatic reactions. Eventually, pyruvate is converted to the Fe(III)-XO complexes through the pyruvate oxidase reaction and Fe(II) oxidation. As the assay result, a yellow or purple color is observed: yellow indicates that the ATP concentration is lower than the test criterion, and purple indicates that it is higher. The method was applied to the assay of ATP extracted from Escherichia coli cells added to cow milk.

  2. Network reliability analysis of complex systems using a non-simulation-based method

    International Nuclear Information System (INIS)

    Kim, Youngsuk; Kang, Won-Hee

    2013-01-01

    Civil infrastructures such as transportation, water supply, sewer, telecommunications, and electrical and gas networks often form highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to support proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used to decompose any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN, to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
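    For contrast with the non-simulation-based RDA, the brute-force definition of two-terminal network reliability can be written directly: sum the probabilities of all edge-state combinations that leave the source and terminal connected. This sketch (with an invented toy network) is exponential in the edge count, which is exactly the cost that RDA's decomposition is designed to avoid:

```python
from itertools import product

# Exhaustive-enumeration definition of two-terminal reliability for a
# network with independent edge failures. Only usable on toy networks;
# shown to make precise what RDA computes far more efficiently.

def connected(up_edges, s, t):
    """Is t reachable from s over the currently working (up) edges?"""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in up_edges:
            if a == u and b not in seen:
                seen.add(b); stack.append(b)
            elif b == u and a not in seen:
                seen.add(a); stack.append(a)
    return t in seen

def two_terminal_reliability(edges, s, t):
    """edges: list of (u, v, p) with independent working probability p."""
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        prob, up = 1.0, []
        for (u, v, p), works in zip(edges, states):
            prob *= p if works else 1.0 - p
            if works:
                up.append((u, v))
        if connected(up, s, t):
            total += prob
    return total

# Two parallel s-t edges, each working with probability 0.9:
parallel = [("s", "t", 0.9), ("s", "t", 0.9)]
print(two_terminal_reliability(parallel, "s", "t"))  # 1 - 0.1**2 = 0.99
```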

  3. Assessment of exposure to the Penicillium glabrum complex in cork industry using complementing methods.

    Science.gov (United States)

    Viegas, Carla; Sabino, Raquel; Botelho, Daniel; dos Santos, Mateus; Gomes, Anita Quintal

    2015-09-01

    Cork oak is the second most dominant forest species in Portugal and makes this country the world leader in cork export. Occupational exposure to Chrysonilia sitophila and the Penicillium glabrum complex in cork industry is common, and the latter fungus is associated with suberosis. However, as conventional methods seem to underestimate its presence in occupational environments, the aim of our study was to see whether information obtained by polymerase chain reaction (PCR), a molecular-based method, can complement conventional findings and give a better insight into occupational exposure of cork industry workers. We assessed fungal contamination with the P. glabrum complex in three cork manufacturing plants in the outskirts of Lisbon using both conventional and molecular methods. Conventional culturing failed to detect the fungus at six sampling sites in which PCR did detect it. This confirms our assumption that the use of complementing methods can provide information for a more accurate assessment of occupational exposure to the P. glabrum complex in cork industry.

  4. The development of quantitative determination method of organic acids in complex poly herbal extraction

    Directory of Open Access Journals (Sweden)

    I. L. Dyachok

    2016-08-01

    Full Text Available Aim. The development of a sensitive, economical and rapid method for the quantitative determination of organic acids, calculated as isovaleric acid, in a complex poly-herbal extract, with the use of digital technologies. Materials and methods. A model complex poly-herbal extract with sedative action was chosen as the research object. The extract is composed of the following medicinal plants: Valeriana officinalis L., Crataégus, Melissa officinalis L., Hypericum, Mentha piperita L., Húmulus lúpulus, and Viburnum. Based on the chemical composition of the plant components, we consider that the main pharmacologically active compounds in the complex poly-herbal extract are: polyphenolic substances (flavonoids), contained in Crataégus, Viburnum, Hypericum, Mentha piperita L. and Húmulus lúpulus; organic acids, including isovaleric acid, contained in Valeriana officinalis L., Mentha piperita L., Melissa officinalis L. and Viburnum; and amino acids, contained in Valeriana officinalis L. For the determination of organic acids present at low concentration we applied an instrumental method of analysis, namely conductometric titration, based on the dependence of the conductivity of an aqueous solution of the complex poly-herbal extract on its organic acid content. Results. The obtained analytical dependences, which describe the tangent lines to the conductometric curve before and after the equivalence point, allow the volume of solution expended in titration to be determined and the quantitative determination of organic acids to be carried out digitally. Conclusion. The proposed method makes it possible to determine the equivalence point and quantify organic acids, calculated as isovaleric acid, with the use of digital technologies, which allows the method as a whole to be computerized.
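    The tangent-line procedure described in the Results can be sketched numerically: fit straight lines to the conductivity readings before and after the break in the titration curve, and take their intersection as the equivalence volume. The data below are synthetic, not the paper's measurements:

```python
import numpy as np

# Conductometric equivalence point from two fitted tangent lines.
# Synthetic readings: conductivity falls while acid is being neutralised,
# then rises once excess titrant accumulates; the break is the endpoint.

vol  = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # titrant, mL
cond = np.array([5.0, 4.0, 3.0, 2.0, 2.6, 3.2, 3.8, 4.4, 5.0])   # conductivity

m1, b1 = np.polyfit(vol[:4], cond[:4], 1)   # branch before the equivalence point
m2, b2 = np.polyfit(vol[4:], cond[4:], 1)   # branch after the equivalence point
v_eq = (b2 - b1) / (m1 - m2)                # intersection of the two tangents
print(round(v_eq, 2))                       # -> 1.5 mL of titrant
```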

  5. The influence of atomic number on the complex formation constants by visible spectrophotometric method

    International Nuclear Information System (INIS)

    Samin; Kris-Tri-Basuki; Farida-Ernawati

    1996-01-01

    The influence of atomic number on complex formation constants, and its application, has been studied by visible spectrophotometry. Complex compounds of Y, Nd, Sm and Gd with alizarin red S (sulfonate) were prepared in the mole fraction range 0.20-0.53 and pH range 3.5-5. The optimum conditions for complex formation were found at mole fractions of 0.30-0.53, pH 3.75-5, and a total concentration of 0.00030 M. The formation constants (β) of the alizarin red S complexes, determined by the continuous variation and matrix decomposition techniques, were β = (7.00 ± 0.64)×10⁹ for ₃₉Y, (4.09 ± 0.34)×10⁸ for ₆₀Nd, (7.26 ± 0.42)×10⁸ for ₆₂Sm and (8.38 ± 0.70)×10⁸ for ₆₄Gd. Among the lanthanides, the formation constant thus increases with atomic number (Nd < Sm < Gd), while Y, which has the smallest atomic number (39), gives the largest formation constant. The complexes can be used for sample analysis, with detection limits of 2.2×10⁻⁵ M for Y, 2.9×10⁻⁵ M for Nd, 2.6×10⁻⁵ M for Sm and 2.4×10⁻⁵ M for Gd; the analytical sensitivity order is Y > Gd > Sm > Nd. The Y₂O₃ product obtained from xenotime sand contained 98.96 ± 1.40 % Y₂O₃, and the filtrate (product of monazite sand) contained 0.27 ± 0.002 M Nd.

  6. Detection of circulating immune complexes in breast cancer and melanoma by three different methods

    Energy Technology Data Exchange (ETDEWEB)

    Krapf, F; Renger, D; Fricke, M; Kemper, A; Schedel, I; Deicher, H

    1982-08-01

    By the simultaneous application of three methods, the C1q-binding assay (C1q-BA), a two-antibody conglutinin-binding ELISA, and polyethylene-glycol 6000 precipitation with subsequent quantitative determination of immunoglobulins and complement factors in the redissolved precipitates (PPLaNT), circulating immune complexes could be demonstrated in the sera of 94% of patients with malignant melanoma and 75% of breast cancer patients. The specific detection rates of the individual methods varied between 23% (C1q-BA) and 46% (PPLaNT), presumably due to the presence of qualitatively different immune complexes in the investigated sera. Accordingly, the simultaneous use of the aforementioned assays resulted in increased diagnostic sensitivity and a doubling of the predictive value. Nevertheless, because of the relatively low incidence of malignant diseases in the total population, and because circulating immune complexes occur with considerable frequency in other, non-malignant diseases, tests for circulating immune complexes must be regarded as less useful parameters in the early diagnosis of cancer.

  7. A ghost-cell immersed boundary method for flow in complex geometry

    International Nuclear Information System (INIS)

    Tseng, Y.-H.; Ferziger, Joel H.

    2003-01-01

    An efficient ghost-cell immersed boundary method (GCIBM) for simulating turbulent flows in complex geometries is presented. Boundary conditions are enforced through a ghost-cell method. The reconstruction procedure allows systematic development of numerical schemes for treating the immersed boundary while preserving the overall second-order accuracy of the base solver. Both Dirichlet and Neumann boundary conditions can be treated. The current ghost-cell treatment is suitable for both staggered and non-staggered Cartesian grids. The accuracy of the current method is validated using flow past a circular cylinder and large eddy simulation of turbulent flow over a wavy surface. Numerical results are compared with experimental data and boundary-fitted grid results. The method is further extended to an existing ocean model (MITGCM) to simulate geophysical flow over a three-dimensional bump. The method is easily implemented, as evidenced by our use of several existing codes.
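    The ghost-cell idea admits a one-dimensional sketch: the value at a grid node inside the solid is set by extrapolation through the boundary point, so that interpolation across the immersed wall recovers the Dirichlet value exactly. The grid, wall position and field values below are illustrative, not from the paper:

```python
import numpy as np

# 1-D ghost-cell enforcement of a Dirichlet condition u = u_wall on an
# immersed wall that sits between two grid nodes. The "ghost" node inside
# the solid takes a linearly extrapolated value so that the interpolated
# field hits u_wall exactly on the wall.

x = np.linspace(0.0, 1.0, 11)     # grid with spacing 0.1
u = x.copy()                      # some fluid field, here u(x) = x
x_wall, u_wall = 0.25, 0.0        # wall between nodes 2 (x=0.2) and 3 (x=0.3)

ghost, mirror = 2, 3              # last solid node, first fluid node
# Line through (x_wall, u_wall) and (x[mirror], u[mirror]):
slope = (u[mirror] - u_wall) / (x[mirror] - x_wall)
u[ghost] = u_wall + slope * (x[ghost] - x_wall)   # negative mirrored value

# Interpolating between ghost and mirror nodes at the wall recovers u_wall:
u_at_wall = u[ghost] + (u[mirror] - u[ghost]) * (x_wall - x[ghost]) / (x[mirror] - x[ghost])
print(u_at_wall)   # ~0, i.e. the Dirichlet value is enforced on the wall
```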

  8. Revitalisation as a Method of Planning Sustainable Development of Old Town Complexes in Historic Towns

    Science.gov (United States)

    Zagroba, Marek; Gawryluk, Dorota

    2017-12-01

    which they were reconstructed afterwards. In consequence, some elements of the original town master plans have been lost. Revitalisation is an approach whose aim is to improve the quality of space and the ability of inner town areas to function. Revitalisation goes beyond the purely spatial factors, and involves broadly understood economic and social considerations. The conclusions drawn from this research pertain to benefits of using the revitalisation method in planning a sustainable development of urban structures. The development and implementation of revitalisation programmes is a very complex process that takes many years and requires an integrated and interdisciplinary team effort. This method allows us to preserve the identity of historic town areas while enabling them to play functions in the contemporary life of a town.

  9. A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.

    Science.gov (United States)

    Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G

    2017-08-01

    Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Evaluation of Irrigation Methods for Highbush Blueberry. I. Growth and Water Requirements of Young Plants

    Science.gov (United States)

    A study was conducted in a new field of northern highbush blueberry (Vaccinium corymbosum L. 'Elliott') to determine the effects of different irrigation methods on growth and water requirements of uncropped plants during the first 2 years after planting. The plants were grown on mulched, raised beds...

  11. 21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?

    Science.gov (United States)

    2010-04-01

    21 CFR 111.320 (Title 21, Food and Drugs; revised as of April 1, 2010): What requirements apply to laboratory methods for testing and examination? Section 111.320, Food and Drugs, FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED), FOOD FOR HUMAN CONSUMPTION, CURRENT GOOD MANUFACTURING...

  12. Design requirements, criteria and methods for seismic qualification of CANDU power plants

    International Nuclear Information System (INIS)

    Singh, N.; Duff, C.G.

    1979-10-01

    This report describes the requirements and criteria for the seismic design and qualification of systems and equipment in CANDU nuclear power plants. Acceptable methods and techniques for seismic qualification of CANDU nuclear power plants to mitigate the effects or the consequences of earthquakes are also described. (auth)

  13. New concepts, requirements and methods concerning the periodic inspection of the CANDU fuel channels

    International Nuclear Information System (INIS)

    Denis, J.R.

    1995-01-01

    Periodic inspection of fuel channels is essential for a proper assessment of the structural integrity of these vital reactor components. The development of wet-channel technologies for non-destructive examination (NDE) of pressure tubes, together with the high technical performance and reliability of the CIGAR equipment, has led, in less than 10 years, to the accumulation of a very significant volume of data concerning the flaw mechanisms and structural behaviour of CANDU fuel channels. On this basis, a new form of the CAN/CSA-N285.4 Standard for Periodic Inspection of CANDU Nuclear Power Plant components was elaborated, introducing new concepts and requirements in accord with the powerful NDE methods now available. This paper presents these concepts and requirements, and discusses the NDE methods, presently used or under development, that satisfy them. Specific features of the fuel channel inspections of Cernavoda NGS Unit 1 are also discussed. (author)

  14. An argumentation-based method for managing complex issues in design of infrastructural systems

    International Nuclear Information System (INIS)

    Marashi, Emad; Davis, John P.

    2006-01-01

    The many interacting and conflicting requirements of a wide range of stakeholders are the main sources of complexity in infrastructure and utility systems. We propose a systemic methodology based on negotiation and argumentation to help resolve complex issues and to facilitate options appraisal during the design of such systems. A process-based approach is used to assemble and propagate the evidence on performance and reliability of the system and its components, providing a success measure for different scenarios or design alternatives. The reliability of information sources and experts' opinions is dealt with through an extension of the mathematical theory of evidence. This framework not only helps in capturing the reasoning behind design decisions, but also enables decision-makers to assess and compare the evidential support for each design option.
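    The evidence-combination step can be illustrated with Dempster's rule from the mathematical theory of evidence (Dempster-Shafer) that the authors build on. This is a generic textbook sketch; the frame of discernment and mass values are invented, not from the paper:

```python
# Dempster's rule of combination for two basic mass assignments over
# focal elements represented as frozensets. Mass on non-intersecting
# focal elements is treated as conflict and renormalised away.

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

good = frozenset({"adequate"})
either = frozenset({"adequate", "inadequate"})
expert1 = {good: 0.6, either: 0.4}   # source 1: fairly confident the design is adequate
expert2 = {good: 0.7, either: 0.3}   # source 2: agrees, a bit more strongly
print(combine(expert1, expert2))     # combined belief concentrates on "adequate"
```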

  15. Fault tree construction of hybrid system requirements using qualitative formal method

    International Nuclear Information System (INIS)

    Lee, Jang-Soo; Cha, Sung-Deok

    2005-01-01

    When specifying requirements for software controlling hybrid systems and conducting safety analysis, engineers find that requirements are often known only in qualitative terms and that existing fault tree analysis techniques provide little guidance on formulating and evaluating potential failure modes. In this paper, we propose Causal Requirements Safety Analysis (CRSA) as a technique to qualitatively evaluate the causal relationship between software faults and physical hazards. This technique, extending the qualitative formal method process and utilizing information captured in the state trajectory, provides specific guidelines on how to identify failure modes and the relationships among them. Using a simplified electrical power system as an example, we describe the step-by-step procedure of conducting CRSA. Our experience of applying CRSA to perform fault tree analysis on the requirements for the Wolsong nuclear power plant shutdown system indicates that it is an effective technique for assisting safety engineers.

  16. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Puncher, M.; Birchall, A.; Bull, R. K.

    2012-01-01

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework for calculating these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented with the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses were calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using both the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and the 2.5th and 97.5th percentiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 minutes on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)

  17. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    CSIR Research Space (South Africa)

    Wilke, DN

    2012-07-01

    Full Text Available problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest ... in shape optimization, but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
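    For context, the complex-step method named in the title computes sensitivities from f'(x) ≈ Im f(x + ih)/h, which involves no subtractive cancellation, so the step size can be made extremely small. A standard illustration (not the authors' gradient-only formulation):

```python
import numpy as np

# Complex-step derivative: evaluate f at a complex point x + i*h and read
# the derivative off the imaginary part. Because no difference of nearly
# equal numbers is formed, h can be taken absurdly small (here 1e-30)
# without loss of accuracy, unlike finite differences.

def complex_step_derivative(f, x, h=1e-30):
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)
x0 = 1.3
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))   # analytic derivative
approx = complex_step_derivative(f, x0)
print(abs(approx - exact))   # error at the level of machine precision
```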

  18. A complex linear least-squares method to derive relative and absolute orientations of seismic sensors

    OpenAIRE

    F. Grigoli; Simone Cesca; Torsten Dahm; L. Krieger

    2012-01-01

    Determining the relative orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation for several acquisition setups, including linear arrays of geophones deployed in borehole installations or ocean bottom seismometers deployed at the seafloor. To solve this problem we propose a new inversion method based on a complex linear algebra approach. Relative orientation angles are retrieved by minimizing, in a least-squares sense, the l...
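    The core of the complex-notation trick can be sketched as follows: pack the two horizontal components into one complex trace z = N + iE; a misorientation by angle θ then appears (under this convention) as multiplication by e^{iθ}, and the least-squares rotation is the phase of Σ conj(z_A)·z_B. The data below are synthetic; the published inversion handles noise, amplitude terms and multiple sensors more rigorously:

```python
import numpy as np

# Relative orientation of two horizontal sensor pairs via complex
# least squares. Synthetic example: sensor B records a rotated, slightly
# noisy copy of sensor A's horizontal motion.

rng = np.random.default_rng(0)
n, true_theta = 500, np.deg2rad(35.0)

z_a = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # N + iE at sensor A
z_b = np.exp(1j * true_theta) * z_a                          # B: rotated copy of A
z_b += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # noise

# Least-squares rotation minimising |z_b - e^{i*theta} z_a|^2:
theta_hat = np.angle(np.sum(np.conj(z_a) * z_b))
print(np.rad2deg(theta_hat))   # close to the true 35 degrees
```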

  19. Adiabatic passage for a lossy two-level quantum system by a complex time method

    International Nuclear Information System (INIS)

    Dridi, G; Guérin, S

    2012-01-01

    Using a complex time method with the formalism of Stokes lines, we establish a generalization of the Davis–Dykhne–Pechukas formula which gives in the adiabatic limit the transition probability of a lossy two-state system driven by an external frequency-chirped pulse-shaped field. The conditions that allow this generalization are derived. We illustrate the result with the dissipative Allen–Eberly and Rosen–Zener models. (paper)

  20. A new sub-equation method applied to obtain exact travelling wave solutions of some complex nonlinear equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using new coupled Riccati equations, a direct algebraic method for obtaining exact travelling wave solutions of complex nonlinear equations is improved. Exact travelling wave solutions of the complex KdV equation, Boussinesq equation and Klein-Gordon equation are then derived using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.

  1. Comprehension of complex biological processes by analytical methods: how far can we go using mass spectrometry?

    International Nuclear Information System (INIS)

    Gerner, C.

    2013-01-01

    Comprehensive understanding of complex biological processes is the basis for many biomedical issues of great relevance to modern society, including risk assessment, drug development, quality control of industrial products and many more. Screening methods provide means for investigating biological samples without a prior research hypothesis. However, the first boom of analytical screening efforts has passed, and we again need to ask whether and how to apply screening methods. Mass spectrometry is a modern tool with unrivalled analytical capacities. This applies to all relevant characteristics of analytical methods, such as specificity, sensitivity, accuracy, multiplicity and diversity of applications. Indeed, mass spectrometry is well qualified to deal with complexity. Chronic inflammation is a common feature of almost all relevant diseases challenging our modern society; these diseases are highly diverse and include arteriosclerosis, cancer, back pain, neurodegenerative diseases, depression and others. The complexity of the mechanisms regulating chronic inflammation is the reason it is so challenging to deal with in practice. The presentation gives an overview of the capabilities and limitations of applying this analytical tool to critical questions of great relevance to our society. (author)

  2. Validation of Experimental whole-body SAR Assessment Method in a Complex Indoor Environment

    DEFF Research Database (Denmark)

    Bamba, Aliou; Joseph, Wout; Vermeeren, Gunter

    2012-01-01

    Experimentally assessing the whole-body specific absorption rate (SARwb) in a complex indoor environment is very challenging. An experimental method based on room electromagnetics theory (accounting for only the Line-Of-Sight component as a specular path) to assess the whole-body SAR is validated by numerical...... of the proposed method is that it avoids the computational burden because it does not use any discretizations. Results show good agreement between measurement and computation at 2.8 GHz, as long as the plane wave assumption is valid, i.e., for large distances from the transmitter. Relative deviations 0...

  3. Accurate and simple measurement method of complex decay schemes radionuclide activity

    International Nuclear Information System (INIS)

    Legrand, J.; Clement, C.; Bac, C.

    1975-01-01

    A simple method for the measurement of activity is described. It consists of using a well-type sodium iodide crystal whose efficiency with monoenergetic photon rays has been computed or measured. For each radionuclide with a complex decay scheme a total efficiency is computed; it is shown that the efficiency is very high, near 100%. The associated uncertainty is low, in spite of the large uncertainties on the different parameters used in the computation. The method has been applied to the measurement of the 152Eu primary reference.

  4. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

  5. A method for the determination of ascorbic acid using the iron(II)-pyridine-dimethylglyoxime complex

    International Nuclear Information System (INIS)

    Arya, S. P.; Mahajan, M.

    1998-01-01

    A simple and rapid spectrophotometric method for the determination of ascorbic acid is proposed. Ascorbic acid reduces iron(III) to iron(II), which forms a red-colored complex with dimethylglyoxime in the presence of pyridine. The absorbance of the resulting solution is measured at 514 nm, and a linear relationship between absorbance and concentration of ascorbic acid is observed up to 14 μg ml⁻¹. Studies on the interference of substances usually associated with ascorbic acid have been carried out, and the applicability of the method has been tested by analysing pharmaceutical preparations of vitamin C.
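    The linear absorbance-concentration relationship reported above lends itself to an ordinary least-squares calibration. The sketch below illustrates that workflow; the standard readings are hypothetical numbers chosen to fall inside the reported linear range (up to 14 μg ml⁻¹), not data from the paper.

```python
# Least-squares calibration of absorbance vs. ascorbic acid concentration.
# The readings below are hypothetical illustrations of a linear calibration
# within the reported range (up to 14 ug/ml); they are not from the paper.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical standards: concentration (ug/ml) -> absorbance at 514 nm
conc = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
absorbance = [0.071, 0.142, 0.213, 0.284, 0.355, 0.426, 0.497]

slope, intercept = linear_fit(conc, absorbance)

def concentration_from_absorbance(A):
    """Invert the calibration line to estimate a sample's concentration."""
    return (A - intercept) / slope

print(round(concentration_from_absorbance(0.25), 2))  # -> 7.04
```

An unknown sample's absorbance is then read off against the fitted line, as in the final call above.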

  6. Determination of rhenium in ores of complex composition by the kinetic method

    Energy Technology Data Exchange (ETDEWEB)

    Pavlova, L G; Gurkina, T V [Kazakhskij Gosudarstvennyj Univ., Alma-Ata (USSR); Tsentral'naya Lab. Yuzhno-Kazakhstanskogo Geologicheskogo Upravleniya, Alma-Ata (USSR)]

    1979-09-01

    A kinetic method for rhenium determination is proposed, based on the catalytic effect of rhenium in the reaction of malachite green with thiourea. The accompanying elements, excluding molybdenum, do not interfere with the rhenium determination at concentrations of up to 0.1 M. The interfering influence of molybdenum can be eliminated by adding tartaric acid to the solution up to a concentration of 0.1 M. This makes it possible to determine rhenium in the presence of a 1000-fold quantity of molybdenum. The method is applicable to the analysis of complex copper-zinc sulphide ores.

  7. [TVM (transvaginal mesh) surgical method for complex resolution of pelvic floor defects].

    Science.gov (United States)

    Adamík, Z

    2006-01-01

    Assessment of the effects of a new surgical method for the complex resolution of pelvic floor defects. Case study. Department of Obstetrics and Gynaecology, Bata Hospital, Zlín. We evaluated the procedures and results of the new TVM (transvaginal mesh) surgical method, which we used in a group of 12 patients. Ten patients had vaginal prolapse following vaginal hysterectomy, and in two cases there was uterine prolapse together with vaginal prolapse. In only one case was there a small protrusion, in the range of 0.5 cm, which we resolved by removal of the penetrated section. The resulting anatomic effect was very good in all cases.

  8. Fabrication of advanced Bragg gratings with complex apodization profiles by use of the polarization control method

    DEFF Research Database (Denmark)

    Deyerl, Hans-Jürgen; Plougmann, Nikolai; Jensen, Jesper Bo Damm

    2004-01-01

    The polarization control method offers a flexible, robust, and low-cost route for the parallel fabrication of gratings with complex apodization profiles including several discrete phase shifts and chirp. The performance of several test gratings is evaluated in terms of their spectral response...... and compared with theoretical predictions. Short gratings with sidelobe-suppression levels in excess of 32 dB and transmission dips lower than 80 dB have been realized. Finally, most of the devices fabricated by the polarization control method show comparable quality to gratings manufactured by far more...

  9. The MARVEL domain protein, Singles Bar, is required for progression past the pre-fusion complex stage of myoblast fusion.

    Science.gov (United States)

    Estrada, Beatriz; Maeland, Anne D; Gisselbrecht, Stephen S; Bloor, James W; Brown, Nicholas H; Michelson, Alan M

    2007-07-15

    Multinucleated myotubes develop by the sequential fusion of individual myoblasts. Using a convergence of genomic and classical genetic approaches, we have discovered a novel gene, singles bar (sing), that is essential for myoblast fusion. sing encodes a small multipass transmembrane protein containing a MARVEL domain, which is found in vertebrate proteins involved in processes such as tight junction formation and vesicle trafficking where--as in myoblast fusion--membrane apposition occurs. sing is expressed in both founder cells and fusion competent myoblasts preceding and during myoblast fusion. Examination of embryos injected with double-stranded sing RNA or embryos homozygous for ethane methyl sulfonate-induced sing alleles revealed an identical phenotype: replacement of multinucleated myofibers by groups of single, myosin-expressing myoblasts at a stage when formation of the mature muscle pattern is complete in wild-type embryos. Unfused sing mutant myoblasts form clusters, suggesting that early recognition and adhesion of these cells are unimpaired. To further investigate this phenotype, we undertook electron microscopic ultrastructural studies of fusing myoblasts in both sing and wild-type embryos. These experiments revealed that more sing mutant myoblasts than wild-type contain pre-fusion complexes, which are characterized by electron-dense vesicles paired on either side of the fusing plasma membranes. In contrast, embryos mutant for another muscle fusion gene, blown fuse (blow), have a normal number of such complexes. Together, these results lead to the hypothesis that sing acts at a step distinct from that of blow, and that sing is required on both founder cell and fusion-competent myoblast membranes to allow progression past the pre-fusion complex stage of myoblast fusion, possibly by mediating fusion of the electron-dense vesicles to the plasma membrane.

  10. Evaluating polymer degradation with complex mixtures using a simplified surface area method.

    Science.gov (United States)

    Steele, Kandace M; Pelham, Todd; Phalen, Robert N

    2017-09-01

    Chemical-resistant gloves, designed to protect workers from chemical hazards, are made from a variety of polymer materials such as plastic, rubber, and synthetic rubber. One material does not provide protection against all chemicals, so proper polymer selection is critical. Standardized tests, such as chemical degradation tests, are used to aid in the selection process. The current methods of degradation rating, based on changes in weight or tensile properties, can be expensive, and data often do not exist for complex chemical mixtures. There are hundreds of thousands of chemical products on the market that do not have chemical resistance data to guide polymer selection. The method described in this study provides an inexpensive alternative to gravimetric analysis. It uses surface area change to evaluate degradation of a polymer material. Degradation tests for 5 polymer types against 50 complex mixtures were conducted using both the gravimetric and surface area methods, and the percent change data were compared between the two. The resulting regression line was y = 0.48x + 0.019, in units of percent, and the Pearson correlation coefficient was r = 0.9537 (p ≤ 0.05), which indicated a strong correlation between percent weight change and percent surface area change. On average, the percent change in surface area was about half that of the weight change. Using this information, an equivalent rating system was developed for determining the chemical degradation of polymer gloves from surface area.
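    The reported regression (y = 0.48x + 0.019, both axes in percent) can be used to translate a measured surface-area change into an equivalent weight change. The sketch below assumes, per the note that surface-area change averaged about half the weight change, that y is the surface-area change and x the weight change; the glove-swatch measurements are hypothetical.

```python
# Sketch of converting a measured percent surface-area change into an
# equivalent percent weight change using the regression reported in the
# abstract (y = 0.48x + 0.019, both in percent). Reading y as the
# surface-area change is an assumption here; the swatch numbers are
# hypothetical.

SLOPE = 0.48
INTERCEPT = 0.019  # percent

def percent_change(before, after):
    """Relative change between two measurements, in percent."""
    return 100.0 * (after - before) / before

def equivalent_weight_change(surface_area_change_pct):
    """Invert y = 0.48x + 0.019 to estimate weight change from area change."""
    return (surface_area_change_pct - INTERCEPT) / SLOPE

# Hypothetical glove swatch: 4.00 cm^2 swells to 4.50 cm^2 after exposure.
area_change = percent_change(4.00, 4.50)           # 12.5 %
print(round(equivalent_weight_change(area_change), 1))  # -> 26.0
```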

  11. A Method to Predict the Structure and Stability of RNA/RNA Complexes.

    Science.gov (United States)

    Xu, Xiaojun; Chen, Shi-Jie

    2016-01-01

    RNA/RNA interactions are essential for genomic RNA dimerization and the regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculations required to account for the conformational constraints and intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy of the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and a two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.

  12. PAFit: A Statistical Method for Measuring Preferential Attachment in Temporal Complex Networks.

    Directory of Open Access Journals (Sweden)

    Thong Pham

    Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum-likelihood-based estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because we based it on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show that this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. What is more, we found that the application of PAFit to a publicly available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman's original method which had evidently gone unnoticed since its publication over a decade ago.
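    For intuition about what measuring an attachment kernel involves, the sketch below grows a toy network by linear preferential attachment and estimates A_k by simple counting, in the spirit of the Jeong/Newman-style estimators the abstract compares against. It is not the PAFit likelihood method itself, and the growth model and all parameters are illustrative.

```python
import random

# Toy sketch of measuring a preferential-attachment kernel by counting.
# This is NOT the PAFit maximum-likelihood method; the network model and
# numbers are illustrative assumptions.

random.seed(1)

def grow_and_measure(n_nodes):
    """Grow a graph by linear preferential attachment (one edge per new
    node) and estimate the attachment kernel A_k as
    (attachments received by degree-k nodes) / (node-steps at degree k)."""
    degrees = {0: 1, 1: 1}   # start from a single edge between two nodes
    attach_count = {}        # k -> times a degree-k node received an edge
    exposure = {}            # k -> total node-steps spent at degree k
    for new in range(2, n_nodes):
        # every existing node is "exposed" at its current degree
        for d in degrees.values():
            exposure[d] = exposure.get(d, 0) + 1
        # pick the target with probability proportional to its degree
        targets = list(degrees)
        chosen = random.choices(targets,
                                weights=[degrees[t] for t in targets])[0]
        k = degrees[chosen]
        attach_count[k] = attach_count.get(k, 0) + 1
        degrees[chosen] += 1
        degrees[new] = 1
    return {k: attach_count[k] / exposure[k] for k in attach_count}

kernel = grow_and_measure(3000)
ratio = kernel[2] / kernel[1]
# Under linear preferential attachment, A_k grows roughly linearly in k,
# so the estimated A_2 / A_1 ratio should come out near 2.
print(1.0 < ratio < 4.0)
```

PAFit's contribution, per the abstract, is estimating such a kernel nonparametrically by maximum likelihood rather than by this kind of raw counting.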

  13. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-01-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate-gradient-based reconstruction method. (paper)

  14. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at the time t under the condition that it was down at time t'(t'<=t). The limitations for the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)
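    For orientation, the role minimal cut sets play in such an analysis can be sketched with the standard independence approximation: a cut set is down only when all of its components are down, and the system is down when any cut set is. This is a simplified static approximation, not the paper's integral-equation treatment of repairable systems, and the component unavailabilities below are hypothetical.

```python
# A minimal sketch of combining minimal-cut-set unavailabilities into a
# system unavailability using the independence approximation
# Q_sys = 1 - prod(1 - Q_i). It illustrates the role of minimal cut
# sets; it is NOT the paper's integral-equation method, which also
# tracks repair (the 'delayed semi-regenerative' behaviour).

from math import prod

def cut_set_unavailability(component_qs):
    """A minimal cut set fails only when all its components are down."""
    return prod(component_qs)

def system_unavailability(cut_sets):
    """Union of cut-set failures, assuming independent cut sets."""
    return 1.0 - prod(1.0 - cut_set_unavailability(cs) for cs in cut_sets)

# Hypothetical system with two minimal cut sets {A, B} and {C}
cut_sets = [[0.01, 0.02], [0.001]]
q = system_unavailability(cut_sets)
print(round(q, 6))  # -> 0.0012
```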

  15. Detection of circulating immune complexes by Raji cell assay: comparison of flow cytometric and radiometric methods

    International Nuclear Information System (INIS)

    Kingsmore, S.F.; Crockard, A.D.; Fay, A.C.; McNeill, T.A.; Roberts, S.D.; Thompson, J.M.

    1988-01-01

    Several flow cytometric methods for the measurement of circulating immune complexes (CIC) have recently become available. We report a Raji cell flow cytometric assay (FCMA) that uses aggregated human globulin (AHG) as the primary calibrator. Technical advantages of the Raji cell flow cytometric assay are discussed, and its clinical usefulness is evaluated in a method comparison study with the widely used Raji cell immunoradiometric assay. FCMA is more precise and has greater analytic sensitivity for AHG. Diagnostic sensitivity of the flow cytometric method is superior in systemic lupus erythematosus (SLE), rheumatoid arthritis, and vasculitis patients; diagnostic specificity is similar for both assays, but the reference interval of FCMA is narrower. Significant correlations were found between CIC levels obtained with both methods in SLE, rheumatoid arthritis, and vasculitis patients and in longitudinal studies of two patients with cerebral SLE. The Raji cell FCMA is recommended for the measurement of CIC levels in clinical laboratories with access to a flow cytometer.

  16. A Porosity Method to Describe Complex 3D-Structures Theory and Application to an Explosion

    Directory of Open Access Journals (Sweden)

    M.-F. Robbe

    2006-01-01

    A theoretical method was developed to describe the influence of structures of complex shape on a transient fluid flow without meshing the structures. Structures are considered as solid pores inside the fluid and act as obstacles to the flow. The method was specifically adapted to fast transient cases. The porosity method was applied to the simulation of a Hypothetical Core Disruptive Accident in a small-scale replica of a Liquid Metal Fast Breeder Reactor. A 2D-axisymmetrical simulation of the MARS test was performed with the EUROPLEXUS code. Whereas the central internal structures of the mock-up could be described with a classical shell model, the influence of the 3D peripheral structures was taken into account with the porosity method.

  17. Simultaneous determination of two active components of pharmaceutical preparations by sequential injection method using heteropoly complexes

    Directory of Open Access Journals (Sweden)

    Mohammed Khair E. A. Al-Shwaiyat

    2014-12-01

    A new approach is proposed for the simultaneous determination of two reducing agents, based on the dependence of their reaction rate with the 18-molybdo-2-phosphate heteropoly complex on pH. The method was automated using a manifold typical of sequential injection analysis. Ascorbic acid and rutin were determined by successive injection of two samples acidified to different pH values. The linear range for rutin determination was 0.6-20 mg/L and the detection limit was 0.2 mg/L (l = 1 cm). Rutin could be determined in the presence of up to a 20-fold excess of ascorbic acid. The method was successfully applied to the determination of ascorbic acid and rutin in ascorutin tablets, and its applicability to the determination of total polyphenol content in natural plant samples was shown.

  18. Complex transformation method and resonances in one-body quantum systems

    International Nuclear Information System (INIS)

    Sigal, I.M.

    1984-01-01

    We develop a new spectral deformation method in order to treat the resonance problem in one-body systems. Our result on the meromorphic continuation of matrix elements of the resolvent across the continuous spectrum overlaps considerably with an earlier result of E. Balslev [B] but our method is much simpler and more convenient, we believe, in applications. It is inspired by the local distortion technique of Nuttall-Thomas-Babbitt-Balslev, further developed in [B] but patterned on the complex scaling method of Combes and Balslev. The method is applicable to the multicenter problems in which each potential can be represented, roughly speaking, as a sum of exponentially decaying and dilation-analytic, spherically symmetric parts

  19. Quality assurance requirements and methods for high level waste package acceptability

    International Nuclear Information System (INIS)

    1992-12-01

    This document should serve as guidance for assigning the necessary items to control the conditioning process in such a way that waste packages are produced in compliance with the waste acceptance requirements. It is also provided to promote the exchange of information on quality assurance requirements and on the application of quality assurance methods associated with the production of high level waste packages, to ensure that these waste packages comply with the requirements for transportation, interim storage and waste disposal in deep geological formations. The document is intended to assist both the operators of conditioning facilities and repositories as well as national authorities and regulatory bodies, involved in the licensing of the conditioning of high level radioactive wastes or in the development of deep underground disposal systems. The document recommends the quality assurance requirements and methods which are necessary to generate data for these parameters identified in IAEA-TECDOC-560 on qualitative acceptance criteria, and indicates where and when the control methods can be applied, e.g. in the operation or commissioning of a process or in the development of a waste package design. Emphasis is on the control of the process and little reliance is placed on non-destructive or destructive testing. Qualitative criteria, relevant to disposal of high level waste, are repository dependent and are not addressed here. 37 refs, 3 figs, 2 tabs

  20. Use of the Delphi method in resolving complex water resources issues

    Science.gov (United States)

    Taylor, J.G.; Ryder, S.D.

    2003-01-01

    The tri-state river basins, shared by Georgia, Alabama, and Florida, are being modeled by the U.S. Fish and Wildlife Service and the U.S. Army Corps of Engineers to help facilitate agreement in an acrimonious water dispute among these different state governments. Modeling of such basin reservoir operations requires parallel understanding of several river system components: hydropower production, flood control, municipal and industrial water use, navigation, and reservoir fisheries requirements. The Delphi method, using repetitive surveying of experts, was applied to determine fisheries' water and lake-level requirements on 25 reservoirs in these interstate basins. The Delphi technique allowed the needs and requirements of fish populations to be brought into the modeling effort on equal footing with other water supply and demand components. When the subject matter is concisely defined and limited, this technique can rapidly assess expert opinion on any natural resource issue, and even move expert opinion toward greater agreement.
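    The aggregation-and-feedback step at the heart of a Delphi round can be sketched as computing the panel's median and interquartile range and checking that the spread narrows between rounds. The sketch below is a generic illustration with hypothetical lake-level ratings; it is not the instrument used in the tri-state basin study.

```python
from statistics import quantiles

# Minimal sketch of the aggregation/feedback step in a Delphi round:
# each expert rates a quantity (here a hypothetical minimum lake level),
# the panel is shown the median and interquartile range, and the survey
# repeats until the spread narrows. Purely illustrative; not the
# instrument used in the tri-state basin study.

def round_summary(ratings):
    """Return the median and interquartile range of one Delphi round."""
    q1, q2, q3 = quantiles(ratings, n=4)
    return q2, q3 - q1

round1 = [310, 325, 300, 340, 315, 330, 305]   # wide initial spread
round2 = [315, 318, 312, 322, 316, 320, 314]   # after feedback

median1, iqr1 = round_summary(round1)
median2, iqr2 = round_summary(round2)

# Convergence check: expert opinion has tightened if the IQR shrank.
print(iqr1, iqr2, iqr2 < iqr1)
```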

  1. Five Conditions Commonly Used to Down-regulate Tor Complex 1 Generate Different Physiological Situations Exhibiting Distinct Requirements and Outcomes*

    Science.gov (United States)

    Tate, Jennifer J.; Cooper, Terrance G.

    2013-01-01

    Five different physiological conditions have been used interchangeably to establish the sequence of molecular events needed to achieve nitrogen-responsive down-regulation of TorC1 and its subsequent regulation of downstream reporters: nitrogen starvation, methionine sulfoximine (Msx) addition, nitrogen limitation, rapamycin addition, and leucine starvation. Therefore, we tested a specific underlying assumption upon which the interpretation of data generated by these five experimental perturbations is premised. It is that they generate physiologically equivalent outcomes with respect to TorC1, i.e. its down-regulation as reflected by TorC1 reporter responses. We tested this assumption by performing head-to-head comparisons of the requirements for each condition to achieve a common outcome for a downstream proxy of TorC1 inactivation, nuclear Gln3 localization. We demonstrate that the five conditions for down-regulating TorC1 do not elicit physiologically equivalent outcomes. Four of the methods exhibit hierarchical Sit4 and PP2A phosphatase requirements to elicit nuclear Gln3-Myc13 localization. Rapamycin treatment required Sit4 and PP2A. Nitrogen limitation and short-term nitrogen starvation required only Sit4. G1 arrest-correlated, long-term nitrogen starvation and Msx treatment required neither PP2A nor Sit4. Starving cells of leucine or treating them with leucyl-tRNA synthetase inhibitors did not elicit nuclear Gln3-Myc13 localization. These data indicate that the five commonly used nitrogen-related conditions of down-regulating TorC1 are not physiologically equivalent and minimally involve partially differing regulatory mechanisms. Further, identical requirements for Msx treatment and long-term nitrogen starvation raise the possibility that their effects are achieved through a common regulatory pathway with glutamine, a glutamate or glutamine metabolite level as the sensed metabolic signal. PMID:23935103

  2. Sleep deprivation in parents caring for children with complex needs at home: a mixed methods systematic review.

    Science.gov (United States)

    McCann, Damhnat; Bull, Rosalind; Winzenberg, Tania

    2015-02-01

    A significant number of children with a range of complex conditions and health care needs are being cared for by parents in the home environment. This mixed methods systematic review aimed to determine the amount of sleep obtained by these parents and the extent to which the child-related overnight health or care needs affected parental sleep experience and daily functioning. Summary statistics were not able to be determined due to the heterogeneity of included studies, but the common themes that emerged are that parents of children with complex needs experience sleep deprivation that can be both relentless and draining and affects the parents themselves and their relationships. The degree of sleep deprivation varies by diagnosis, but a key contributing factor is the need for parents to be vigilant at night. Of particular importance to health care professionals is the inadequate overnight support provided to parents of children with complex needs, potentially placing these parents at risk of poorer health outcomes associated with sleep deprivation and disturbance. This needs to be addressed to enable parents to remain well and continue to provide the care that their child and family require. © The Author(s) 2014.

  3. ASF1 is required to load histones on the HIRA complex in preparation of paternal chromatin assembly at fertilization.

    Science.gov (United States)

    Horard, Béatrice; Sapey-Triomphe, Laure; Bonnefoy, Emilie; Loppin, Benjamin

    2018-05-11

    Anti-Silencing Factor 1 (ASF1) is a conserved H3-H4 histone chaperone involved in both Replication-Coupled and Replication-Independent (RI) nucleosome assembly pathways. At DNA replication forks, ASF1 plays an important role in regulating the supply of H3.1/2 and H4 to the CAF-1 chromatin assembly complex. ASF1 also provides H3.3-H4 dimers to HIRA and DAXX chaperones for RI nucleosome assembly. The early Drosophila embryo is an attractive system to study chromatin assembly in a developmental context. The formation of a diploid zygote begins with the unique, genome-wide RI assembly of paternal chromatin following sperm protamine eviction. Then, within the same cytoplasm, syncytial embryonic nuclei undergo a series of rapid, synchronous S and M phases to form the blastoderm embryo. Here, we have investigated the implication of ASF1 in these two distinct assembly processes. We show that depletion of the maternal pool of ASF1 with a specific shRNA induces a fully penetrant, maternal effect embryo lethal phenotype. Unexpectedly, despite the depletion of ASF1 protein to undetectable levels, we show that asf1 knocked-down (KD) embryos can develop to various stages, thus demonstrating that ASF1 is not absolutely required for the amplification of cleavage nuclei. Remarkably, we found that ASF1 is required for the formation of the male pronucleus, although ASF1 protein does not reside in the decondensing sperm nucleus. In asf1 KD embryos, HIRA localizes to the male nucleus but is only capable of limited and insufficient chromatin assembly. Finally, we show that the conserved HIRA B domain, which is involved in ASF1-HIRA interaction, is dispensable for female fertility. We conclude that ASF1 is critically required to load H3.3-H4 dimers on the HIRA complex prior to histone deposition on paternal DNA. This separation of tasks could optimize the rapid assembly of paternal chromatin within the gigantic volume of the egg cell. 
In contrast, ASF1 is surprisingly dispensable for the

  4. A Systematic Optimization Design Method for Complex Mechatronic Products Design and Development

    Directory of Open Access Journals (Sweden)

    Jie Jiang

    2018-01-01

    Designing a complex mechatronic product involves multiple design variables, objectives, constraints, and evaluation criteria, as well as their nonlinearly coupled relationships. The design space can be very big, consisting of many functional design parameters, structural design parameters, and behavioral design (or running performance) parameters. Given a big design space and inexplicit relations among these parameters, how to design a product optimally in an optimization design process is a challenging research problem. In this paper, we propose a systematic optimization design method based on design space reduction and surrogate modelling techniques. This method firstly identifies key design parameters from a very big design space to reduce the design space, secondly uses the identified key design parameters to establish a system surrogate model based on data-driven modelling principles for optimization design, and thirdly utilizes multiobjective optimization techniques to achieve an optimal design of a product in the reduced design space. The method has been tested with a high-speed train design. In comparison with other approaches, the research results show that this method is practical and useful for optimally designing complex mechatronic products.

  5. Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions

    International Nuclear Information System (INIS)

    Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor

    2004-01-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)

  6. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  7. A Comparison of Multidimensional Item Selection Methods in Simple and Complex Test Designs

    Directory of Open Access Journals (Sweden)

    Eren Halil ÖZBERK

    2017-03-01

    Full Text Available In contrast with previous studies, this study employed various test designs (simple and complex) which allow the evaluation of overall ability score estimations across multiple real test conditions. In this study, four factors were manipulated, namely the test design, the number of items per dimension, the correlation between dimensions, and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated by using the M3PL compensatory multidimensional IRT model with specified correlations. MCAT composite ability score accuracy was evaluated using the absolute bias (ABSBIAS), the correlation and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension and the correlation between dimensions had a significant effect on the item selection methods for the overall score estimations. For the simple structure test design it was found that the V1 item selection method had the lowest absolute bias estimations for both long and short tests while estimating overall scores. As the model gets more complex, the KL item selection method performed better than the other two item selection methods.
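
    The accuracy indices named in this abstract are standard and easy to state in code. The sketch below computes absolute bias and RMSE between true and estimated ability scores; the theta values are made-up illustrations, not data from the study.

```python
import math

def absbias(true_scores, est_scores):
    """Absolute value of the mean signed error between estimates and truth."""
    n = len(true_scores)
    return abs(sum(e - t for t, e in zip(true_scores, est_scores)) / n)

def rmse(true_scores, est_scores):
    """Root mean square error between true and estimated scores."""
    n = len(true_scores)
    return math.sqrt(sum((e - t) ** 2 for t, e in zip(true_scores, est_scores)) / n)

true_theta = [-1.2, 0.0, 0.8, 1.5]   # illustrative "true" abilities
est_theta = [-1.0, 0.1, 0.6, 1.7]    # illustrative estimates
bias_value = absbias(true_theta, est_theta)
rmse_value = rmse(true_theta, est_theta)
```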

  8. Impact of a Modified Jigsaw Method for Learning an Unfamiliar, Complex Topic

    Directory of Open Access Journals (Sweden)

    Denise Kolanczyk

    2017-09-01

    Full Text Available Objective: The aim of this study was to use the jigsaw method with an unfamiliar, complex topic and to evaluate the effectiveness of the jigsaw teaching method on student learning of assigned material (“jigsaw expert”) versus non-assigned material (“jigsaw learner”). Innovation: The innovation was implemented in an advanced cardiology elective. Forty students were assigned a pre-reading and one of four valvular heart disorders, a topic not previously taught in the curriculum. A pre-test and post-test evaluated overall student learning. Student performance on the pre/post tests as the “jigsaw expert” and the “jigsaw learner” was also compared. Critical Analysis: Overall, the post-test mean score of 85.75% was significantly higher than the pre-test score of 56.75% (p<0.05). There was significant improvement in scores regardless of whether the material was assigned (“jigsaw experts”: pre=58.8% and post=82.5%; p<0.05) or not assigned (“jigsaw learners”: pre=56.25% and post=86.56%; p<0.05). Next Steps: The use of the jigsaw method to teach unfamiliar, complex content helps students to become both teachers and active listeners, which are essential to the skills and professionalism of a health care provider. Further studies are needed to evaluate the use of the jigsaw method to teach unfamiliar, complex content on long-term retention and to further examine the effects of expert vs. non-expert roles. Conflict of Interest: We declare no conflicts of interest or financial interests that the authors or members of their immediate families have in any product or service discussed in the manuscript, including grants (pending or received), employment, gifts, stock holdings or options, honoraria, consultancies, expert testimony, patents and royalties. Type: Note

  9. Combining biophysical methods for the analysis of protein complex stoichiometry and affinity in SEDPHAT

    International Nuclear Information System (INIS)

    Zhao, Huaying; Schuck, Peter

    2015-01-01

    Global multi-method analysis for protein interactions (GMMA) can increase the precision and complexity of binding studies for the determination of the stoichiometry, affinity and cooperativity of multi-site interactions. The principles and recent developments of biophysical solution methods implemented for GMMA in the software SEDPHAT are reviewed, their complementarity in GMMA is described and a new GMMA simulation tool set in SEDPHAT is presented. Reversible macromolecular interactions are ubiquitous in signal transduction pathways, often forming dynamic multi-protein complexes with three or more components. Multivalent binding and cooperativity in these complexes are often key motifs of their biological mechanisms. Traditional solution biophysical techniques for characterizing the binding and cooperativity are very limited in the number of states that can be resolved. A global multi-method analysis (GMMA) approach has recently been introduced that can leverage the strengths and the different observables of different techniques to improve the accuracy of the resulting binding parameters and to facilitate the study of multi-component systems and multi-site interactions. Here, GMMA is described in the software SEDPHAT for the analysis of data from isothermal titration calorimetry, surface plasmon resonance or other biosensing, analytical ultracentrifugation, fluorescence anisotropy and various other spectroscopic and thermodynamic techniques. The basic principles of these techniques are reviewed and recent advances in view of their particular strengths in the context of GMMA are described. Furthermore, a new feature in SEDPHAT is introduced for the simulation of multi-method data. In combination with specific statistical tools for GMMA in SEDPHAT, simulations can be a valuable step in the experimental design

  10. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    Science.gov (United States)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations was closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
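
    For readers unfamiliar with the implicit scheme compared above, here is a minimal sketch of Newmark time integration (average acceleration, gamma = 1/2, beta = 1/4) for a linear SDOF oscillator. For a linear system the step reduces to a direct solve, so no iteration is needed; all parameter values are illustrative assumptions, not those of the case studies.

```python
def newmark_sdof(m, c, k, u0, v0, p, dt, gamma=0.5, beta=0.25):
    """Integrate m*a + c*v + k*u = p(t); return the displacement history."""
    a0 = (p[0] - c * v0 - k * u0) / m
    u, v, a = u0, v0, a0
    hist = [u]
    # effective stiffness is constant for a linear system
    keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    for pn in p[1:]:
        # effective load assembled from the known state at step n
        peff = (pn
                + m * (u / (beta * dt ** 2) + v / (beta * dt)
                       + (1.0 / (2.0 * beta) - 1.0) * a)
                + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = peff / keff
        v_new = (gamma / (beta * dt) * (u_new - u)
                 + (1.0 - gamma / beta) * v
                 + dt * (1.0 - gamma / (2.0 * beta)) * a)
        a_new = ((u_new - u) / (beta * dt ** 2)
                 - v / (beta * dt) - (1.0 / (2.0 * beta) - 1.0) * a)
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return hist

# free vibration of an undamped oscillator released from u0 = 1
disp = newmark_sdof(m=1.0, c=0.0, k=4.0, u0=1.0, v0=0.0, p=[0.0] * 201, dt=0.05)
```

For the undamped case the average-acceleration scheme conserves the discrete energy, so the displacement amplitude should stay bounded by the initial value.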

  11. Simulating Engineering Flows through Complex Porous Media via the Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    Vesselin Krassimirov Krastev

    2018-03-01

    Full Text Available In this paper, recent achievements in the application of the lattice Boltzmann method (LBM) to complex fluid flows are reported. More specifically, we focus on flows through reactive porous media, such as the flow through the substrate of a selective catalytic reactor (SCR) for the reduction of gaseous pollutants in the automotive field; pulsed-flow analysis through heterogeneous catalyst architectures; and transport and electro-chemical phenomena in microbial fuel cells (MFC) for novel waste-to-energy applications. To the authors’ knowledge, this is the first known application of LBM modeling to the study of MFCs, which represents by itself a highly innovative and challenging research area. The results discussed here essentially confirm the capabilities of the LBM approach as a flexible and accurate computational tool for the simulation of complex multi-physics phenomena of scientific and technological interest, across physical scales.
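
    As a flavor of the collide-and-stream update at the heart of any LBM solver, here is a minimal one-dimensional diffusion lattice (D1Q2) sketch; the lattice size, relaxation time and initial pulse are illustrative assumptions, not values from the paper.

```python
def lbm_diffusion_1d(n=64, tau=0.8, steps=100):
    """Minimal D1Q2 BGK lattice Boltzmann update diffusing a density pulse."""
    rho0 = [1.0] * n
    rho0[n // 2] = 2.0                     # density pulse in the middle
    # two populations per node: moving right (fr) and moving left (fl)
    fr = [0.5 * r for r in rho0]
    fl = [0.5 * r for r in rho0]
    omega = 1.0 / tau
    for _ in range(steps):
        # collision: relax toward the local equilibrium (rho/2, rho/2)
        rho = [a + b for a, b in zip(fr, fl)]
        fr = [f + omega * (0.5 * r - f) for f, r in zip(fr, rho)]
        fl = [f + omega * (0.5 * r - f) for f, r in zip(fl, rho)]
        # streaming: shift populations one lattice site (periodic boundaries)
        fr = fr[-1:] + fr[:-1]
        fl = fl[1:] + fl[:1]
    return [a + b for a, b in zip(fr, fl)]

rho = lbm_diffusion_1d()
```

The two-step update conserves mass exactly while the pulse spreads, which is easy to check on the returned density field.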

  12. Leak detection of complex pipelines based on the filter diagonalization method: robust technique for eigenvalue assessment

    International Nuclear Information System (INIS)

    Lay-Ekuakille, Aimé; Pariset, Carlo; Trotta, Amerigo

    2010-01-01

    The FDM (filter diagonalization method), an interesting technique used in nuclear magnetic resonance data processing for tackling FFT (fast Fourier transform) limitations, can be applied to pipelines, especially complex configurations, by treating them as a vascular apparatus with arteries, veins, capillaries, etc. Thrombosis, which might occur in humans, can be considered analogous to a leak in the complex pipeline, the human vascular apparatus. The choice of eigenvalues in FDM or in spectra-based techniques is a key issue in recovering the solution of the main equation (for FDM) or the frequency domain transformation (for FFT) in order to determine the accuracy in detecting leaks in pipelines. This paper deals with the possibility of improving the leak detection accuracy of the FDM technique by means of a robust algorithm that addresses the problem of eigenvalue assessment, making it less experimental and more analytical through Tikhonov-based regularization techniques. The paper starts from the results of previous experimental procedures carried out by the authors
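
    The Tikhonov-based regularization mentioned above can be illustrated on a toy linear problem: rather than solving an ill-conditioned A x = b directly, one solves the damped normal equations (A^T A + lam I) x = A^T b, which trades a small bias for numerical stability. The 2x2 system and lambda values below are illustrative assumptions, not the paper's operator.

```python
def tikhonov_solve_2x2(A, b, lam):
    """Solve (A^T A + lam*I) x = A^T b for a 2x2 system via Cramer's rule."""
    # normal-equation matrix M = A^T A + lam * I
    m00 = A[0][0] ** 2 + A[1][0] ** 2 + lam
    m01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    m11 = A[0][1] ** 2 + A[1][1] ** 2 + lam
    # right-hand side r = A^T b
    r0 = A[0][0] * b[0] + A[1][0] * b[1]
    r1 = A[0][1] * b[0] + A[1][1] * b[1]
    det = m00 * m11 - m01 * m01
    return [(r0 * m11 - r1 * m01) / det, (m00 * r1 - m01 * r0) / det]

# nearly singular system: the rows of A are almost parallel,
# yet [1, 1] satisfies it exactly
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]
x = tikhonov_solve_2x2(A, b, lam=1e-6)
```

A small lambda recovers the exact solution to good accuracy, while a huge lambda shrinks the solution toward zero, which is the basic trade-off of the method.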

  13. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    Science.gov (United States)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

    While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still contains large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia. A special focus is on applying new hybrid methods to the spatial interpolation of wind data; this study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists, which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical (ordinary kriging) interpolation methods were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks, as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and
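
    Of the methods compared, inverse distance weighting is the simplest to sketch. The snippet below interpolates one scalar wind component from surrounding stations; the station coordinates and values are made up for illustration.

```python
import math

def idw(stations, values, x, y, power=2.0):
    """Inverse distance weighting at (x, y) from (xi, yi) -> vi stations."""
    num, den = 0.0, 0.0
    for (xi, yi), vi in zip(stations, values):
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return vi              # exact hit on a station
        w = d ** -power
        num += w * vi
        den += w
    return num / den

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # illustrative coordinates
u_component = [2.0, 4.0, 6.0]                       # illustrative u values
u_interp = idw(stations, u_component, 5.0, 5.0)
```

At (5, 5) all three stations are equidistant, so the interpolated value is simply their mean.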

  14. Discovery of Novel Complex Metal Hydrides for Hydrogen Storage through Molecular Modeling and Combinatorial Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lesch, David A; Adriaan Sachtler, J.W. J.; Low, John J; Jensen, Craig M; Ozolins, Vidvuds; Siegel, Don; Harmon, Laurel

    2011-02-14

    UOP LLC, a Honeywell Company, Ford Motor Company, and Striatus, Inc., collaborated with Professor Craig Jensen of the University of Hawaii and Professor Vidvuds Ozolins of the University of California, Los Angeles on a multi-year cost-shared program to discover novel complex metal hydrides for hydrogen storage. This innovative program combined sophisticated molecular modeling with high throughput combinatorial experiments to maximize the probability of identifying commercially relevant, economical hydrogen storage materials with broad application. A set of tools was developed to pursue the medium throughput (MT) and high throughput (HT) combinatorial exploratory investigation of novel complex metal hydrides for hydrogen storage. The assay programs consisted of monitoring hydrogen evolution as a function of temperature. This project also incorporated theoretical methods to help select candidate material families for testing. The Virtual High Throughput Screening (VHTS) served as a virtual laboratory, calculating structures and their properties. First-principles calculations were applied to various systems to examine hydrogen storage reaction pathways and the associated thermodynamics. The experimental program began with the validation of the MT assay tool with NaAlH4/0.02 mole Ti, the state-of-the-art hydrogen storage system given by decomposition of sodium alanate to sodium hydride, aluminum metal, and hydrogen. Once certified, a combinatorial 21-point study of the NaAlH4-LiAlH4-Mg(AlH4)2 phase diagram was investigated with the MT assay. Stability proved to be a problem, as many of the materials decomposed during synthesis, altering the expected assay results. This resulted in repeating the entire experiment with a mild milling approach, which only temporarily increased capacity. NaAlH4 was the best performer in both studies and no new mixed alanates were observed, a result consistent with the VHTS. Powder XRD suggested that the reverse reaction, the regeneration of the

  15. Fast methods for long-range interactions in complex systems. Lecture notes

    International Nuclear Information System (INIS)

    Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas

    2011-01-01

    Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. in astrophysics, statistical physics, plasma physics, material sciences, physical chemistry, and biophysics. The present summer school, funded by the German Heraeus-Foundation, took place at the Juelich Supercomputing Centre from 6 - 10 September 2010. The focus was on providing an introduction to and an overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The Lecture Notes contain an introduction to particle simulation, as well as five different fast methods, i.e. the Fast Multipole Method, the Barnes-Hut Tree Method, Multigrid, FFT-based methods, and Fast Summation using the non-equidistant FFT. In addition to introducing the methods, efficient parallelization of the methods is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC), which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)
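
    The baseline that all of these fast methods improve on is direct pairwise summation, which scales as O(N^2) in the number of particles. A bare-bones sketch of that baseline is shown below; the unit charges on a line are an illustrative setup, not an example from the lecture notes.

```python
import math

def direct_pair_energy(positions, charges):
    """Sum q_i * q_j / r_ij over all unique pairs: the O(N^2) baseline
    that fast methods such as FMM or tree codes are designed to avoid."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            e += charges[i] * charges[j] / r
    return e

# four alternating unit charges on a line, one unit apart
pos = [(float(i), 0.0, 0.0) for i in range(4)]
q = [1.0, -1.0, 1.0, -1.0]
energy = direct_pair_energy(pos, q)
```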

  16. Fast methods for long-range interactions in complex systems. Lecture notes

    Energy Technology Data Exchange (ETDEWEB)

    Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas (eds.)

    2011-10-13

    Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. in astrophysics, statistical physics, plasma physics, material sciences, physical chemistry, and biophysics. The present summer school, funded by the German Heraeus-Foundation, took place at the Juelich Supercomputing Centre from 6 - 10 September 2010. The focus was on providing an introduction to and an overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The Lecture Notes contain an introduction to particle simulation, as well as five different fast methods, i.e. the Fast Multipole Method, the Barnes-Hut Tree Method, Multigrid, FFT-based methods, and Fast Summation using the non-equidistant FFT. In addition to introducing the methods, efficient parallelization of the methods is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC), which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)

  17. A digital processing method for the analysis of complex nuclear spectra

    International Nuclear Information System (INIS)

    Madan, V.K.; Abani, M.C.; Bairi, B.R.

    1994-01-01

    This paper describes a digital processing method using frequency power spectra for the analysis of complex nuclear spectra. The power spectra were estimated by employing a modified discrete Fourier transform. The method was applied to observed spectral envelopes. The results for separating closely spaced doublets in nuclear spectra of low statistical precision compared favorably with those obtained by using the popular peak-fitting program SAMPO. The paper also describes the limitations of peak-fitting methods and the advantages of digital processing techniques for type II digital signals, including nuclear spectra. A compact computer program occupying less than 2.5 kByte of memory space was written in BASIC for the processing of observed spectral envelopes. (orig.)
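
    The core operation described, estimating the frequency power spectrum of a sampled spectral envelope with a discrete Fourier transform, can be sketched as follows. A naive O(N^2) DFT is used for clarity, and the synthetic Gaussian doublet is an illustrative stand-in for an observed envelope, not data from the paper.

```python
import cmath
import math

def power_spectrum(samples):
    """Return |DFT|^2 of a real-valued sequence (naive O(N^2) DFT)."""
    n = len(samples)
    spec = []
    for k in range(n):
        s = sum(x * cmath.exp(-2j * math.pi * k * m / n)
                for m, x in enumerate(samples))
        spec.append(abs(s) ** 2)
    return spec

# two closely spaced Gaussian peaks, as in a low-statistics doublet
envelope = [math.exp(-((i - 28) / 4.0) ** 2) + math.exp(-((i - 36) / 4.0) ** 2)
            for i in range(64)]
spec = power_spectrum(envelope)
```

For a real input the spectrum is conjugate-symmetric, and Parseval's relation ties the summed power back to the sample energy, both of which are quick sanity checks.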

  18. New method for rekindling the nonlinear solitary waves in Maxwellian complex space plasma

    Science.gov (United States)

    Das, G. C.; Sarma, Ridip

    2018-04-01

    Our interest is to study nonlinear wave phenomena in complex plasma constituents with Maxwellian electrons and ions. The main reason for this consideration is to exhibit the effects of dust charge fluctuations on acoustic modes evaluated by the use of a new method. A special method, the (G'/G)-expansion method, has been developed to yield the coherent features of nonlinear waves through the derivation of a Korteweg-de Vries equation, and it successfully identified the different natures of the solitons recognized in space plasmas. Evolutions are shown with the input of appropriate typical plasma parameters to support our theoretical observations in space plasmas. All conclusions are in good accordance with actual occurrences and could be of interest for further investigations in experiments and satellite observations in space. In this paper, we present not only a model that exhibits nonlinear solitary wave propagation but also a new mathematical method for its execution.
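
    For reference, the Korteweg-de Vries equation that such derivations arrive at has the generic form below, together with the one-soliton solution it admits. The coefficients A and B depend on the plasma parameters, and this normalization is a common textbook convention rather than the paper's exact form.

```latex
\frac{\partial \phi}{\partial \tau}
  + A\,\phi\,\frac{\partial \phi}{\partial \xi}
  + B\,\frac{\partial^{3} \phi}{\partial \xi^{3}} = 0,
\qquad
\phi(\xi,\tau) = \phi_{m}\,\operatorname{sech}^{2}\!\left(\frac{\xi - u_{0}\tau}{W}\right),
\quad
\phi_{m} = \frac{3u_{0}}{A},\;\; W = \sqrt{\frac{4B}{u_{0}}}.
```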

  19. Application of a non-contiguous grid generation method to complex configurations

    International Nuclear Information System (INIS)

    Chen, S.; McIlwain, S.; Khalid, M.

    2003-01-01

    An economical non-contiguous grid generation method was developed to efficiently generate structured grids for complex 3D problems. Compared with traditional contiguous grids, this new approach generated grids for different block clusters independently and was able to distribute the grid points more economically according to the user's specific topology design. The method was evaluated by applying it to a Navier-Stokes computation of flow past a hypersonic projectile. Both the flow velocity and the heat transfer characteristics of the projectile agreed qualitatively with other numerical data in the literature and with available field data. Detailed grid topology designs for 3D geometries were addressed, and the advantages of this approach were analysed and compared with traditional contiguous grid generation methods. (author)

  20. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
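
    As a hedged illustration of the acceptance-sampling-by-variables idea (not NASA's actual procedure), the one-sided k-method accepts a lot when (USL - xbar) / s >= k for an upper specification limit USL, sample mean xbar, sample standard deviation s, and acceptability constant k. The measurements, limit and k below are made-up inputs.

```python
import math

def accept_lot_upper(samples, usl, k):
    """One-sided variables acceptance test against an upper spec limit."""
    n = len(samples)
    xbar = sum(samples) / n
    # sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - xbar) ** 2 for x in samples) / (n - 1))
    return (usl - xbar) / s >= k

measurements = [9.1, 9.4, 8.8, 9.0, 9.3, 8.9, 9.2, 9.0]
decision = accept_lot_upper(measurements, usl=10.0, k=2.5)
```

Tightening the specification limit toward the sample mean flips the decision, which is the behavior the acceptability constant k is meant to control.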

  1. A New Method to Develop Human Dental Pulp Cells and Platelet-rich Fibrin Complex.

    Science.gov (United States)

    He, Xuan; Chen, Wen-Xia; Ban, Guifei; Wei, Wei; Zhou, Jun; Chen, Wen-Jin; Li, Xian-Yu

    2016-11-01

    Platelet-rich fibrin (PRF) has been used as a scaffold material in various tissue regeneration studies. In the previous methods to combine seed cells with PRF, the structure of PRF was damaged, and the manipulation time in vitro was also increased. The objective of this in vitro study was to explore an appropriate method to develop a PRF-human dental pulp cell (hDPC) complex to maintain PRF structure integrity and to find out the most efficient part of PRF. The PRF-hDPC complex was developed at 3 different time points during PRF preparation: (1) the before centrifugation (BC) group, the hDPC suspension was added to the venous blood before blood centrifugation; (2) the immediately after centrifugation (IAC) group, the hDPC suspension was added immediately after blood centrifugation; (3) the after centrifugation (AC) group, the hDPC suspension was added 10 minutes after blood centrifugation; and (4) the control group, PRF without hDPC suspension. The prepared PRF-hDPC complexes were cultured for 7 days. The samples were fixed for histologic, immunohistochemistry, and scanning electron microscopic evaluation. Real-time polymerase chain reaction was performed to evaluate messenger RNA expression of alkaline phosphatase and dentin sialophosphoprotein. Enzyme-linked immunosorbent assay quantification for growth factors was performed within the different parts of the PRF. Histologic, immunohistochemistry, and scanning electron microscopic results revealed that hDPCs were only found in the BC group and exhibited favorable proliferation. Real-time polymerase chain reaction revealed that alkaline phosphatase and dentin sialophosphoprotein expression increased in the cultured PRF-hDPC complex. The lower part of the PRF released the maximum quantity of growth factors. Our new method to develop a PRF-hDPCs complex maintained PRF structure integrity. The hDPCs were distributed in the buffy coat, which might be the most efficient part of PRF. Copyright © 2016 American

  2. A Method for and Issues Associated with the Determination of Space Suit Joint Requirements

    Science.gov (United States)

    Matty, Jennifer E.; Aitchison, Lindsay

    2009-01-01

    In the design of a new space suit it is necessary to have requirements that define what mobility space suit joints should be capable of achieving, both as a system and at the component level. NASA elected to divide mobility into its constituent parts, range of motion (ROM) and torque, in an effort to develop clean design requirements that limit subject performance bias and are easily verified. Unfortunately, the measurement of mobility can be difficult to obtain. Current technologies, such as the Vicon motion capture system, allow for relatively easy benchmarking of range of motion (ROM) for a wide array of space suit systems. The ROM evaluations require suited subjects to accurately evaluate the ranges humans can achieve in the suit. However, when it comes to torque, there are significant challenges both for benchmarking current performance and for writing requirements for future suits. This is reflected in the fact that torque definitions have been applied to very few types of space suits, and with limited success in defining all the joints accurately. This paper discusses the advantages and disadvantages of historical joint torque evaluation methods, describes more recent efforts directed at benchmarking joint torques of prototype space suits, and provides an outline for how NASA intends to address joint torque in design requirements for the Constellation Space Suit System (CSSS).

  3. Identifying influential spreaders in complex networks based on kshell hybrid method

    Science.gov (United States)

    Namtirtha, Amrita; Dutta, Animesh; Dutta, Biswanath

    2018-06-01

    Influential spreaders are the key players in maximizing or controlling spreading in a complex network. Identifying influential spreaders using the kshell decomposition method has become very popular in recent years. In the literature, the core nodes of a network, i.e. those with the largest kshell index, are considered the most influential spreaders. We have studied the kshell method and the spreading dynamics of nodes using the Susceptible-Infected-Recovered (SIR) epidemic model to understand the behavior of influential spreaders in terms of their topological location in the network. From the study, we have found that not every node in the core area is a most influential spreader; even a strategically placed lower-shell node can be a most influential spreader. Moreover, the core area can also be situated at the periphery of the network. The existing indexing methods are designed to identify the most influential spreaders only from core nodes and not from lower shells. In this work, we propose a kshell hybrid method to identify highly influential spreaders not only from the core but also from lower shells. The proposed method comprises parameters such as kshell power, node degree, contact distance, and many levels of neighbors' influence potential. The proposed method is evaluated using nine real-world network datasets. In terms of the spreading dynamics, the experimental results show the superiority of the proposed method over other existing indexing methods such as the kshell method, neighborhood coreness centrality, and mixed degree decomposition. Furthermore, the proposed method can also be applied to large-scale networks by considering three levels of neighbors' influence potential.
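
    The kshell decomposition that the proposed hybrid method builds on repeatedly peels off nodes of minimum remaining degree and records the shell index at which each node is removed. A self-contained sketch on a toy graph (the graph is illustrative, not one of the paper's datasets):

```python
def kshell_indices(adj):
    """adj: dict node -> set of neighbours (undirected).
    Returns dict node -> kshell index via iterative peeling."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    shell = {}
    k = 0
    while adj:
        # current shell index is the minimum remaining degree (never decreases)
        k = max(k, min(len(vs) for vs in adj.values()))
        peel = [u for u, vs in adj.items() if len(vs) <= k]
        while peel:
            u = peel.pop()
            if u not in adj:
                continue                          # already removed
            shell[u] = k
            for v in adj.pop(u):
                adj[v].discard(u)
                if len(adj[v]) <= k:
                    peel.append(v)                # cascade within the shell
    return shell

# triangle core (a, b, c) with a pendant chain c - d - e
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
         "d": {"c", "e"}, "e": {"d"}}
shells = kshell_indices(graph)
```

The triangle forms the 2-shell while the pendant chain falls in the 1-shell, matching the intuition that core membership alone does not capture a node's placement.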

  4. DLGS97/SAP97 is developmentally upregulated and is required for complex adult behaviors and synapse morphology and function.

    Science.gov (United States)

    Mendoza-Topaz, Carolina; Urra, Francisco; Barría, Romina; Albornoz, Valeria; Ugalde, Diego; Thomas, Ulrich; Gundelfinger, Eckart D; Delgado, Ricardo; Kukuljan, Manuel; Sanxaridis, Parthena D; Tsunoda, Susan; Ceriani, M Fernanda; Budnik, Vivian; Sierralta, Jimena

    2008-01-02

    The synaptic membrane-associated guanylate kinase (MAGUK) scaffolding protein family is thought to play key roles in synapse assembly and synaptic plasticity. Evidence supporting these roles in vivo is scarce, as a consequence of gene redundancy in mammals. The genome of Drosophila contains only one MAGUK gene, discs large (dlg), from which two major proteins originate: DLGA [PSD95 (postsynaptic density 95)-like] and DLGS97 [SAP97 (synapse-associated protein)-like]. These differ only by the inclusion in DLGS97 of an L27 domain, important for the formation of supramolecular assemblies. Known dlg mutations affect both forms and are lethal at larval stages attributable to tumoral overgrowth of epithelia. We generated independent null mutations for each, dlgA and dlgS97. These allowed unveiling of a shift in expression during the development of the nervous system: predominant expression of DLGA in the embryo, balanced expression of both during larval stages, and almost exclusive DLGS97 expression in the adult brain. Loss of embryonic DLGS97 does not alter the development of the nervous system. At larval stages, DLGA and DLGS97 fulfill both unique and partially redundant functions in the neuromuscular junction. Contrary to dlg and dlgA mutants, dlgS97 mutants are viable to adulthood, but they exhibit marked alterations in complex behaviors such as phototaxis, circadian activity, and courtship, whereas simpler behaviors like locomotion and odor and light perception are spared. We propose that the increased repertoire of associations of a synaptic scaffold protein given by an additional domain of protein-protein interaction underlies its ability to integrate molecular networks required for complex functions in adult synapses.

  5. Advances in complexity of beam halo-chaos and its control methods for beam transport networks

    International Nuclear Information System (INIS)

    Fang Jinqing

    2004-11-01

    The complexity theory of beam halo-chaos in beam transport networks and its control methods, a new subject in the high-tech field, are discussed. It is pointed out that in recent years there has been growing interest in proton beams of high power linear accelerators due to their attractive features in possible breakthrough applications in national defense and industry. In particular, high-current accelerator driven clean nuclear power systems for various applications as energy resources have been one of the most prominent issues in current research, because they provide a safer, cleaner and cheaper nuclear energy resource. However, halo-chaos in high-current beam transport networks has become a key issue of concern because it can generate excessive radioactivity and therefore significantly limits applications. It is very important to study the complexity properties of beam halo-chaos, to understand the basic physical mechanisms of halo-chaos formation, and to develop effective control methods for its suppression. These are very challenging subjects for current research. The main research advances on these subjects, including experimental investigation and theoretical research, and especially some very efficient control methods developed through many years of effort by the authors, are reviewed and summarized. Finally, some research outlooks are given. (author)

  6. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    Directory of Open Access Journals (Sweden)

    Leandro de Jesus Benevides

    Full Text Available Abstract Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results from the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups.

  7. A method of reconstructing complex stratigraphic surfaces with multitype fault constraints

    Science.gov (United States)

    Deng, Shi-Wu; Jia, Yu; Yao, Xing-Miao; Liu, Zhi-Ning

    2017-06-01

    The construction of complex stratigraphic surfaces is widely employed in many fields, such as petroleum exploration, geological modeling, and geological structure analysis. It also serves as an important foundation for data visualization and visual analysis in these fields. Existing surface construction methods have several deficiencies and face various difficulties, such as the presence of multitype faults and the roughness of the resulting surfaces. In this paper, a surface modeling method that uses geometric partial differential equations (PDEs) is introduced for the construction of stratigraphic surfaces. It effectively solves the problem of surface roughness caused by the irregularity of stratigraphic data distribution. To cope with the presence of multitype complex faults, a two-way projection algorithm between three-dimensional space and a two-dimensional plane is proposed. Using this algorithm, a unified method based on geometric PDEs is developed for dealing with multitype faults. Moreover, the corresponding geometric PDE is derived, and an algorithm based on an evolutionary solution is developed. Applying the proposed algorithm to the construction of spatial surfaces from real data verifies its computational efficiency and its ability to handle irregular data distributions. In particular, it can reconstruct faulted surfaces, especially those with overthrust faults.
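    The PDE-based smoothing idea can be illustrated with a much simpler relative: iterative Laplacian smoothing of a gridded height field, with known data points held fixed as hard constraints. This is only a conceptual sketch (the grid, "well picks" and Jacobi scheme are invented), not the paper's evolutionary geometric-PDE solver:

```python
import random

def laplacian_roughness(h):
    """Sum of squared discrete Laplacians: a simple surface-roughness measure."""
    n, m = len(h), len(h[0])
    r = 0.0
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap = h[i-1][j] + h[i+1][j] + h[i][j-1] + h[i][j+1] - 4 * h[i][j]
            r += lap * lap
    return r

def smooth(h, fixed, iters):
    """Jacobi iterations of Laplace smoothing; points in `fixed` never move."""
    n, m = len(h), len(h[0])
    for _ in range(iters):
        new = [row[:] for row in h]
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                if (i, j) not in fixed:
                    new[i][j] = 0.25 * (h[i-1][j] + h[i+1][j] + h[i][j-1] + h[i][j+1])
        h = new
    return h

# Noisy height field with a few data points that must be honored exactly
rng = random.Random(42)
h0 = [[rng.uniform(0.0, 1.0) for _ in range(20)] for _ in range(20)]
fixed = {(5, 5), (10, 14), (15, 7)}
h1 = smooth(h0, fixed, iters=200)  # much smoother, constraints preserved
```

The constrained relaxation drives the surface toward a harmonic (minimal-roughness) shape between the data points, which is the same trade-off the geometric-PDE method manages for real stratigraphic data.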

  8. Application of Semiempirical Methods to Transition Metal Complexes: Fast Results but Hard-to-Predict Accuracy.

    KAUST Repository

    Minenkov, Yury

    2018-05-22

    A series of semiempirical PM6* and PM7 methods has been tested in reproducing of relative conformational energies of 27 realistic-size complexes of 16 different transition metals (TMs). An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between semiempirical and DFT methods indicating fundamental difference in potential energy surfaces (PES). To identify the origin of the deviation, we compared fully optimized PM7 and respective DFT conformers. For many complexes, differences in PM7 and DFT conformational energies have been confirmed often manifesting themselves in false coordination of some atoms (H, O) to TMs and chemical transformations/distortion of coordination center geometry in PM7 structures. Despite geometry optimization with fixed coordination center geometry leads to some improvements in conformational energies, the resulting accuracy is still too low to recommend explored semiempirical methods for out-of-the-box conformational search/sampling: careful testing is always needed.

  9. Methods for the analysis of complex fluorescence decays: sum of Becquerel functions versus sum of exponentials

    International Nuclear Information System (INIS)

    Menezes, Filipe; Fedorov, Alexander; Baleizão, Carlos; Berberan-Santos, Mário N; Valeur, Bernard

    2013-01-01

    Ensemble fluorescence decays are usually analyzed with a sum of exponentials. However, broad continuous distributions of lifetimes, either unimodal or multimodal, occur in many situations. A simple and flexible fitting function for these cases that encompasses the exponential is the Becquerel function. In this work, the applicability of the Becquerel function for the analysis of complex decays of several kinds is tested. For this purpose, decays of mixtures of four different fluorescence standards (binary, ternary and quaternary mixtures) are measured and analyzed. For binary and ternary mixtures, the expected sum of narrow distributions is well recovered from the Becquerel function analysis if the correct number of components is used. For ternary mixtures, however, satisfactory fits are also obtained with a number of Becquerel functions smaller than the true number of fluorophores in the mixture, at the expense of broadening the lifetime distributions of the fictitious components. The quaternary mixture studied is well fitted with both a sum of three exponentials and a sum of two Becquerel functions, showing the inevitable loss of information when the number of components is large. Decays of a fluorophore in a heterogeneous environment, known to be represented by unimodal and broad continuous distributions (as previously obtained by the maximum entropy method), are also measured and analyzed. It is concluded that these distributions can be recovered by the Becquerel function method with an accuracy similar to that of the much more complex maximum entropy method. It is also shown that the polar (or phasor) plot is not always helpful for ascertaining the degree (and kind) of complexity of a fluorescence decay. (paper)
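    The Becquerel (compressed-hyperbola) decay function referred to above is commonly written as I(t) = I0 [1 + (1-c) t/τ0]^(-1/(1-c)), which reduces to a single exponential in the limit c → 1. A small sketch (parameter values invented) showing the function and its exponential limit:

```python
import math

def becquerel(t, tau0, c, i0=1.0):
    """Becquerel decay law with 0 < c < 1; the c -> 1 limit is i0*exp(-t/tau0)."""
    return i0 * (1.0 + (1.0 - c) * t / tau0) ** (-1.0 / (1.0 - c))

tau0 = 2.0
# Far from the exponential limit: a broad, heavy-tailed decay
broad = becquerel(5.0, tau0, c=0.5)
# Near the limit, the Becquerel function collapses onto the exponential
near = becquerel(5.0, tau0, c=1.0 - 1e-8)
exact = math.exp(-5.0 / tau0)
```

Because one extra parameter (c) interpolates continuously between exponential and strongly non-exponential shapes, a sum of a few Becquerel terms can mimic a broad lifetime distribution, which is the behavior the paper exploits.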

  10. Calculation of seismic response of a flexible rotor by complex modal method, 1

    International Nuclear Information System (INIS)

    Azuma, Takao; Saito, Shinobu

    1984-01-01

    In rotary machines during earthquakes, whether the rotating and stationary parts touch, and whether the bearings and seals are damaged, are important problems. To examine these problems, it is necessary to analyze the seismic response of the rotor shaft, and sometimes of the casing system, but conventional analysis methods are unsatisfactory. For a general shaft system supported on slide bearings and subject to gyroscopic effects, the complex modal method must be used. This calculation method is explained in detail in the book by Lancaster; however, when it is applied to the seismic response of rotor shafts, the calculation time differs considerably according to the method of final integration. In this study, good results were obtained with a method that does not depend on numerical integration. The equation of motion and its solution, the displacement vector of the foundation, the verification of the calculation program, and an example of calculating the seismic response of two coupled rotor shafts are reported. (Kako, I.)

  11. On rational complex of investigation methods in prophylactic examination of patients with chronic kidney diseases

    International Nuclear Information System (INIS)

    Yazykov, A.S.; Telichko, F.F.

    1989-01-01

    A retrospective evaluation of the total number of X-ray procedures and the radiation dose in 310 patients with chronic kidney diseases is given. It is ascertained that only an accounting of the integral absorbed dose in the organ tissues, comprising the doses from X-ray examinations of other organs over the patient's lifetime, can serve as the main basis for developing well-grounded recommendations concerning a rational complex of examination methods during prophylactic examination of patients with chronic kidney disease. 9 refs.; 4 figs

  12. Continuum level density of a coupled-channel system in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Kruppa, Andras; Giraud, Bertrand G.

    2008-01-01

    We study the continuum level density (CLD) in the formalism of the complex scaling method (CSM) for coupled-channel systems. We apply the formalism to the ⁴He = [³H+p] + [³He+n] coupled-channel cluster model, where there are resonances at low energy. Numerical calculations of the CLD in the CSM with a finite number of L² basis functions are consistent with the exact result calculated from the S-matrix by solving the coupled-channel equations. We also study channel densities. In this framework, the extended completeness relation (ECR) plays an important role. (author)
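    For reference, the continuum level density discussed here is conventionally defined as the difference between the level densities of the full and free Hamiltonians, evaluated in the CSM with the complex-rotated resolvents (the notation below is the standard one, assumed rather than taken from this abstract):

```latex
\Delta(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}
\left[\frac{1}{E - H(\theta)} \;-\; \frac{1}{E - H_{0}(\theta)}\right]
```

Resonance poles at E_r - iΓ_r/2 then contribute Lorentzian terms to Δ(E), which is how the low-energy resonances of the coupled-channel system show up in the calculated CLD.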

  13. Novel encapsulation method for probiotics using an interpolymer complex in supercritical carbon dioxide

    CSIR Research Space (South Africa)

    Moolman, FS

    2006-10-01

    Full Text Available on Bioencapsulation, Lausanne, CH, Oct. 6-7, 2006, O5-3. A novel encapsulation method for probiotics using an interpolymer complex in supercritical carbon dioxide. F.S. Moolman, P.W. Labuschagne, M.S. Thantsha, T.L. van der Merwe, H. Rolfes and T....cloete@up.ac.za. 1. Introduction. Evidence for the health benefits of probiotics is increasing. These benefits include protection against pathogenic bacteria, stimulation of the immune system, reduction in carcinogenesis, vitamin production and degradation...

  14. Methods of analysis for complex organic aerosol mixtures from urban emission sources of particulate carbon

    International Nuclear Information System (INIS)

    Mazurek, M.A.; Hildemann, L.M.; Simoneit, B.R.T.

    1990-10-01

    Organic aerosols comprise approximately 30% by mass of the total fine particulate matter present in urban atmospheres. The chemical composition of such aerosols is complex and reflects input from multiple sources of primary emissions to the atmosphere, as well as from secondary production of carbonaceous aerosol species via photochemical reactions. To identify discrete sources of fine carbonaceous particles in urban atmospheres, analytical methods must reconcile both bulk chemical and molecular properties of the total carbonaceous aerosol fraction. This paper presents an overview of the analytical protocol developed and used in a study of the major sources of fine carbon particles emitted to an urban atmosphere. 23 refs., 1 fig., 2 tabs

  15. The complex variable boundary element method: Applications in determining approximative boundaries

    Science.gov (United States)

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.

  16. A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things

    OpenAIRE

    Wang, Yongheng; Cao, Kening

    2014-01-01

    The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by the IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is designed...
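    Search-and-score structure learning, as named in the abstract, greedily edits a graph to maximize a score such as BIC. The toy sketch below (two binary variables and a hypothetical data generator) shows hill-climbing over single-edge additions; it is not the paper's multilayered adaptive dynamic Bayesian model:

```python
import math
import random

def bic_score(data, parents):
    """BIC: maximized log-likelihood minus 0.5 * (#free params) * log N,
    with one free Bernoulli parameter per parent configuration."""
    n = len(data)
    score = 0.0
    for var, pars in parents.items():
        counts = {}
        for row in data:
            key = tuple(row[p] for p in pars)
            c = counts.setdefault(key, [0, 0])
            c[row[var]] += 1
        for c0, c1 in counts.values():
            tot = c0 + c1
            for c in (c0, c1):
                if c:
                    score += c * math.log(c / tot)
        score -= 0.5 * len(counts) * math.log(n)
    return score

def creates_cycle(parents, child, par):
    """Adding par -> child creates a cycle iff child is an ancestor of par."""
    stack, seen = [par], set()
    while stack:
        node = stack.pop()
        if node == child:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents[node])
    return False

def hill_climb(data, variables):
    """Greedily add the single edge that most improves the score, if any."""
    parents = {v: () for v in variables}
    best = bic_score(data, parents)
    improved = True
    while improved:
        improved = False
        for child in variables:
            for par in variables:
                if par == child or par in parents[child] \
                        or creates_cycle(parents, child, par):
                    continue
                trial = dict(parents)
                trial[child] = parents[child] + (par,)
                s = bic_score(data, trial)
                if s > best:
                    best, parents, improved = s, trial, True
    return parents

rng = random.Random(0)
data = []
for _ in range(500):
    a = rng.random() < 0.5
    b = a if rng.random() < 0.9 else not a  # B copies A 90% of the time
    data.append({"A": int(a), "B": int(b)})

learned = hill_climb(data, ["A", "B"])  # one edge linking A and B is learned
```

Note that with only two observed variables the edge direction is not identifiable from observational data (both orientations score identically), so the search simply keeps the first improving orientation it finds.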

  17. Daily radiotoxicological supervision of personnel at the Pierrelatte industrial complex. Methods and results

    International Nuclear Information System (INIS)

    Chalabreysse, Jacques.

    1978-05-01

    A 13 year experience gained from daily radiotoxicological supervision of personnel at the PIERRELATTE industrial complex is presented. This study is divided into two parts: part one is theoretical: bibliographical synthesis of all scattered documents and publications; a homogeneous survey of all literature on the subject is thus available. Part two reviews the experience gained in professional surroundings: laboratory measurements and analyses (development of methods and daily applications); mathematical formulae to answer the first questions which arise before an individual liable to be contaminated; results obtained at PIERRELATTE [fr

  18. A Novel Method for Assessing Task Complexity in Outpatient Clinical-Performance Measures.

    Science.gov (United States)

    Hysong, Sylvia J; Amspoker, Amber B; Petersen, Laura A

    2016-04-01

    Clinical-performance measurement has helped improve the quality of health-care; yet success in attaining high levels of quality across multiple domains simultaneously still varies considerably. Although many sources of variability in care quality have been studied, the difficulty required to complete the clinical work itself has received little attention. We present a task-based methodology for evaluating the difficulty of clinical-performance measures (CPMs) by assessing the complexity of their component requisite tasks. Using Functional Job Analysis (FJA), subject-matter experts (SMEs) generated task lists for 17 CPMs; task lists were rated on ten dimensions of complexity, and then aggregated into difficulty composites. Eleven outpatient work SMEs; 133 VA Medical Centers nationwide. Clinical Performance: 17 outpatient CPMs (2000-2008) at 133 VA Medical Centers nationwide. Measure Difficulty: for each CPM, the number of component requisite tasks and the average rating across ten FJA complexity scales for the set of tasks comprising the measure. Measures varied considerably in the number of component tasks (M = 10.56, SD = 6.25, min = 5, max = 25). Measures of chronic care following acute myocardial infarction exhibited significantly higher measure difficulty ratings compared to diabetes or screening measures, but not to immunization measures ([Formula: see text] = 0.45, -0.04, -0.05, and -0.06 respectively; F (3, 186) = 3.57, p = 0.015). Measure difficulty ratings were not significantly correlated with the number of component tasks (r = -0.30, p = 0.23). Evaluating the difficulty of achieving recommended CPM performance levels requires more than simply counting the tasks involved; using FJA to assess the complexity of CPMs' component tasks presents an alternate means of assessing the difficulty of primary-care CPMs and accounting for performance variation among measures and performers. This in turn could be used in designing

  19. Identifying and prioritizing customer requirements from tractor production by QFD method

    Directory of Open Access Journals (Sweden)

    H Taghizadeh

    2017-05-01

    Full Text Available Introduction: Discovering and understanding customer needs and expectations are important factors in customer satisfaction and play a vital role in maintaining a company's position among its competitors; obtaining customer satisfaction is a critical factor in designing a successful product, so successful organizations must meet customer needs, including the quality of the products or services delivered. Quality Function Deployment (QFD) is a technique for studying the demands and needs of customers which gives greater emphasis to the customer's interests. The QFD method in general implements various tools and methods for reaching qualitative goals, but its most important and main tool is the house of quality diagram. The Analytic Hierarchy Process (AHP) is a well-known MADM method based on pairwise comparisons, used until now for determining the priority of the factors under study in various studies. Considering the effectiveness of the QFD method in explicating customers' demands and obtaining customer satisfaction, the researchers pursued a scientific answer to the following question: how can QFD explicate the real demands and requirements of customers for the final tractor product, and how are these demands and requirements prioritized from the customers' point of view? Accordingly, the aim of this study was to identify and prioritize the customer requirements of the Massey Ferguson (MF) 285 tractor produced by the Iran Tractor Manufacturing Company, using the t-student statistical test, AHP and QFD methods. Materials and Methods: The research method was descriptive, and the statistical population included all tractor customers of the Tractor Manufacturing Company in Iran from March 2011 to March 2015. The statistical sample size, determined with Cochran's formula, was 171. Moreover, the opinions of 20 experts were considered for determining the product's technical requirements. Literature
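    The AHP step described above derives priority weights from a pairwise comparison matrix, typically via its principal eigenvector, and checks a consistency ratio. A minimal sketch (the 3x3 comparison values are invented; the random-index values are Saaty's standard table):

```python
def ahp_weights(M, iters=100):
    """Principal eigenvector of a pairwise comparison matrix by power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # Approximate the principal eigenvalue for the consistency check
    v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] / w[i] for i in range(n)) / n
    return w, lam

# Invented judgments: criterion 1 moderately outranks 2, strongly outranks 3
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w, lam = ahp_weights(M)

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices
CR = (lam - 3) / (3 - 1) / RI[3]  # consistency ratio; < 0.1 is acceptable
```

In a QFD study the resulting weight vector w would feed the house of quality as the relative importance of the customer requirements.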

  20. SACS2: Dynamic and Formal Safety Analysis Method for Complex Safety Critical System

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2009-01-01

    Fault tree analysis (FTA) is one of the most widely used safety analysis techniques in the development of safety critical systems. However, over the years, several drawbacks of conventional FTA have become apparent. One major drawback is that conventional FTA uses only static gates and hence cannot capture the dynamic behaviors of a complex system precisely. Although several attempts, such as dynamic fault trees (DFT), PANDORA, formal fault trees (FFT) and so on, have been made to overcome this problem, they still cannot model absolute or actual time, because they adopt a relative time concept and can capture only sequential behaviors of the system. A second drawback of conventional FTA is its lack of rigorous semantics. Because it is informal in nature, safety analysis results depend heavily on the analyst's ability and are error-prone. Finally, the reasoning process that checks whether basic events really cause top events is done manually, and hence is very labor-intensive and time-consuming for complex systems. In this paper, we propose a new qualitative safety analysis method for complex safety critical systems. We introduce several temporal gates based on timed computation tree logic (TCTL), which can represent a quantitative notion of time. We then translate the information in the fault trees into the UPPAAL query language, and the reasoning process is carried out automatically by UPPAAL, a model checker for time-critical systems.
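    To see why static gates lose timing information, compare a static AND with a simple timed "sequence" gate over event occurrence times. This toy evaluator (gate names and scenario invented) only illustrates the motivation for temporal gates; the paper's actual method expresses such gates in TCTL and checks them with UPPAAL:

```python
def static_and(times):
    """Static AND: fires if all basic events occur, regardless of order."""
    return all(t is not None for t in times)

def seq_and(times, max_gap=None):
    """Temporal AND: fires only if the events occur in the listed order,
    optionally each within `max_gap` time units of the previous one."""
    if not static_and(times):
        return False
    for a, b in zip(times, times[1:]):
        if b < a or (max_gap is not None and b - a > max_gap):
            return False
    return True

# Pump fails at t=3, backup fails at t=1: both occurred, but backup failed first,
# so the "pump-then-backup" failure sequence never actually happened.
occurrence = [3.0, 1.0]
```

A static AND reports the top event for this scenario while the sequence gate correctly rejects it; quantitative bounds (`max_gap`) are what the TCTL-based gates add on top of mere ordering.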

  1. The method of measurement and synchronization control for large-scale complex loading system

    International Nuclear Information System (INIS)

    Liao Min; Li Pengyuan; Hou Binglin; Chi Chengfang; Zhang Bo

    2012-01-01

    With the development of modern industrial technology, measurement and control systems have become widely used in high-precision, complex industrial control equipment and large-tonnage loading devices. Such a system is often used to analyze the distribution of stress and displacement in a complex bearing load or in the complex mechanical structure itself. In the ITER GS mock-up with 5 flexible plates, for each load combination it is necessary to detect and measure potential slippage between the central flexible plate and the neighboring spacers, as well as between each pre-stressing bar and its neighboring plate. The measurement and control system consists of seven sets of EDC controllers and boards, a computer system, a 16-channel quasi-dynamic strain gauge, 25 sets of displacement sensors, and 7 sets of load and displacement sensors in the cylinders. This paper demonstrates the principles and methods by which the EDC220 digital controller achieves synchronization control, and the R and D process of the multi-channel loading control software and measurement software. (authors)

  2. A complex guided spectral transform Lanczos method for studying quantum resonance states

    International Nuclear Information System (INIS)

    Yu, Hua-Gen

    2014-01-01

    A complex guided spectral transform Lanczos (cGSTL) algorithm is proposed to compute both bound and resonance states, including energies, widths and wavefunctions. The algorithm comprises two layers of complex-symmetric Lanczos iterations. A short inner-layer iteration produces a set of complex formally orthogonal Lanczos (cFOL) polynomials, which are used to span the guided spectral transform function determined by a retarded Green operator. An outer-layer iteration is then carried out with the transform function to compute the eigenpairs of the system. The guided spectral transform function is designed to have the same wavefunctions as the eigenstates of the original Hamiltonian in the spectral range of interest. Therefore the energies and/or widths of bound or resonance states can be easily computed with their wavefunctions, or by using a root-searching method from the guided spectral transform surface. The new cGSTL algorithm is applied to bound and resonance states of HO, and compared to previous calculations
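    The building block of both layers is a complex-symmetric Lanczos iteration, which uses the bilinear form x·y (no complex conjugation) so the Krylov vectors are formally, not unitarily, orthogonal. The sketch below is a generic textbook-style tridiagonalization of a small complex-symmetric matrix, not the two-layer cGSTL algorithm itself; the matrix and starting vector are invented:

```python
import numpy as np

def cs_lanczos_tridiag(A, v0, m):
    """Complex-symmetric Lanczos using the bilinear (formally orthogonal)
    form x.T y -- note: no complex conjugation anywhere."""
    n = A.shape[0]
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m, dtype=complex)
    beta = np.zeros(max(m - 1, 0), dtype=complex)
    v = v0 / np.sqrt(v0 @ v0)            # formal normalization: v.T v = 1
    V[:, 0] = v
    w = A @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.sqrt(w @ w)     # complex sqrt; zero means breakdown
        v = w / beta[j - 1]
        V[:, j] = v
        w = A @ v - beta[j - 1] * V[:, j - 1]
        alpha[j] = v @ w
        w = w - alpha[j] * v
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = B + B.T                              # complex symmetric (not Hermitian)
T = cs_lanczos_tridiag(A, np.ones(5, dtype=complex), 5)
```

Run to completion (m = n), the complex-symmetric tridiagonal T has the same complex eigenvalues as A, whose imaginary parts carry the resonance widths in applications like the one above.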

  3. The Effect of Pressure and Temperature on Separation of Free Gadolinium(III) From Gd-DTPA Complex by Nanofiltration-Complexation Method

    Science.gov (United States)

    Rahayu, Iman; Anggraeni, Anni; Ukun, MSS; Bahti, Husein H.

    2017-05-01

    Nowadays, rare earth elements are widely used in industry and medicine. One example is gadolinium: the Gd-DTPA complex is used as a contrast agent in magnetic resonance imaging (MRI) diagnostics to increase the visual contrast between normal and diseased tissue. Although the stability of a given complex may be high enough, the complexation step may not have gone to completion, so free gadolinium(III) may remain alongside the complex compound. Because gadolinium(III) is toxic in the human body, such residual free ion makes the preparation dangerous, and it is therefore necessary to separate free gadolinium(III) from the Gd-DTPA complex by nanofiltration-complexation. The method of this study is complexation of Gd2O3 with the DTPA ligand by reflux, followed by separation of the Gd-DTPA complex from gadolinium(III) with a nanofiltration membrane over a range of pressures (2, 3, 4, 5, 6 bar) and temperatures (25, 30, 35, 40 °C), with determination of the flux and rejection. The results of this study are that at higher pressures and temperatures, permeation fluxes increase and ion rejections decrease, and the free gadolinium(III) rejection reached 86.26%.
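    The two figures of merit reported above, permeate flux and observed rejection, follow directly from their definitions J = V/(A·t) and R = (1 - Cp/Cf) × 100%. A small sketch (all values invented except the 86.26% rejection quoted in the abstract):

```python
def permeate_flux(volume_l, area_m2, hours):
    """Permeate flux J in L m^-2 h^-1: volume collected per membrane area per time."""
    return volume_l / (area_m2 * hours)

def rejection_percent(c_permeate, c_feed):
    """Observed rejection R = (1 - Cp/Cf) * 100, in percent."""
    return (1.0 - c_permeate / c_feed) * 100.0

# Hypothetical run: 1.2 L of permeate through 0.05 m^2 of membrane in 2 h
J = permeate_flux(volume_l=1.2, area_m2=0.05, hours=2.0)
# A permeate at 13.74% of the feed concentration gives the quoted 86.26% rejection
R = rejection_percent(c_permeate=0.1374, c_feed=1.0)
```

The trade-off the study measures is visible in these definitions: raising pressure increases J (more volume per unit time) but lets more ion through, lowering R.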

  4. The Reliasep method used for the functional modeling of complex systems

    International Nuclear Information System (INIS)

    Dubiez, P.; Gaufreteau, P.; Pitton, J.P.

    1997-07-01

    The RELIASEP R method and its support tool have been recommended to carry out the functional analysis of large systems within the framework of the design of new power units. Let us first recall the principles of the method based on the breakdown of functions into tree(s). These functions are characterised by their performance and constraints. Then the main modifications made under EDF requirement and in particular the 'viewpoints' analyses are presented. The knowledge obtained from the first studies carried out are discussed. (author)

  5. The Reliasep method used for the functional modeling of complex systems

    Energy Technology Data Exchange (ETDEWEB)

    Dubiez, P.; Gaufreteau, P.; Pitton, J.P

    1997-07-01

    The RELIASEP R method and its support tool have been recommended to carry out the functional analysis of large systems within the framework of the design of new power units. Let us first recall the principles of the method based on the breakdown of functions into tree(s). These functions are characterised by their performance and constraints. Then the main modifications made under EDF requirement and in particular the 'viewpoints' analyses are presented. The knowledge obtained from the first studies carried out are discussed. (author)

  6. Evaluating a complex system-wide intervention using the difference in differences method: the Delivering Choice Programme.

    Science.gov (United States)

    Round, Jeff; Drake, Robyn; Kendall, Edward; Addicott, Rachael; Agelopoulos, Nicky; Jones, Louise

    2015-03-01

    We report the use of difference in differences (DiD) methodology to evaluate a complex, system-wide healthcare intervention, using the worked example of evaluating the Marie Curie Delivering Choice Programme (DCP) for advanced illness in a large urban healthcare economy. DiD was selected because a randomised controlled trial was not feasible. The method allows before-and-after comparison of changes that occur in an intervention site against a matched control site. This enables analysts to estimate the effect of the intervention in the absence of a local control. Any policy, seasonal or other confounding effects over the test period are assumed to have occurred in a balanced way at both sites. Data were obtained from primary care trusts. Outcomes were place of death, inpatient admissions, length of stay and costs. Small changes were identified between pre- and post-DCP outputs in the intervention site. The proportion of home deaths and the median cost increased slightly, while the number of admissions per patient and the average length of stay per admission decreased slightly. None of these changes was statistically significant. Effect estimates were limited by the small numbers accessing new services and by selection bias in the sample population and comparator site. In evaluating the effect of a complex healthcare intervention, the choice of analysis method and output measures is crucial. Alternatives to randomised controlled trials may be required for evaluating large-scale complex interventions, and the DiD approach is suitable, subject to careful selection of measured outputs and control population.
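    The DiD estimator used in such an evaluation is simply the pre-to-post change at the intervention site minus the pre-to-post change at the matched control site. A minimal sketch with invented numbers (not the study's data):

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """DiD estimate: the (post - pre) change at the intervention site,
    net of the (post - pre) change at the matched control site."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Invented example: proportion of home deaths (%) before/after the programme.
# The control site drifted up by 2 points anyway; DiD nets that drift out.
effect = diff_in_diff(treat_pre=18.0, treat_post=22.0,
                      control_pre=17.0, control_post=19.0)
```

Subtracting the control-site change is what absorbs the shared policy and seasonal trends the abstract mentions; the identifying assumption is that, absent the intervention, both sites would have followed parallel trends.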

  7. Developments based on stochastic and determinist methods for studying complex nuclear systems

    International Nuclear Information System (INIS)

    Giffard, F.X.

    2000-01-01

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
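    The core idea behind such a biasing scheme, sampling from a modified distribution and correcting each particle with a statistical weight, can be shown in one dimension. Here an exponentially tilted density estimates a small tail probability; the toy problem and densities are my own illustration, not the TRIPOLI-4/ERANOS scheme:

```python
import math
import random

TRUE_P = math.exp(-4.0)  # P(X > 4) for X ~ Exp(1), the rare event of interest

def crude(n, rng):
    """Analog Monte Carlo: most histories never reach the rare region."""
    return sum(rng.expovariate(1.0) > 4.0 for _ in range(n)) / n

def biased(n, rng):
    """Importance sampling: draw from g(y) = exp(-(y-4)) on [4, inf),
    so every history scores, and correct with the weight f(y)/g(y)."""
    total = 0.0
    for _ in range(n):
        y = 4.0 + rng.expovariate(1.0)
        total += math.exp(-y) / math.exp(-(y - 4.0))  # weight = f/g
    return total / n

est = biased(1000, random.Random(1))  # essentially exact with 1000 samples
```

For this deliberately simple tilt the weight is constant, so the variance is (nearly) zero; in realistic transport problems the weights fluctuate with the quality of the importance map, which is also the source of the variance-estimate jumps noted at the end of the abstract.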

  8. A method for determining customer requirement weights based on TFMF and TLR

    Science.gov (United States)

    Ai, Qingsong; Shu, Ting; Liu, Quan; Zhou, Zude; Xiao, Zheng

    2013-11-01

    'Customer requirements' (CRs) management plays an important role in enterprise systems (ESs) by processing customer-focused information. Quality function deployment (QFD) is one of the main CR analysis methods. Because CR weights are a crucial input for QFD, we developed a method for determining CR weights based on the trapezoidal fuzzy membership function (TFMF) and 2-tuple linguistic representation (TLR). To improve the accuracy of CR weights, we propose to apply TFMF to describe CR weights so that they can be appropriately represented. Because fuzzy logic is not capable of aggregating information without loss, the TLR model is adopted as well. We first describe the basic concepts of TFMF and TLR and then introduce an approach to compute CR weights. Finally, an example is provided to explain and verify the proposed method.
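    The two building blocks named in the abstract can be sketched directly: a trapezoidal membership function μ(x; a, b, c, d), and the 2-tuple translation Δ(β) = (s_i, α) with i = round(β) and α = β - i. The label set and parameter values below are invented for illustration:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership with support [a, d] and core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

def to_two_tuple(beta, labels):
    """2-tuple linguistic representation: beta in [0, len(labels)-1] becomes
    (label_i, alpha) with i = round(beta) and alpha in [-0.5, 0.5)."""
    i = int(beta + 0.5)            # conventional rounding, not banker's
    return labels[i], beta - i

labels = ["none", "low", "medium", "high", "very high"]
mu = trapezoid(3.5, a=2.0, b=3.0, c=4.0, d=5.0)   # inside the core -> 1.0
term, alpha = to_two_tuple(2.3, labels)           # ("medium", 0.3)
```

The symbolic translation α is what lets aggregated values like 2.3 be carried without rounding loss, which is the "aggregation without loss" property the TLR model contributes.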

  9. Performance of methods for estimation of table beet water requirement in Alagoas

    Directory of Open Access Journals (Sweden)

    Daniella P. dos Santos

    Full Text Available ABSTRACT Optimization of water use in agriculture is fundamental, particularly in regions where water scarcity is intense, and requires the adoption of technologies that promote increased irrigation efficiency. The objective of this study was to evaluate evapotranspiration models and to estimate the crop coefficients of beet grown in a drainage lysimeter in the Agreste region of Alagoas. The experiment was conducted at the Campus of the Federal University of Alagoas - UFAL, in the municipality of Arapiraca, AL, between March and April 2014. Crop evapotranspiration (ETc) was estimated in drainage lysimeters, and reference evapotranspiration (ETo) by the Penman-Monteith-FAO 56 and Hargreaves-Samani methods. The Hargreaves-Samani method presented a good performance index for ETo estimation compared with the Penman-Monteith-FAO method, indicating that it is adequate for the study area. Beet ETc showed a cumulative demand of 202.11 mm for a cumulative reference evapotranspiration of 152.00 mm. Kc values determined using the Penman-Monteith-FAO 56 and Hargreaves-Samani methods were overestimated in comparison to the Kc values of the FAO-56 standard method. With the obtained results, it is possible to correct the equations of the methods for the region, allowing for adequate irrigation management.
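    For reference, the Hargreaves-Samani equation compared above estimates ETo from temperature data alone: ETo = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax - Tmin), with Ra the extraterrestrial radiation expressed in mm/day of evaporation equivalent. A sketch with invented inputs (not the study's data):

```python
import math

def hargreaves_samani(t_mean, t_max, t_min, ra_mm_day):
    """Daily reference evapotranspiration (mm/day), Hargreaves-Samani form.
    ra_mm_day is extraterrestrial radiation in evaporation-equivalent mm/day."""
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# A warm day: Ra = 15 mm/day equivalent, Tmax/Tmin = 30/20 degrees C
eto = hargreaves_samani(t_mean=25.0, t_max=30.0, t_min=20.0, ra_mm_day=15.0)
```

Its appeal for regions like the Agreste is exactly this minimal data requirement: the full Penman-Monteith-FAO 56 equation additionally needs humidity, wind speed and radiation measurements, which many stations lack.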

  10. A Study of Storage Ring Requirements for an Explosive Detection System Using NRA Method.

    Energy Technology Data Exchange (ETDEWEB)

    Wang, T. F. (Tai-Sen F.); Kwan, T. J. T. (Thomas J. T.)

    2005-01-01

    The technical feasibility of an explosives detection system based on the nuclear resonance absorption (NRA) of gamma rays in nitrogen-rich materials was demonstrated at Los Alamos National Laboratory (LANL) in 1993, using an RFQ proton accelerator and a tomographic imaging prototype. The study has recently been continued to examine deployment of such an active interrogation system in realistic scenarios. The approach is to use an accelerator and electron-cooling-equipped storage ring(s) to provide the high-quality, high-current proton beam needed in a practical application. In this work, we investigate the requirements on the storage ring(s) with an external gamma-ray-production target for a variant of the airport luggage inspection system considered in the earlier LANL experiments. Estimates are carried out based on the required inspection throughput, the gamma-ray yield, the proton beam emittance growth due to scattering in the photon-production target, the beam current limit in the storage ring, and the electron-cooling rate. Studies using scaling and reasonable parameter values indicate that it is possible to use no more than a few storage rings per inspection station in a practical NRA luggage inspection complex having more than ten inspection stations.

  11. The complex nature of mixed farming systems requires multidimensional actions supported by integrative research and development efforts.

    Science.gov (United States)

    González-García, E; Gourdine, J L; Alexandre, G; Archimède, H; Vaarst, M

    2012-05-01

    Mixed farming systems (MFS) have demonstrated some success by focusing on the use of integrative and holistic mechanisms, and rationally building on and using the natural and local resource base without exhausting it, while enhancing biodiversity, optimizing complementarities between crops and animal systems and finally increasing opportunities in rural livelihoods. Focusing our analysis and discussion on field experiences and empirical knowledge in the Caribbean islands, this paper discusses the opportunities for a change needed in current MFS research-development philosophy. The importance of shifting from fragile/specialized production systems to MFS under current global conditions is argued with an emphasis on the case of Small Islands Developing States (SIDS) and the Caribbean. Particular vulnerable characteristics as well as the potential and constraints of SIDS and their agricultural sectors are described, while revealing the opportunities for the 'richness' of the natural and local resources to support authentic and less dependent production system strategies. Examples are provided of the use of natural grasses, legumes, crop residues and agro-industrial by-products. We analyse the requirement for a change in research strategies and initiatives through the development of a complex but necessary multi-/inter-/trans-disciplinary teamwork spirit. We stress as essential the collaboration and active participation of local and regional actors, stakeholders and end-users in the identification of research priorities, as well as the generation, exchange and dissemination of knowledge and technology innovations, while strengthening the leadership roles in the conduct of integrative and participative research and development projects.

  12. SAF-A forms a complex with BRG1 and both components are required for RNA polymerase II mediated transcription.

    Directory of Open Access Journals (Sweden)

    Dzeneta Vizlin-Hodzic

    Full Text Available BACKGROUND: Scaffold attachment factor A (SAF-A participates in the regulation of gene expression by organizing chromatin into transcriptionally active domains and by interacting directly with RNA polymerase II. METHODOLOGY: Here we use co-localization, co-immunoprecipitation (co-IP and in situ proximity ligation assay (PLA to identify Brahma Related Gene 1 (BRG1, the ATP-driven motor of the human SWI-SNF chromatin remodeling complex, as another SAF-A interaction partner in mouse embryonic stem (mES cells. We also employ RNA interference to investigate functional aspects of the SAF-A/BRG1 interaction. PRINCIPAL FINDINGS: We find that endogenous SAF-A protein interacts with endogenous BRG1 protein in mES cells, and that the interaction does not solely depend on the presence of mRNA. Moreover the interaction remains intact when cells are induced to differentiate. Functional analyses reveal that dual depletion of SAF-A and BRG1 abolishes global transcription by RNA polymerase II, while the nucleolar RNA polymerase I transcription machinery remains unaffected. CONCLUSIONS: We demonstrate that SAF-A interacts with BRG1 and that both components are required for RNA Polymerase II Mediated Transcription.

  13. An ancient duplication of exon 5 in the Snap25 gene is required for complex neuronal development/function.

    Directory of Open Access Journals (Sweden)

    Jenny U Johansson

    2008-11-01

    Full Text Available Alternative splicing is an evolutionary innovation to create functionally diverse proteins from a limited number of genes. SNAP-25 plays a central role in neuroexocytosis by bridging synaptic vesicles to the plasma membrane during regulated exocytosis. The SNAP-25 polypeptide is encoded by a single copy gene, but in higher vertebrates a duplication of exon 5 has resulted in two mutually exclusive splice variants, SNAP-25a and SNAP-25b. To address a potential physiological difference between the two SNAP-25 proteins, we generated gene targeted SNAP-25b deficient mouse mutants by replacing the SNAP-25b specific exon with a second SNAP-25a equivalent. Elimination of SNAP-25b expression resulted in developmental defects, spontaneous seizures, and impaired short-term synaptic plasticity. In adult mutants, morphological changes in hippocampus and drastically altered neuropeptide expression were accompanied by severe impairment of spatial learning. We conclude that the ancient exon duplication in the Snap25 gene provides additional SNAP-25-function required for complex neuronal processes in higher eukaryotes.

  14. The nonlinear response of the complex structural system in nuclear reactors using dynamic substructure method

    International Nuclear Information System (INIS)

    Zheng, Z.C.; Xie, G.; Du, Q.H.

    1987-01-01

Because of the existence of nonlinear characteristics in practical engineering structures, such as large steam turbine-foundation systems and offshore platforms, it is necessary to predict the nonlinear dynamic responses of these very large and complex structural systems subjected to extreme loads. Due to the limited storage and high execution cost of computers, difficulties remain in the analysis of such systems, although traditional finite element methods provide the basic tools for these problems. Dynamic substructure methods, which were developed as a branch of general structural dynamics over more than 20 years and have been widely applied from aircraft and space vehicles to other mechanical and civil engineering structures, offer a powerful approach to the analysis of very large structural systems. The key to their success is the considerable reduction in the number of degrees of freedom without changing the physical essence of the problem investigated. The dynamic substructure method has been extended to nonlinear systems and applied to the analysis of the nonlinear dynamic response of an offshore platform by Z.C. Zheng, et al. (1983, 1985a, b, c). In this paper, the method is presented to analyze dynamic responses of nuclear structural systems containing intrinsic nonlinearities and having nonlinear attachments and nonlinear supports. The efficiency of the method becomes even clearer for nonlinear dynamic problems owing to the adoption of iterative processes. For simplicity, the analysis procedure is demonstrated briefly. The generalized substructure method for nonlinear systems is similar to that for linear systems; only the nonlinear terms are treated as pseudo-forces. Interface coordinates are classified into two categories: connecting interface coordinates, which connect with each other directly in the global system, and linking interface coordinates, which link to each other through attachments. (orig./GL)
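The pseudo-force idea can be illustrated on a deliberately tiny problem. The sketch below is my own illustration, not code from the paper: fixed-point iteration on a single-degree-of-freedom system with a cubic attachment stiffness, where the nonlinear term is moved to the right-hand side as a pseudo-force so that only a linear solve is repeated.

```python
# Sketch (illustration only): pseudo-force iteration for a system with a
# cubic attachment nonlinearity, K*x + a*x**3 = F. The nonlinear term is
# treated as a pseudo-force on the right-hand side, and the linear problem
# is solved repeatedly until the update converges.

def solve_pseudo_force(K, a, F, tol=1e-10, max_iter=200):
    x = F / K                        # linear first guess
    for _ in range(max_iter):
        x_new = (F - a * x**3) / K   # linear solve with pseudo-force -a*x^3
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("pseudo-force iteration did not converge")

x = solve_pseudo_force(K=100.0, a=5.0, F=50.0)
residual = 100.0 * x + 5.0 * x**3 - 50.0   # should be ~0 at convergence
```

For weak nonlinearity the iteration is a contraction and converges quickly; in the multi-degree-of-freedom setting the same scalar division becomes a factorised linear solve that is reused every iteration.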

  15. MAIA - Method for Architecture of Information Applied: methodological construct of information processing in complex contexts

    Directory of Open Access Journals (Sweden)

    Ismael de Moura Costa

    2017-04-01

Full Text Available Introduction: This paper presents the evolution of MAIA, the Method for Architecture of Information Applied, its structure, the results obtained and three practical applications. Objective: To propose a methodological construct for the treatment of complex information, distinguishing information spaces and revealing the configurations inherent to those spaces. Methodology: The argument is elaborated from theoretical research of an analytical character, using distinction as a way to express concepts. Phenomenology is adopted as the philosophical position, which considers the correlation between Subject↔Object. The research also considers the notion of interpretation as an integrating element for the definition of concepts. With these postulates, the steps to transform information spaces are formulated. Results: The article shows how the method is structured to process information in its contexts, starting from a succession of evolutive cycles, divided into moments, which, in their turn, evolve into transformation acts. Conclusions: Besides presenting the method's structure, the article discusses its possible applications as a scientific method, as a configuration tool for information spaces and as a generator of ontologies. Last, but not least, it presents a brief summary of the analyses made by researchers who have already evaluated the method with respect to these three aspects.

  16. S-curve networks and an approximate method for estimating degree distributions of complex networks

    International Nuclear Information System (INIS)

    Guo Jin-Li

    2010-01-01

In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S-curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China, which provides reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, it proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes, and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research. (general)

  17. S-curve networks and an approximate method for estimating degree distributions of complex networks

    Science.gov (United States)

    Guo, Jin-Li

    2010-12-01

In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S-curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China, which provides reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, it proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes, and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
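The S-curve (logistic) model underlying such a forecast can be sketched in a few lines. The parameters below are illustrative, not the fitted IPv4 values from the paper; the point is only that growth starts slowly, passes K/2 at the midpoint, and saturates at the finite limit K.

```python
import math

# Sketch: logistic (S-curve) growth model for forecasting a finite quantity
# such as the number of allocated IPv4 addresses. K, r, t0 are illustrative
# parameters, not fitted values from the paper.

def logistic(t, K, r, t0):
    """Quantity at time t: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

K, r, t0 = 400.0, 0.6, 10.0          # e.g. K in millions of addresses
early = logistic(0.0, K, r, t0)      # near zero: slow start
mid = logistic(t0, K, r, t0)         # exactly K/2 at the midpoint
late = logistic(40.0, K, r, t0)      # saturates toward the finite limit K
```

Fitting K, r and t0 to observed allocation statistics then yields both the forecast and the finite size used by the S-curve network model.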

  18. On the complexity of a combined homotopy interior method for convex programming

    Science.gov (United States)

    Yu, Bo; Xu, Qing; Feng, Guochen

    2007-03-01

In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex programming and convex programming under weaker conditions, without assuming the logarithmic barrier function to be strictly convex or the solution set to be bounded. It was proven that a smooth interior path exists from an interior point of the feasible set to a K-K-T point of the problem. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by taking a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is proven for convex nonlinear programming.
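To make the idea of an interior path concrete, here is a minimal sketch (my own illustration, not the combined homotopy of the paper): following the central path of a plain log-barrier method for the toy convex program min x² subject to x ≥ 1. As the barrier weight t grows, the path point tends to the constrained optimum x* = 1.

```python
# Sketch: central path of a log-barrier interior method for
#   min x^2  s.t.  x >= 1.
# For each barrier weight t we minimise x^2 - (1/t)*ln(x - 1) by Newton's
# method, warm-starting from the previous point on the path.

def newton_barrier(t, x0, iters=50):
    x = x0
    for _ in range(iters):
        g = 2 * x - 1.0 / (t * (x - 1.0))       # gradient of barrier objective
        h = 2.0 + 1.0 / (t * (x - 1.0) ** 2)    # second derivative (convex)
        step = g / h
        while x - step <= 1.0:                  # stay strictly feasible
            step *= 0.5
        x -= step
    return x

x = 2.0                                         # strictly interior start
path = []
for t in [1, 10, 100, 1000, 10000]:
    x = newton_barrier(t, x)
    path.append(x)
# path decreases monotonically toward the constrained optimum x* = 1
```

The paper's contribution is precisely a complexity bound for such path-following in the combined-homotopy setting, where, unlike in this toy example, the objective need not decrease monotonically along the path.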

  19. A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays.

    Science.gov (United States)

    Sun, Fenggang; Gao, Bin; Chen, Lizhen; Lan, Peng

    2016-08-25

The problem of direction-of-arrival (DOA) estimation is investigated for a co-prime array, which consists of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, the true DOAs are mapped into several equivalent angles impinging on a traditional uniform linear array with half-wavelength spacing. Then, by applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated, and the candidate DOAs are recovered according to the relationship between equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity-performance tradeoff than other existing methods.
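For readers unfamiliar with ESPRIT, the following sketch shows the core rotational-invariance step on an ordinary half-wavelength uniform linear array; the co-prime mapping and combination steps of the paper are not reproduced, and all array sizes and angles are illustrative.

```python
import numpy as np

# Sketch: standard ESPRIT on a half-wavelength ULA (not the co-prime
# pairing of the paper). DOAs follow from the rotational invariance
# between the first and last M-1 sensors of the array.

def esprit_doa(X, n_sources, d=0.5):
    """X: (M, N) snapshot matrix; d: sensor spacing in wavelengths."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
    Es = vecs[:, -n_sources:]                  # signal subspace
    Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]     # rotational invariance relation
    phases = np.angle(np.linalg.eigvals(Phi))  # = 2*pi*d*sin(theta)
    return np.degrees(np.arcsin(phases / (2 * np.pi * d)))

# Synthetic check: two sources at -20 and 30 degrees, 8 sensors.
rng = np.random.default_rng(0)
M, N = 8, 2000
angles = np.radians([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
est = np.sort(esprit_doa(X, 2))    # close to [-20, 30]
```

In the co-prime setting this step runs once per sparse subarray, and the ambiguous candidate angles produced by the extended spacing are resolved by intersecting the two subarrays' results.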

  20. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2016-01-01

We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  1. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2016-06-03

We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.
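The ghost-fluid idea can be reduced to one dimension for intuition. The sketch below is a simplification (the paper uses Aslam's PDE-based multidimensional extrapolation): a ghost-cell value is filled by linear extrapolation through the boundary value, so that a standard interior stencil effectively sees the correct wall condition.

```python
# Sketch: 1-D essence of a ghost-cell treatment. A wall sits at x_wall
# between grid cells; interior cells hold the fluid solution, and the ghost
# cell is filled by linear extrapolation through (x_wall, u_wall) so a
# standard interior stencil straddling the wall honours the boundary value.

def ghost_value(u_interior, x_interior, x_ghost, x_wall, u_wall):
    """Linearly extrapolate the interior state through the wall value."""
    slope = (u_interior - u_wall) / (x_interior - x_wall)
    return u_wall + slope * (x_ghost - x_wall)

# Interior cell at x = 0.5, ghost cell at x = -0.5, wall at x = 0 with
# boundary value u_wall = 1.0:
g = ghost_value(u_interior=3.0, x_interior=0.5, x_ghost=-0.5,
                x_wall=0.0, u_wall=1.0)
# Averaging the interior and ghost values at the wall recovers u_wall:
mid = 0.5 * (3.0 + g)
```

In the multidimensional method the same principle applies, but the extrapolation is performed by solving a PDE outward from the interface, which keeps the reconstruction accurate for arbitrarily shaped embedded boundaries.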

  2. Complex absorbing potentials within EOM-CC family of methods: Theory, implementation, and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Zuev, Dmitry; Jagau, Thomas-C.; Krylov, Anna I. [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States); Bravaya, Ksenia B. [Department of Chemistry, Boston University, Boston, Massachusetts 02215-2521 (United States); Epifanovsky, Evgeny [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States); Department of Chemistry, University of California, Berkeley, California 94720 (United States); Q-Chem, Inc., 6601 Owens Drive, Suite 105 Pleasanton, California 94588 (United States); Shao, Yihan [Q-Chem, Inc., 6601 Owens Drive, Suite 105 Pleasanton, California 94588 (United States); Sundstrom, Eric; Head-Gordon, Martin [Department of Chemistry, University of California, Berkeley, California 94720 (United States)

    2014-07-14

A production-level implementation of equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) for electron attachment and excitation energies augmented by a complex absorbing potential (CAP) is presented. The new method enables the treatment of metastable states within the EOM-CC formalism in a similar manner as bound states. The numeric performance of the method and the sensitivity of resonance positions and lifetimes to the CAP parameters and the choice of one-electron basis set are investigated. A protocol for studying molecular shape resonances based on the use of standard basis sets and a universal criterion for choosing the CAP parameters are presented. Our results for a variety of π* shape resonances of small to medium-size molecules demonstrate that CAP-augmented EOM-CCSD is competitive relative to other theoretical approaches for the treatment of resonances and is often able to reproduce experimental results.

  3. Highly efficient parallel direct solver for solving dense complex matrix equations from method of moments

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2017-03-01

Full Text Available Based on a vectorised and cache-optimised kernel, a parallel lower-upper (LU) decomposition with a novel communication-avoiding pivoting scheme is developed to solve the dense complex matrix equations generated by the method of moments. Fine-grain data rearrangement and assembler instructions are adopted to reduce memory access times and improve CPU cache utilisation, which also facilitates vectorisation of the code. By grouping processes in a binary tree, a parallel pivoting scheme is designed to optimise the communication pattern and thus reduce the solving time of the proposed solver. Two large electromagnetic radiation problems are solved on two supercomputers, respectively, and the numerical results demonstrate that the proposed method outperforms those in open-source and commercial libraries.
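The serial algorithm being parallelised is ordinary LU factorisation with partial pivoting. The sketch below shows that core in plain Python (the paper's communication-avoiding pivoting, vectorised kernels and assembler-level tuning are not reproduced); because it uses only arithmetic and abs(), it works unchanged on the complex-valued matrices produced by the method of moments.

```python
# Sketch: Gaussian elimination (in-place LU) with partial pivoting,
# followed by back substitution, for a small dense system A x = b.
# Works for real or complex entries.

def lu_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]       # work on copies
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest-magnitude entry to the diagonal
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):   # eliminate below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                   # back substitution on the upper factor
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = lu_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])   # solves A x = b
```

The parallel solver distributes exactly these row operations across processes; the communication-avoiding trick concerns how the pivot search and row swaps are organised among them.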

  4. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    Science.gov (United States)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

3D voxelization is the foundation of geological property modeling, and is also an effective approach to realizing the 3D visualization of heterogeneous attributes in geological structures. The corner-point grid is a representative voxel data model, and is a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, we should fully consider its structural morphology and bedding features so that the generated voxels keep the original morphology. On that basis, they can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To address the shortcomings of existing techniques, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from the 3D geological structure model to a fine voxel model according to the rule of isoclines in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminas inside a fold accord with the results of geological sedimentation and tectonic movement. This provides a carrier and a model foundation for subsequent attribute assignment as well as quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, and a contrastive analysis between these examples and Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when voxelizing 3D geological structure models with folds based on corner-point grids.

  5. Accurate gradient approximation for complex interface problems in 3D by an improved coupling interface method

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Yu-Chen, E-mail: ycshu@mail.ncku.edu.tw [Department of Mathematics, National Cheng Kung University, Tainan 701, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (South), Tainan 701, Taiwan (China); Chern, I-Liang, E-mail: chern@math.ntu.edu.tw [Department of Applied Mathematics, National Chiao Tung University, Hsin Chu 300, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (Taipei Office), Taipei 106, Taiwan (China); Chang, Chien C., E-mail: mechang@iam.ntu.edu.tw [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China)

    2014-10-15

Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complication increases especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had previously tested in two and three dimensions, and a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.

  6. Direct infusion-SIM as fast and robust method for absolute protein quantification in complex samples

    Directory of Open Access Journals (Sweden)

    Christina Looße

    2015-06-01

Full Text Available Relative and absolute quantification of proteins in biological and clinical samples are common approaches in proteomics. Until now, targeted protein quantification has mainly been performed using a combination of HPLC-based peptide separation and selected reaction monitoring on triple quadrupole mass spectrometers. Here, we show for the first time the potential of absolute quantification using a direct infusion strategy combined with single ion monitoring (SIM) on a Q Exactive mass spectrometer. Using complex membrane fractions of Escherichia coli, we absolutely quantified the recombinantly expressed heterologous human cytochrome P450 monooxygenase 3A4 (CYP3A4), comparing direct infusion-SIM with conventional HPLC-SIM. Direct infusion-SIM showed only 14.7% (±4.1, s.e.m.) deviation on average compared to HPLC-SIM, and a decreased processing and analysis time of 4.5 min (which could be further decreased to 30 s) for a single sample, in contrast to 65 min for the LC–MS method. In summary, our simplified workflow using direct infusion-SIM provides a fast and robust method for the quantification of proteins in complex protein mixtures.
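The abstract does not spell out the quantification arithmetic, but absolute quantification against a spiked, isotope-labelled standard conventionally reduces to a simple intensity ratio. The sketch below illustrates that assumption with made-up intensities; it is not taken from the paper.

```python
# Sketch (assumption, not detailed in the abstract): stable-isotope-dilution
# style absolute quantification. A known amount of a heavy-labelled standard
# peptide is spiked into the sample, and the analyte amount follows from the
# measured light/heavy intensity ratio in the SIM scans.

def absolute_amount(intensity_light, intensity_heavy, spiked_amount_fmol):
    """Analyte amount inferred from the light/heavy intensity ratio."""
    ratio = intensity_light / intensity_heavy
    return ratio * spiked_amount_fmol

# Hypothetical SIM intensities and spike level:
amount = absolute_amount(intensity_light=6.0e6,
                         intensity_heavy=2.0e6,
                         spiked_amount_fmol=50.0)
```

Because the light and heavy forms co-ionise and are measured in the same scan, the ratio is largely insensitive to run-to-run intensity drift, which is what makes direct infusion viable without chromatographic separation.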

  7. Reduction of uranyl carbonate and hydroxyl complexes and neptunyl carbonate complexes studied with chemical-electrochemical methods and rixs spectroscopy

    International Nuclear Information System (INIS)

    Butorin, Sergei; Nordgren, Joseph; Ollila, Kaija; Albinsson, Yngve; Werme, Lars

    2003-10-01

Sweden and Finland plan to dispose of spent fuel from commercial nuclear power plants in deep underground repositories sited in granitic rocks. The fuel assemblies will be placed in canisters consisting of an outer corrosion-resistant copper shell with an inner cast iron insert that gives mechanical strength and reduces void space in the canister. The canister will be placed in a disposal borehole lined with compacted bentonite blocks. After sealing of the borehole, groundwater seepage will saturate the bentonite. The water flow path and transport mechanism between the host rock and the canister will be via diffusion through the swollen bentonite. Any oxygen trapped in the repository will be consumed by reaction with the host rock, pyrite in the bentonite and through microbial activity, giving long-term conditions with low redox potentials. Under these conditions, uranium dioxide - the matrix of unirradiated fuel - is a stable phase. This reducing near-field environment can be upset by radiolysis of water caused by the radioactivity of the fuel, which after a few hundred years will be primarily alpha activity. Radiolysis of water produces equal amounts of oxidising and reducing species, but the reducing species produced by alpha radiolysis is molecular hydrogen, which is expected to be far less reactive than the produced oxidising species, H2O2. Alpha radiolysis could create locally oxidising conditions close to the fuel surface and oxidise the U(IV) in the uranium dioxide fuel to the more soluble U(VI) oxidation state. Furthermore, the solubility of U(VI) is enhanced in the presence of bicarbonate/carbonate by the formation of strong anionic uranyl carbonate complexes. This increase in solubility can amount to 4 to 5 orders of magnitude depending on the composition of the groundwater in contact with the fuel. The other tetravalent actinides in the fuel, Np and Pu, also have higher solubilities when oxidised beyond the 4+ state to neptunyl and plutonyl species.
Once these

  8. Method for calculating required shielding in medical x-ray rooms

    International Nuclear Information System (INIS)

    Karppinen, J.

    1997-10-01

The new annual radiation dose limits - 20 mSv (previously 50 mSv) for radiation workers and 1 mSv (previously 5 mSv) for other persons - imply that the adequacy of existing radiation shielding must be re-evaluated. In principle, one could assume that the thicknesses of old radiation shields should be increased by about one or two half-value layers in order to comply with the new dose limits. However, the assumptions made in the earlier shielding calculations are highly conservative; the required shielding was often determined by applying the maximum high voltage of the x-ray tube to the whole workload. A more realistic calculation shows that increased shielding is typically not necessary if more practical x-ray tube voltages are used in the evaluation. We have developed a PC-based method for calculating x-ray shielding that is more realistic than the highly conservative method formerly used. The method may be used to evaluate an existing shield for compliance with the new regulations. As examples of these calculations, typical x-ray rooms are considered. The lead and concrete thickness requirements as a function of x-ray tube voltage and workload are also given in tables. (author)
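A simplified version of such a shielding calculation can be expressed with the half-value-layer model: each half-value layer halves the transmitted dose. All numbers below (doses and the HVL of lead) are illustrative assumptions, not values from the report.

```python
import math

# Sketch (illustrative numbers, not regulatory values): required shield
# thickness from the half-value-layer (HVL) model. Each HVL halves the
# transmitted dose, so n = log2(D_unshielded / D_limit) layers are needed.

def required_thickness_mm(dose_unshielded_mSv, dose_limit_mSv, hvl_mm):
    n_hvl = math.log2(dose_unshielded_mSv / dose_limit_mSv)
    return max(0.0, n_hvl * hvl_mm)

# E.g. 64 mSv/year behind an unshielded wall, a 1 mSv/year public limit,
# and an assumed lead HVL of 0.25 mm at a ~100 kV tube voltage:
t = required_thickness_mm(64.0, 1.0, 0.25)   # 6 HVLs of lead
```

This also shows why the choice of tube voltage matters so much in the evaluation: the HVL, and therefore the required thickness per factor of two, grows steeply with the assumed voltage.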

  9. Evaluation of methods to estimate the essential amino acids requirements of fish from the muscle amino acid profile

    Directory of Open Access Journals (Sweden)

    Álvaro José de Almeida Bicudo

    2014-03-01

Full Text Available Many methods to estimate amino acid requirements based on the amino acid profile of fish have been proposed. This study evaluates the methodologies proposed by Meyer & Fracalossi (2005) and by Tacon (1989) to estimate the amino acid requirements of fish, which do not require prior knowledge of the nutritional requirement for a reference amino acid. Data on the amino acid requirements of pacu, Piaractus mesopotamicus, were used to validate the accuracy of those methods. Meyer & Fracalossi's and Tacon's methodologies estimated the lysine requirement of pacu at, respectively, 13 and 23% above the requirement determined using the dose-response method. The values estimated by both methods lie within the range of requirements determined for other omnivorous fish species, with the Meyer & Fracalossi (2005) method showing better accuracy.
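The general profile-based idea can be sketched with the classical A/E-ratio approach: each essential amino acid requirement is taken proportional to its share of the muscle essential amino acid pool, scaled by an assumed total EAA requirement. The numbers and the formula below are illustrative only; they are not the Meyer & Fracalossi or Tacon procedures.

```python
# Sketch (illustrative A/E-ratio idea, not the methods evaluated in the
# paper): requirements scale with each amino acid's proportion of the
# muscle essential amino acid (EAA) pool. All numbers are hypothetical.

muscle_profile = {        # hypothetical g per 100 g of muscle protein
    "lysine": 8.5,
    "methionine": 2.9,
    "threonine": 4.4,
}
total_eaa_requirement = 18.0    # assumed total EAA need, % of dietary protein

def aE_ratio_requirements(profile, total_req):
    """Requirement of each amino acid as % of dietary protein."""
    total = sum(profile.values())
    return {aa: total_req * level / total for aa, level in profile.items()}

req = aE_ratio_requirements(muscle_profile, total_eaa_requirement)
# Requirements mirror the muscle profile and sum to the assumed total.
```

The methods compared in the paper refine this basic proportionality in different ways, which is exactly what the dose-response validation on pacu quantifies.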

  10. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, and missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  11. Screening tests for hazard classification of complex waste materials – Selection of methods

    International Nuclear Information System (INIS)

    Weltens, R.; Vanermen, G.; Tirez, K.; Robbens, J.; Deprez, K.; Michiels, L.

    2012-01-01

In this study we describe the development of an alternative methodology for the hazard characterization of waste materials. Such an alternative methodology for the hazard assessment of complex waste materials is urgently needed, because the lack of a validated instrument leads to arbitrary hazard classification of such materials. False classification can lead to human and environmental health risks and also has important financial consequences for the waste owner. The Hazardous Waste Directive (HWD) describes the methodology for the hazard classification of waste materials. For mirror entries, HWD classification is based upon the hazardous properties (H1–15) of the waste, which can be assessed from the hazardous properties of individually identified waste compounds or – if not all compounds are identified – from the results of hazard assessment tests performed on the waste material itself. For the latter, the HWD recommends toxicity tests that were initially designed for the risk assessment of chemicals in consumer products (pharmaceuticals, cosmetics, biocides, food, etc.). These tests (often using mammals) are neither designed for nor suited to the hazard characterization of waste materials. With the present study we want to contribute to the development of an alternative and transparent test strategy for the hazard assessment of complex wastes that is in line with the HWD principles for waste classification. It is necessary to address this important shortcoming in hazardous waste classification and to demonstrate that alternative methods are available for the hazard assessment of waste materials. Next, by describing the pros and cons of the available methods, and by identifying the needs for additional or further development of test methods, we hope to stimulate research efforts and development in this direction. In this paper we describe promising techniques and argue for the test selection for the pilot study that we have performed on different

  12. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum.

    Science.gov (United States)

    Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph

    2012-06-22

    Filamentous fungi are versatile cell factories and are widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. Industrial production processes employing filamentous fungi are commonly based on complex raw materials, but considerable lot-to-lot variability of complex media ingredients not only demands exhaustive incoming-component inspection and quality control, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase in the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two industrial Penicillium chrysogenum candidate strains on complex media, based on their specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.

  13. Identifying Hierarchical and Overlapping Protein Complexes Based on Essential Protein-Protein Interactions and “Seed-Expanding” Method

    Directory of Open Access Journals (Sweden)

    Jun Ren

    2014-01-01

    Full Text Available Abundant evidence has demonstrated that protein complexes are overlapping and hierarchically organized in PPI networks. Meanwhile, the large size of PPI networks requires complex detection methods to have low time complexity. Up to now, few methods can quickly identify overlapping and hierarchical protein complexes in a PPI network. In this paper, a novel method, called MCSE, is proposed based on λ-modules and “seed-expanding.” First, it chooses seeds as essential PPIs or edges with high edge clustering values. Then, it identifies protein complexes by expanding each seed to a λ-module. MCSE is suitable for large PPI networks because of its low time complexity. MCSE can identify overlapping protein complexes naturally because a protein can be visited by different seeds. MCSE uses the parameter λ_th to control the range of seed expanding and can detect a hierarchical organization of protein complexes by tuning the value of λ_th. Experimental results on S. cerevisiae show that this hierarchical organization is similar to that of known complexes in the MIPS database. The experimental results also show that MCSE outperforms other previous competing algorithms, such as CPM, CMC, Core-Attachment, Dpclus, HC-PIN, MCL, and NFC, in terms of functional enrichment and matching with known protein complexes.
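
    The seed-and-expand idea described above can be sketched in a few lines: score every edge by its edge clustering value, then greedily grow each seed while a candidate node keeps at least a fraction λ_th of its links inside the module. This is a simplified illustration of the general scheme, not the published MCSE algorithm; the admission criterion and the toy graph are assumptions.

```python
def edge_clustering(adj, u, v):
    """Fraction of possible shared neighbours that u and v actually share."""
    denom = min(len(adj[u]), len(adj[v])) - 1
    return len(adj[u] & adj[v]) / denom if denom > 0 else 0.0

def expand_seed(adj, seed, lam_th):
    """Grow a seed edge: admit a neighbour while it keeps at least a
    fraction lam_th of its links inside the module (hypothetical criterion)."""
    module = set(seed)
    changed = True
    while changed:
        changed = False
        frontier = set().union(*(adj[n] for n in module)) - module
        for cand in sorted(frontier):
            if len(adj[cand] & module) / len(adj[cand]) >= lam_th:
                module.add(cand)
                changed = True
    return frozenset(module)

def detect_complexes(adj, lam_th):
    """Expand every edge, best-clustered seeds first; overlap arises naturally
    because a node may be absorbed by several seeds."""
    edges = {tuple(sorted((u, v))) for u in adj for v in adj[u]}
    seeds = sorted(edges, key=lambda e: -edge_clustering(adj, *e))
    return {expand_seed(adj, s, lam_th) for s in seeds}

# Two triangles bridged by the edge 2-3: nodes 2 and 3 end up in
# overlapping modules, and a higher lam_th yields finer modules.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
modules = detect_complexes(adj, lam_th=0.6)
```

    Raising λ_th here splits the network into finer modules, which is the knob the abstract describes for exposing a hierarchical organization.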

  14. Process qualification and control in electron beams--requirements, methods, new concepts and challenges

    International Nuclear Information System (INIS)

    Mittendorfer, J.; Gratzl, F.; Hanis, D.

    2004-01-01

    In this paper the status of process qualification and control in electron beam irradiation is analyzed in terms of requirements, concepts, methods and challenges for a state-of-the-art process control concept for medical device sterilization. Aspects from process qualification to routine process control are described together with the associated process variables. As a case study, the 10 MeV beams at Mediscan GmbH are considered. Process control concepts like statistical process control (SPC) and a new concept for determining process capability are briefly discussed

  15. Flood control design requirements and flood evaluation methods of inland nuclear power plant

    International Nuclear Information System (INIS)

    Zhang Ailing; Wang Ping; Zhu Jingxing

    2011-01-01

    The effect of flooding is one of the key safety and environmental factors in inland nuclear power plant siting. To date, laws, regulations and standard systems have been established in China for nuclear power plant site selection and flood control requirements. In this paper, the flood control standards of China and other countries are introduced. Several inland nuclear power plants are taken as examples to discuss the related flood evaluation methods thoroughly. Suggestions are also put forward. (authors)

  16. Improved Dynamic Analysis method for quantitative PIXE and SXRF element imaging of complex materials

    International Nuclear Information System (INIS)

    Ryan, C.G.; Laird, J.S.; Fisher, L.A.; Kirkham, R.; Moorhead, G.F.

    2015-01-01

    The Dynamic Analysis (DA) method in the GeoPIXE software provides a rapid tool to project quantitative element images from PIXE and SXRF imaging event data, both for off-line analysis and in real-time embedded in a data acquisition system. Initially, it assumes uniform sample composition, background shape and constant model X-ray relative intensities. A number of image correction methods can be applied in GeoPIXE to account for chemical concentration gradients and differential absorption effects, and to correct images for pileup effects. A new method, applied in a second pass, uses an end-member phase decomposition obtained from the first pass, and DA matrices determined for each end-member, to re-process the event data with each pixel treated as an admixture of end-member terms. This paper describes the new method and demonstrates through examples and Monte-Carlo simulations how it better tracks spatially complex composition and background shape while still benefitting from the speed of DA.
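
    Schematically, the DA method reduces per-pixel quantification to a linear projection: a precomputed matrix turns a pixel's spectrum into element concentrations, and the second pass described above applies one such matrix per end-member phase, mixed by the pixel's phase proportions. The matrices and numbers below are illustrative placeholders, not GeoPIXE's actual calibration.

```python
def da_project(spectrum, gamma):
    """First pass: one linear DA projection from spectrum channels to elements."""
    return [sum(g * s for g, s in zip(row, spectrum)) for row in gamma]

def da_second_pass(spectrum, gammas, alphas):
    """Second pass: treat the pixel as an admixture of end-member phases,
    each with its own DA matrix, weighted by the phase proportions alphas."""
    result = [0.0] * len(gammas[0])
    for alpha, gamma in zip(alphas, gammas):
        for i, c in enumerate(da_project(spectrum, gamma)):
            result[i] += alpha * c
    return result

# Toy example: 3 spectrum channels, 2 elements, 2 end-member phases.
spectrum = [100.0, 40.0, 10.0]
gamma_a = [[0.01, 0.0, 0.0], [0.0, 0.02, 0.0]]   # phase A calibration (made up)
gamma_b = [[0.02, 0.0, 0.0], [0.0, 0.01, 0.0]]   # phase B calibration (made up)
mixed = da_second_pass(spectrum, [gamma_a, gamma_b], alphas=[0.25, 0.75])
```

    The speed of the method comes from this structure: the matrices are computed once, so per-event processing is a single matrix-vector product per phase.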

  17. A time-minimizing hybrid method for fitting complex Moessbauer spectra

    International Nuclear Information System (INIS)

    Steiner, K.J.

    2000-07-01

    The process of fitting complex Moessbauer spectra is known to be time-consuming. The fitting process involves a mathematical model for the combined hyperfine interaction which can be solved only by an iteration method. The iteration method is very sensitive to its input parameters: with arbitrary input parameters it is most unlikely that the iteration will converge. Up to now a scientist has had to spend her/his time guessing appropriate input parameters for the iteration process. The idea is to replace the guessing phase by a genetic algorithm. The genetic algorithm starts with an initial population of arbitrary input parameter sets; each parameter set is called an individual. The first step is to evaluate the fitness of all individuals. Afterwards the current population is recombined to form a new population. The process of recombination involves the successive application of the genetic operators selection, crossover and mutation. These operators mimic the process of natural evolution, i.e. the concept of the survival of the fittest. Even though there is no formal proof that the genetic algorithm will eventually converge, there is an excellent chance that after some generations the population will contain very good individuals. The hybrid method presented in the following combines a modern genetic algorithm with a conventional least-squares routine that solves the combined interaction Hamiltonian, i.e. it provides a physical solution with the original Moessbauer parameters from a minimum of input. (author)
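
    The hybrid scheme (a genetic algorithm guessing start parameters, followed by a conventional local least-squares step) can be illustrated on a toy single-line fit. The Lorentzian model, the parameter bounds, and the pattern-search refinement standing in for the Hamiltonian least-squares routine are all assumptions made for this sketch.

```python
import math
import random

def lorentzian(x, c, w, a):
    """Toy single-line absorption spectrum: centre c, width w, depth a."""
    return 1.0 - a * w * w / ((x - c) ** 2 + w * w)

def sse(p, xs, ys):
    return sum((lorentzian(x, *p) - y) ** 2 for x, y in zip(xs, ys))

def genetic_seed(xs, ys, bounds, pop=40, gens=30, seed=0):
    """Evolve input-parameter guesses with selection, crossover and mutation."""
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: sse(p, xs, ys))       # selection
        elite = population[: pop // 4]
        while len(elite) < pop:
            pa, pb = rng.sample(elite[: pop // 4], 2)
            child = [(u + v) / 2 for u, v in zip(pa, pb)]   # crossover
            i = rng.randrange(len(child))
            lo, hi = bounds[i]
            child[i] += rng.gauss(0, 0.05 * (hi - lo))      # mutation
            elite.append(child)
        population = elite
    return min(population, key=lambda p: sse(p, xs, ys))

def refine(p, xs, ys, step=0.05, iters=60):
    """Monotone pattern search standing in for the least-squares routine."""
    p, best = list(p), sse(p, xs, ys)
    for _ in range(iters):
        moved = False
        for i in range(len(p)):
            for d in (step, -step):
                q = list(p); q[i] += d
                s = sse(q, xs, ys)
                if s < best:
                    p, best, moved = q, s, True
        if not moved:
            step *= 0.5
    return p

xs = [i / 10 - 4 for i in range(81)]
ys = [lorentzian(x, 1.2, 0.4, 0.3) for x in xs]   # noiseless synthetic data
guess = genetic_seed(xs, ys, bounds=[(-3, 3), (0.1, 1.0), (0.05, 0.8)])
fit = refine(guess, xs, ys)
```

    The genetic phase only has to land in the right basin; the deterministic local routine then does the precision work, which mirrors the division of labour the abstract proposes.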

  18. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    Science.gov (United States)

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
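
    The essence of mixed fitting is visible in the residual alone: the first echo contributes only its magnitude, so a phase error there is invisible, while the later echoes contribute full complex differences. The two-component signal model and the -420 Hz fat shift below are illustrative assumptions (a single-peak stand-in), not the paper's full model.

```python
import cmath
import math

FAT_SHIFT_HZ = -420.0   # assumed single-peak fat resonance offset

def model_echo(t, water, fat, phi0):
    """Two-component chemical-shift signal at echo time t (seconds)."""
    return (water + fat * cmath.exp(2j * math.pi * FAT_SHIFT_HZ * t)) * cmath.exp(1j * phi0)

def mixed_residual(params, echo_times, measured):
    """Magnitude fit on the first echo, complex fit on the remaining echoes."""
    water, fat, phi0 = params
    total = 0.0
    for k, (t, s) in enumerate(zip(echo_times, measured)):
        m = model_echo(t, water, fat, phi0)
        if k == 0:
            total += (abs(m) - abs(s)) ** 2   # corrupted first-echo phase discarded
        else:
            total += abs(m - s) ** 2
    return total

def complex_residual(params, echo_times, measured):
    """All-complex fit, for comparison: sensitive to first-echo phase errors."""
    water, fat, phi0 = params
    return sum(abs(model_echo(t, water, fat, phi0) - s) ** 2
               for t, s in zip(echo_times, measured))

# Simulate echoes with an eddy-current-like phase error on the first echo only.
truth = (0.8, 0.2, 0.3)
times = [0.0012, 0.0022, 0.0032]
echoes = [model_echo(t, *truth) for t in times]
echoes[0] *= cmath.exp(0.5j)   # phase corruption

r_mixed = mixed_residual(truth, times, echoes)
r_complex = complex_residual(truth, times, echoes)
```

    At the true parameters the mixed residual is essentially zero while the all-complex residual is not, which is exactly why mixed fitting removes the bias without discarding the phase (and hence the SNR) of the later echoes.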

  19. Method for Determining the Activation Energy Distribution Function of Complex Reactions by Sieving and Thermogravimetric Measurements.

    Science.gov (United States)

    Bufalo, Gennaro; Ambrosone, Luigi

    2016-01-14

    A method for studying the kinetics of thermal degradation of complex compounds is suggested. Although the method is applicable to any matrix whose grain size can be measured, herein we focus our investigation on thermogravimetric analysis, under a nitrogen atmosphere, of ground soft wheat and ground maize. The thermogravimetric curves reveal two distinct jumps of mass loss, corresponding to volatilization, in the temperature range 298-433 K, and decomposition, from 450 to 1073 K. Thermal degradation is schematized as a reaction in the solid state whose kinetics is analyzed separately in each of the two regions. By means of a sieving analysis, different size fractions of the material are separated and studied. A quasi-Newton fitting algorithm is used to obtain the grain size distribution as the best fit to the experimental data. The individual fractions are analyzed thermogravimetrically to derive the functional relationship between the activation energy of the degradation reactions and the particle size. This functional relationship turns out to be crucial for evaluating the moments of the otherwise unknown activation energy distribution in terms of the grain size distribution obtained by sieve analysis. From knowledge of the moments one can reconstruct the reaction conversion. The method is applied first to the volatilization region, then to the decomposition region. Comparison with the experimental data reveals that the method reproduces the experimental conversion with an accuracy of 5-10% in the volatilization region and of 3-5% in the decomposition region.
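
    Once a relation between activation energy and particle size has been fitted, the moments of the (unknown) activation energy distribution follow directly from the sieve-measured grain-size distribution. The linear E(d) relation and the numbers below are placeholders for illustration, not the paper's fitted values.

```python
def energy_moments(sizes, fractions, e_of_size, n_max=2):
    """Moments <E^n> of the activation-energy distribution induced by the
    grain-size distribution (fractions sum to 1) through E = e_of_size(d)."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return [sum(f * e_of_size(d) ** n for d, f in zip(sizes, fractions))
            for n in range(1, n_max + 1)]

# Hypothetical sieve fractions (sizes in micrometres) and a fitted linear
# E(d) relation in kJ/mol.
sizes = [100.0, 200.0, 300.0]
fractions = [0.2, 0.5, 0.3]
mean_e, second_moment = energy_moments(sizes, fractions, lambda d: 50.0 + 0.1 * d)
```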

  20. Improved Dynamic Analysis method for quantitative PIXE and SXRF element imaging of complex materials

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, C.G., E-mail: chris.ryan@csiro.au; Laird, J.S.; Fisher, L.A.; Kirkham, R.; Moorhead, G.F.

    2015-11-15

    The Dynamic Analysis (DA) method in the GeoPIXE software provides a rapid tool to project quantitative element images from PIXE and SXRF imaging event data both for off-line analysis and in real-time embedded in a data acquisition system. Initially, it assumes uniform sample composition, background shape and constant model X-ray relative intensities. A number of image correction methods can be applied in GeoPIXE to correct images to account for chemical concentration gradients, differential absorption effects, and to correct images for pileup effects. A new method, applied in a second pass, uses an end-member phase decomposition obtained from the first pass, and DA matrices determined for each end-member, to re-process the event data with each pixel treated as an admixture of end-member terms. This paper describes the new method and demonstrates through examples and Monte-Carlo simulations how it better tracks spatially complex composition and background shape while still benefitting from the speed of DA.

  1. Theoretical study of the electronic structure of f-element complexes by quantum chemical methods

    International Nuclear Information System (INIS)

    Vetere, V.

    2002-09-01

    This thesis is related to comparative studies of the chemical properties of molecular complexes containing lanthanide or actinide trivalent cations, in the context of nuclear waste disposal. More precisely, our aim was a quantum chemical analysis of the metal-ligand bonding in such species. Various theoretical approaches were compared for the inclusion of correlation (density functional theory, multiconfigurational methods) and of relativistic effects (relativistic scalar and 2-component Hamiltonians, relativistic pseudopotentials). The performance of these methods was checked by comparing computed structural properties to published experimental data on small model systems: lanthanide and actinide tri-halides and X3M-L species (X = F, Cl; M = La, Nd, U; L = NH3, acetonitrile, CO). We have thus shown the good performance of density functionals combined with a quasi-relativistic method, as well as of gradient-corrected functionals associated with relativistic pseudopotentials. In contrast, functionals including some part of exact exchange are less reliable in reproducing experimental trends, and we have given a possible explanation for this result. Then, a detailed analysis of the bonding has allowed us to interpret the discrepancies observed in the structural properties of uranium and lanthanide complexes, based on a covalent contribution to the bonding in the case of uranium(III) that does not exist in the lanthanide(III) homologues. Finally, we have examined more sizeable systems, closer to experimental species, to analyse the influence of the coordination number, the counter-ions and the oxidation state of uranium on the metal-ligand bonding. (author)

  2. Requirements and testing methods for surfaces of metallic bipolar plates for low-temperature PEM fuel cells

    Science.gov (United States)

    Jendras, P.; Lötsch, K.; von Unwerth, T.

    2017-03-01

    To reduce emissions and replace combustion engines, automotive manufacturers, legislators and early adopters are pursuing hydrogen fuel cell vehicles. Up to now the focus of research has been on ensuring the functionality and increasing the durability of fuel cell components, and expensive materials were therefore used. Contemporary research and development try to substitute these with more cost-effective material combinations. The bipolar plate is a key component with the greatest influence on the volume and mass of a fuel cell stack, and it has to meet complex requirements. Bipolar plates support bending-sensitive components of the stack, spread reactants over the active cell area and form the electrical contact to the next cell. Furthermore, they dissipate the heat of reaction and separate one cell gastight from the other. Consequently, they need a low interfacial contact resistance (ICR) to the gas diffusion layer, high flexural strength, good thermal conductivity and high durability. To reduce costs, stainless steel is a favoured material for bipolar plates in automotive applications. Steel is characterized by good electrical and thermal conductivity, but the acidic environment also requires high chemical durability against corrosion. On the one hand, the formation of a passivating oxide layer that increases ICR should be inhibited; on the other hand, pitting corrosion leading to an increased permeation rate must not occur. Therefore, a suitable substrate-lamination combination is sought. In this study, material testing methods for bipolar plates are considered.

  3. Complexity Quantification for Overhead Transmission Line Emergency Repair Scheme via a Graph Entropy Method Improved with Petri Net and AHP Weighting Method

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2014-01-01

    Full Text Available According to the characteristics of emergency repair in overhead transmission line accidents, a complexity quantification method for emergency repair schemes is proposed, based on the entropy method from software engineering and improved by using the group AHP (analytical hierarchy process) method and Petri nets. First, an information structure chart model and a process control flowchart model are built with Petri nets. Then the factors affecting the complexity of the emergency repair scheme are quantified into corresponding entropy values. Finally, using the group AHP method, a weight coefficient is assigned to each entropy value before the overall entropy value for the whole emergency repair scheme is calculated. Comparing the group AHP weighting method with the average weighting method, the former showed a stronger correlation between the quantified complexity entropy values and the actual repair time, which indicates that the new method is more valid.
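
    The weighting step can be sketched generically: Shannon entropies quantify each complexity factor, and the AHP pairwise comparison matrix yields weights as its principal eigenvector (computed here by power iteration). The comparison matrix and the factor counts are invented for illustration; the paper's factor set and group judgments are not reproduced.

```python
import math

def shannon_entropy(counts):
    """Entropy (bits) of a discrete factor distribution given raw counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def ahp_weights(pairwise, iters=100):
    """Principal-eigenvector weights of an AHP pairwise comparison matrix."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

def weighted_complexity(entropies, pairwise):
    """Overall complexity: AHP-weighted sum of the factor entropies."""
    return sum(w * h for w, h in zip(ahp_weights(pairwise), entropies))

# Three hypothetical factors: information structure, control flow, resources.
entropies = [shannon_entropy([1, 1, 1, 1]),   # 2 bits
             shannon_entropy([2, 2]),          # 1 bit
             shannon_entropy([4])]             # 0 bits
pairwise = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]   # consistent judgments
overall = weighted_complexity(entropies, pairwise)
```

    With these consistent judgments the weights come out as 4/7, 2/7 and 1/7, so the overall value is dominated by the factor the group judged most important, which is the point of replacing average weighting.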

  4. Translocation of the papillomavirus L2/vDNA complex across the limiting membrane requires the onset of mitosis.

    Science.gov (United States)

    Calton, Christine M; Bronnimann, Matthew P; Manson, Ariana R; Li, Shuaizhi; Chapman, Janice A; Suarez-Berumen, Marcela; Williamson, Tatum R; Molugu, Sudheer K; Bernal, Ricardo A; Campos, Samuel K

    2017-05-01

    The human papillomavirus type 16 (HPV16) L2 protein acts as a chaperone to ensure that the viral genome (vDNA) traffics from endosomes to the trans-Golgi network (TGN) and eventually the nucleus, where HPV replication occurs. En route to the nucleus, the L2/vDNA complex must translocate across limiting intracellular membranes. The details of this critical process remain poorly characterized. We have developed a system based on subcellular compartmentalization of the enzyme BirA and its cognate substrate to detect membrane translocation of L2-BirA from incoming virions. We find that L2 translocation requires transport to the TGN and is strictly dependent on entry into mitosis, coinciding with mitotic entry in synchronized cells. Cell cycle arrest causes retention of L2/vDNA at the TGN; only release and progression past G2/M enables translocation across the limiting membrane and subsequent infection. Microscopy of EdU-labeled vDNA reveals a rapid and dramatic shift in vDNA localization during early mitosis. At late G2/early prophase vDNA egresses from the TGN to a pericentriolar location, accumulating there through prometaphase where it begins to associate with condensed chromosomes. By metaphase and throughout anaphase the vDNA is seen bound to the mitotic chromosomes, ensuring distribution into both daughter nuclei. Mutations in a newly defined chromatin binding region of L2 potently blocked translocation, suggesting that translocation is dependent on chromatin binding during prometaphase. This represents the first time a virus has been shown to functionally couple the penetration of limiting membranes to cellular mitosis, explaining in part the tropism of HPV for mitotic basal keratinocytes.

  5. Investigation into complexing of pentavalent actinide forms with some anions of organic acids by the coprecipitation method

    International Nuclear Information System (INIS)

    Moskvin, A.I.; Poznyakov, A.N.; AN SSSR, Moscow. Inst. Geokhimii i Analiticheskoj Khimii)

    1979-01-01

    Complexation of the pentavalent forms of the actinides Np, Pu and Am with the anions of acetic acid, oxalic acid and EDTA is studied using the method of coprecipitation with iron hydroxide. The composition and stability constants of the actinide complexes formed are determined. The anions are ranked in order of decreasing complexing tendency: EDTA anion > C2O4(2-) > CH3COO-

  6. Lattice Boltzmann methods for complex micro-flows: applicability and limitations for practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Suga, K, E-mail: suga@me.osakafu-u.ac.jp [Department of Mechanical Engineering, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531 (Japan)

    2013-06-15

    The extensive evaluation studies of the lattice Boltzmann method for micro-scale flows (μ-flow LBM) by the author's group are summarized. For the two-dimensional test cases, force-driven Poiseuille flows, Couette flows, a combined nanochannel flow, and flows in a nanochannel with a square- or triangular cylinder are discussed. The three-dimensional (3D) test cases are nano-mesh flows and a flow between 3D bumpy walls. The reference data for the complex test flow geometries are from the molecular dynamics simulations of the Lennard-Jones fluid by the author's group. The focused flows are mainly in the slip and a part of the transitional flow regimes at Kn < 1. The evaluated schemes of the μ-flow LBMs are the lattice Bhatnagar-Gross-Krook and the multiple-relaxation time LBMs with several boundary conditions and discrete velocity models. The effects of the discrete velocity models, the wall boundary conditions, the near-wall correction models of the molecular mean free path and the regularization process are discussed to confirm the applicability and the limitations of the μ-flow LBMs for complex flow geometries. (invited review)

  7. Lattice Boltzmann methods for complex micro-flows: applicability and limitations for practical applications

    International Nuclear Information System (INIS)

    Suga, K

    2013-01-01

    The extensive evaluation studies of the lattice Boltzmann method for micro-scale flows (μ-flow LBM) by the author's group are summarized. For the two-dimensional test cases, force-driven Poiseuille flows, Couette flows, a combined nanochannel flow, and flows in a nanochannel with a square- or triangular cylinder are discussed. The three-dimensional (3D) test cases are nano-mesh flows and a flow between 3D bumpy walls. The reference data for the complex test flow geometries are from the molecular dynamics simulations of the Lennard-Jones fluid by the author's group. The focused flows are mainly in the slip and a part of the transitional flow regimes at Kn < 1. The evaluated schemes of the μ-flow LBMs are the lattice Bhatnagar–Gross–Krook and the multiple-relaxation time LBMs with several boundary conditions and discrete velocity models. The effects of the discrete velocity models, the wall boundary conditions, the near-wall correction models of the molecular mean free path and the regularization process are discussed to confirm the applicability and the limitations of the μ-flow LBMs for complex flow geometries. (invited review)
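
    For orientation, the building block these evaluations share, a lattice BGK collide-and-stream update on the D2Q9 lattice, looks as follows. This is the plain bulk scheme with periodic boundaries; the slip-flow wall treatments and mean-free-path corrections the review evaluates are deliberately omitted.

```python
# D2Q9 lattice: discrete velocities and weights
V = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4 / 9] + [1 / 9] * 4 + [1 / 36] * 4

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium populations."""
    usq = ux * ux + uy * uy
    out = []
    for (cx, cy), w in zip(V, W):
        cu = cx * ux + cy * uy
        out.append(w * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return out

def step(f, nx, ny, tau=0.8):
    """One BGK collide-and-stream update on a periodic nx-by-ny lattice."""
    post = [[None] * ny for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            fs = f[x][y]
            rho = sum(fs)
            ux = sum(fi * c[0] for fi, c in zip(fs, V)) / rho
            uy = sum(fi * c[1] for fi, c in zip(fs, V)) / rho
            feq = equilibrium(rho, ux, uy)
            post[x][y] = [fi - (fi - fe) / tau for fi, fe in zip(fs, feq)]  # collision
    new = [[[0.0] * 9 for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for i, (cx, cy) in enumerate(V):
                new[(x + cx) % nx][(y + cy) % ny][i] = post[x][y][i]        # streaming
    return new

# Uniform low-Mach initialization; one step conserves total mass.
nx = ny = 4
f0 = equilibrium(1.0, 0.05, 0.0)
f = step([[list(f0) for _ in range(ny)] for _ in range(nx)], nx, ny)
mass = sum(sum(pops) for column in f for pops in column)
```

    The micro-flow variants discussed in the record differ mainly in the wall boundary condition (e.g. diffuse-reflection walls instead of periodic wrapping) and in how τ is corrected near walls, not in this core update.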

  8. Method

    Directory of Open Access Journals (Sweden)

    Ling Fiona W.M.

    2017-01-01

    Full Text Available Rapid prototyping of microchannels has gained much attention from researchers alongside the rapid development of microfluidic technology. Conventional methods carry several disadvantages, such as high cost, long fabrication times, high required operating pressures and temperatures, and the need for expertise in operating the equipment. In this work, a new method adapting xurography is introduced to replace the conventional methods of microchannel fabrication. The novelty in this study is replacing the adhesion film with a clear plastic film, onto which the microchannel design is cut, as this material is more suitable for fabricating more complex microchannel designs. The microchannel was then molded using polydimethylsiloxane (PDMS) and bonded to a clean glass slide to produce a closed microchannel. The microchannel produced had clean edges, indicating that a good master mold was produced with the cutting plotter, and the bonding between the PDMS and the glass was sound, with no leakage observed. The materials used in this method are cheap and the total time consumed is less than 5 hours, making this method suitable for rapid prototyping of microchannels.

  9. A numerical method for complex structural dynamics in nuclear plant facilities

    International Nuclear Information System (INIS)

    Zeitner, W.

    1979-01-01

    The solution of dynamic problems is often connected with difficulties in setting up a system of equations of motion because of the constraint conditions of the system. Such constraint conditions may be of a geometric nature, as for example gaps or slidelines; they may be compatibility conditions or thermodynamic criteria for the energy balance of a system. The numerical method proposed in this paper for the treatment of a dynamic problem with constraint conditions requires only that the equations of motion be set up without considering the constraints. This always leads to a relatively simple formulation. The constraint conditions themselves are included in the integration procedure by a numerical application of Gauss' principle. (orig.)

  10. Redox control of electric melters with complex feed compositions. Part I: analytical methods and models

    International Nuclear Information System (INIS)

    Bickford, D.F.; Diemer, R.B. Jr.

    1985-01-01

    The redox state of glass from electric melters with complex feed compositions is determined by balance between gases above the melt, and transition metals and organic compounds in the feed. Part I discusses experimental and computational methods of relating flowrates and other melter operating conditions to the redox state of glass, and composition of the melter offgas. Computerized thermodynamic computational methods are useful in predicting the sequence and products of redox reactions and in assessing individual process variations. Melter redox state can be predicted by combining monitoring of melter operating conditions, redox measurement of fused melter feed samples, and periodic redox measurement of product. Mossbauer spectroscopy, and other methods which measure Fe(II)/Fe(III) in glass, can be used to measure melter redox state. Part II develops preliminary operating limits for the vitrification of High-Level Radioactive Waste. Limits on reducing potential to preclude the accumulation of combustible gases, accumulation of sulfides and selenides, and degradation of melter components are the most critical. Problems associated with excessively oxidizing conditions, such as glass foaming and potential ruthenium volatility, are controlled when sufficient formic acid is added to adjust melter feed rheology

  11. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2017-02-25

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping high resolution mesh around the embedded boundary and regions of high gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.

  12. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2017-01-01

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping high resolution mesh around the embedded boundary and regions of high gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.

  13. Food deserts in Winnipeg, Canada: a novel method for measuring a complex and contested construct

    Directory of Open Access Journals (Sweden)

    Joyce Slater

    2017-10-01

    Full Text Available Introduction: "Food deserts" have emerged over the past 20 years as spaces of concern for communities, public health authorities and researchers because of their potential negative impact on dietary quality and subsequent health outcomes. Food deserts are residential geographic spaces, typically in urban settings, where low-income residents have limited or no access to retail food establishments with sufficient variety at affordable cost. Research on food deserts presents methodological challenges, including retail food store identification and classification, identification of low-income populations, and transportation and proximity metrics. Furthermore, the complex methods often used in food desert research can be difficult to reproduce and communicate to key stakeholders. To address these challenges, this study sought to demonstrate the feasibility of a simple and reproducible method of identifying food deserts using data easily available in the Canadian context. Methods: This study was conducted in Winnipeg, Canada in 2014. Food retail establishments were identified from Yellow Pages and verified by public health dietitians. We calculated two food desert scenarios based on the location of the lowest-income quintile population: (a) living ≥ 500 m from a national chain grocery store, or (b) living ≥ 500 m from a national chain grocery store or a full-service grocery store. Results: The number of low-income residents living in a food desert ranged from 64 574 to 104 335, depending on the scenario used. Conclusion: This study shows that food deserts affect a significant proportion of the Winnipeg population, and while concentrated in the urban core, they also exist in suburban neighbourhoods. The methods utilized represent an accessible, transparent and reproducible process for identifying food deserts. These methods can be used for cost-effective, periodic surveillance and meaningful engagement with communities, retailers and policy
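
    The two access scenarios reduce to a simple nearest-store test: a lowest-income-quintile area is flagged when no store in the chosen store set lies within 500 m. The haversine distance and the toy coordinates below are illustrative assumptions; a real analysis would use census geographies and possibly road-network distance.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def food_desert_areas(areas, stores, threshold_m=500.0):
    """Flag lowest-income-quintile areas with no store within threshold_m."""
    flagged = []
    for name, (lat, lon, quintile) in areas.items():
        if quintile != 1:          # only the lowest-income quintile is considered
            continue
        nearest = min(haversine_m(lat, lon, s_lat, s_lon) for s_lat, s_lon in stores)
        if nearest >= threshold_m:
            flagged.append(name)
    return flagged

# Toy example with made-up Winnipeg-like coordinates.
areas = {
    "A": (49.8951, -97.1384, 1),   # low income, store ~40 m away
    "B": (49.9300, -97.2000, 1),   # low income, several km from any store
    "C": (49.9000, -97.1400, 3),   # not in the lowest quintile
}
stores = [(49.8954, -97.1380), (49.9000, -97.1500)]
deserts = food_desert_areas(areas, stores)
```

    Switching between the two scenarios in the study amounts to passing a different `stores` list (chain stores only, versus chain plus full-service stores), which is what keeps the method transparent and reproducible.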

  14. A New Efficient Analytical Method for Picolinate Ion Measurements in Complex Aqueous Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Parazols, M.; Dodi, A. [CEA Cadarache, Lab Anal Radiochim and Chim, DEN, F-13108 St Paul Les Durance (France)

    2010-07-01

    This study focuses on the development of a new simple but sensitive, fast and quantitative liquid chromatography method for picolinate ion measurement in high ionic strength aqueous solutions. It involves cation separation over a chromatographic CS16 column using methane sulfonic acid as a mobile phase and detection by UV absorbance (254 nm). The CS16 column is a high-capacity stationary phase exhibiting both cation-exchange and reversed-phase (RP) properties. It allows interaction with picolinate ions, which are in their zwitterionic form at the pH of the mobile phase (1.3-1.7). Analysis is performed in 30 min with a detection limit of about 0.05 µM and a quantification limit of about 0.15 µM. Moreover, this analytical technique has been successfully tested on complex aqueous samples from an effluent treatment facility. (authors)

  15. Simple method for determining binding energies of fullerene and complex atomic negative ions

    Science.gov (United States)

    Felfli, Zineb; Msezane, Alfred

    2017-04-01

    A robust potential which fully embeds the vital core polarization interaction has been used in the Regge pole method to explore low-energy electron scattering from C60, Eu and Nb through total cross section (TCS) calculations. From the characteristic dramatically sharp resonances in the TCSs manifesting negative ion formation in these systems, we extracted the binding energies of the C60, Eu and Nb anions; they are found to be in outstanding agreement with the measured electron affinities of C60, Eu and Nb. Common among these systems, including the standard atomic Au, is the formation of their ground state negative ions at the second Ramsauer-Townsend (R-T) minima of their TCSs. Indeed, this is a signature of all the fullerenes and complex atoms considered thus far. Shape resonances, R-T minima and binding energies of the resultant anions are presented. This work was supported by U.S. DOE, Basic Energy Sciences, Office of Energy Research.

  16. Method of investigation of nuclear reactions in charge-nonsymmetrical muonic complexes

    CERN Document Server

    Bystritsky, V M; Penkov, F M

    1999-01-01

    A method for experimental determination of the nuclear fusion rates in the d mu He molecules in the states with J=0 and J=1 (J is the orbital moment of the system) and of the effective rate of transition between these states (rotational transition 1-0) is proposed. It is shown that information on the desired characteristics can be found from joint analysis of the time distribution and yield of products of nuclear fusion reactions in deuterium-helium muonic molecules and muonic X-ray obtained in experiments with the D sub 2 +He mixture at three (and more) appreciably different densities. The planned experiments with the D sub 2 +He mixture at the meson facility PSI (Switzerland) are optimized to gain more accurate information about the desired parameters on the assumption that different mechanisms for the 1-0 transition of the d mu He complex are realized. (author)

  17. Reliable simultaneous zymographic method of characterization of cellulolytic enzymes from fungal cellulase complex.

    Science.gov (United States)

    Dojnov, Biljana; Grujić, Marica; Vujčić, Zoran

    2015-08-01

    A method for zymographic detection of specific cellulases in a complex (endocellulase, exocellulase, and cellobiase) from crude fermentation extracts, after a single electrophoretic separation, is described in this paper. Cellulases were printed onto a membrane and, subsequently, onto a substrate gel. Cellobiase isoforms were detected on the membrane using esculine as substrate, endocellulase isoforms on a substrate gel with copolymerized carboxymethyl cellulose (CMC), while exocellulase isoforms were detected in the electrophoresis gel with 4-methylumbelliferyl-β-d-cellobioside (MUC). This can be a useful additional tool for monitoring and control of fungal cellulase production in industrial processes and fundamental research, screening for particular cellulase producers, or testing of new lignocellulose substrates. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Three-body Coulomb breakup of 11Li in the complex scaling method

    International Nuclear Information System (INIS)

    Myo, Takayuki; Aoyama, Shigeyoshi; Kato, Kiyoshi; Ikeda, Kiyomi

    2003-01-01

    Coulomb breakup strengths of 11Li into a three-body 9Li+n+n system are studied in the complex scaling method. We decompose the transition strengths into the contributions from three-body resonances, two-body "10Li+n" and three-body "9Li+n+n" continuum states. In the calculated results, we cannot find dipole resonances with a sharp decay width in 11Li. There is a low-energy enhancement in the breakup strength, which is produced by both the two- and three-body continuum states. The enhancement given by the three-body continuum states is found to have a strong connection to the halo structure of 11Li. The calculated breakup strength distribution is compared with the experimental data from MSU, RIKEN and GSI

  19. The application of method supplier’s complex evaluation. Case study

    Directory of Open Access Journals (Sweden)

    Ekaterina Chytilová

    2012-01-01

    Full Text Available The main goal of this article is to illustrate the evaluation of bidders using the Method of complex evaluation of suppliers (MCE). Supplier evaluation has become increasingly important in supply chain management. For SMEs with discontinuous custom manufacturing, supplier evaluation at the first stage becomes a priority for maintaining and enhancing the competitiveness of output and overall competitiveness. This article presents the conditions of supplier evaluation and the eliminations resulting from applying MCE to real enterprise data. MCE is oriented towards small and medium-sized enterprises with discontinuous make-to-order manufacturing. The research addresses the selection procedure for existing suppliers at the first stage of the supply chain. Nationality and geographic location have no bearing on the application of MCE. An illustrative case study presents the evaluation process under specific conditions and subsequently demonstrates the viability of MCE.

  20. An efficient fringe integral equation method for optimizing the antenna location on complex bodies

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Meincke, Peter; Breinbjerg, Olav

    2001-01-01

    The radiation pattern of an antenna mounted nearby, or directly on, a complex three-dimensional (3D) structure can be significantly influenced by this structure. Integral equations combined with the method of moments (MoM) provide an accurate means for calculating the scattering from the structures...... in such applications. The structure is then modelled by triangular or rectangular surface patches with corresponding surface current expansion functions. A MoM matrix which is independent of the antenna location can be obtained by modelling the antenna as an impressed electric or magnetic source, e.g., a slot antenna...... can be modelled by a magnetic Hertzian dipole. For flush-mounted antennas, or antennas mounted in close vicinity of the scattering structure, the nearby impressed source induces a highly peaked surface current on the scattering structure. For the low-order basis functions usually applied...

  1. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base and details on aggregate and test methods employed, along with agency and co...

  2. Medicinal Chemistry Projects Requiring Imaginative Structure-Based Drug Design Methods.

    Science.gov (United States)

    Moitessier, Nicolas; Pottel, Joshua; Therrien, Eric; Englebienne, Pablo; Liu, Zhaomin; Tomberg, Anna; Corbeil, Christopher R

    2016-09-20

    Computational methods for docking small molecules to proteins are prominent in drug discovery. There are hundreds, if not thousands, of documented examples-and several pertinent cases within our research program. Fifteen years ago, our first docking-guided drug design project yielded nanomolar metalloproteinase inhibitors and illustrated the potential of structure-based drug design. Subsequent applications of docking programs to the design of integrin antagonists, BACE-1 inhibitors, and aminoglycosides binding to bacterial RNA demonstrated that available docking programs needed significant improvement. At that time, docking programs primarily considered flexible ligands and rigid proteins. We demonstrated that accounting for protein flexibility, employing displaceable water molecules, and using ligand-based pharmacophores improved the docking accuracy of existing methods-enabling the design of bioactive molecules. The success prompted the development of our own program, Fitted, implementing all of these aspects. The primary motivation has always been to respond to the needs of drug design studies; the majority of the concepts behind the evolution of Fitted are rooted in medicinal chemistry projects and collaborations. Several examples follow: (1) Searching for HDAC inhibitors led us to develop methods considering drug-zinc coordination and its effect on the pKa of surrounding residues. (2) Targeting covalent prolyl oligopeptidase (POP) inhibitors prompted an update to Fitted to identify reactive groups and form bonds with a given residue (e.g., a catalytic residue) when the geometry allows it. Fitted-the first fully automated covalent docking program-was successfully applied to the discovery of four new classes of covalent POP inhibitors. As a result, efficient stereoselective syntheses of a few screening hits were prioritized rather than synthesizing large chemical libraries-yielding nanomolar inhibitors. 
(3) In order to study the metabolism of POP inhibitors by

  3. Monitoring Freeze Thaw Transitions in Arctic Soils using Complex Resistivity Method

    Science.gov (United States)

    Wu, Y.; Hubbard, S. S.; Ulrich, C.; Dafflon, B.; Wullschleger, S. D.

    2012-12-01

    The Arctic region, which is a sensitive system that has emerged as a focal point for climate change studies, is characterized by a large amount of stored carbon and a rapidly changing landscape. Seasonal freeze-thaw transitions in the Arctic alter subsurface biogeochemical processes that control greenhouse gas fluxes from the subsurface. Our ability to monitor freeze thaw cycles and associated biogeochemical transformations is critical to the development of process rich ecosystem models, which are in turn important for gaining a predictive understanding of Arctic terrestrial system evolution and feedbacks with climate. In this study, we conducted both laboratory and field investigations to explore the use of the complex resistivity method to monitor freeze thaw transitions of arctic soil in Barrow, AK. In the lab studies, freeze thaw transitions were induced on soil samples having different average carbon content through exposing the arctic soil to temperature controlled environments at +4 oC and -20 oC. Complex resistivity and temperature measurements were collected using electrical and temperature sensors installed along the soil columns. During the laboratory experiments, resistivity gradually changed over two orders of magnitude as the temperature was increased or decreased between -20 oC and 0 oC. Electrical phase responses at 1 Hz showed a dramatic and immediate response to the onset of freeze and thaw. Unlike the resistivity response, the phase response was found to be exclusively related to unfrozen water in the soil matrix, suggesting that this geophysical attribute can be used as a proxy for the monitoring of the onset and progression of the freeze-thaw transitions. Spectral electrical responses contained additional information about the controls of soil grain size distribution on the freeze thaw dynamics. 
Based on the demonstrated sensitivity of complex resistivity signals to the freeze thaw transitions, field complex resistivity data were collected over

  4. A systematic method for identifying vital areas at complex nuclear facilities.

    Energy Technology Data Exchange (ETDEWEB)

    Beck, David Franklin; Hockert, John

    2005-05-01

    Identifying the areas to be protected is an important part of the development of measures for physical protection against sabotage at complex nuclear facilities. In June 1999, the International Atomic Energy Agency published INFCIRC/225/Rev.4, 'The Physical Protection of Nuclear Material and Nuclear Facilities.' This guidance recommends that 'Safety specialists, in close cooperation with physical protection specialists, should evaluate the consequences of malevolent acts, considered in the context of the State's design basis threat, to identify nuclear material, or the minimum complement of equipment, systems or devices to be protected against sabotage.' This report presents a structured, transparent approach for identifying the areas that contain this minimum complement of equipment, systems, and devices to be protected against sabotage that is applicable to complex nuclear facilities. The method builds upon safety analyses to develop sabotage fault trees that reflect sabotage scenarios that could cause unacceptable radiological consequences. The sabotage actions represented in the fault trees are linked to the areas from which they can be accomplished. The fault tree is then transformed (by negation) into its dual, the protection location tree, which reflects the sabotage actions that must be prevented in order to prevent unacceptable radiological consequences. The minimum path sets of this fault tree dual yield, through the area linkage, sets of areas, each of which contains nuclear material, or a minimum complement of equipment, systems or devices that, if protected, will prevent sabotage. This method also provides guidance for the selection of the minimum path set that permits optimization of the trade-offs among physical protection effectiveness, safety impact, cost and operational impact.
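
    The negation step described above — transforming the sabotage fault tree into its dual protection location tree and extracting minimal path sets — can be sketched on a toy tree. This is an illustrative reconstruction under assumed inputs, not the report's implementation; the gate structure and action names are hypothetical:

```python
from itertools import product

def cut_sets(node):
    # Enumerate cut sets of a fault tree given as nested tuples.
    # Leaves are action names (strings); gates are ('AND', ...) or ('OR', ...).
    if isinstance(node, str):
        return [frozenset([node])]
    gate, *kids = node
    kid_sets = [cut_sets(k) for k in kids]
    if gate == 'OR':
        return [s for sets in kid_sets for s in sets]
    # AND: union one cut set from each child, over all combinations.
    return [frozenset().union(*combo) for combo in product(*kid_sets)]

def minimise(sets):
    # Keep only sets with no proper subset among the others.
    return [s for s in sets if not any(t < s for t in sets)]

def dual(node):
    # Negation: swap AND and OR gates to obtain the protection location tree.
    if isinstance(node, str):
        return node
    gate, *kids = node
    return ('AND' if gate == 'OR' else 'OR',) + tuple(dual(k) for k in kids)

# Toy sabotage tree: unacceptable release if (A and B) or C, where each letter
# stands for a sabotage action linked to a plant area (hypothetical example).
tree = ('OR', ('AND', 'A', 'B'), 'C')
paths = minimise(cut_sets(dual(tree)))
print(sorted(sorted(s) for s in paths))  # [['A', 'C'], ['B', 'C']]
```

    Each minimal path set is a candidate set of areas whose protection prevents the top event; the trade-off analysis described in the report then selects among such sets.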

  5. A numerical calculation method for flow discretisation in complex geometry with body-fitted grids

    International Nuclear Information System (INIS)

    Jin, X.

    2001-04-01

    A numerical calculation method based on body-fitted grids is developed in this work for computational fluid dynamics in complex geometry. The method solves the conservation equations in a general nonorthogonal coordinate system which matches the curvilinear boundary. The nonorthogonal, patched grid is generated by a grid generator which solves algebraic equations. By means of an interface, its geometrical data can be used by this method. The conservation equations are transformed from the Cartesian system to a general curvilinear system, keeping the physical Cartesian velocity components as dependent variables. Using a staggered arrangement of variables, the three Cartesian velocity components are defined on every cell surface. Thus the coupling between pressure and velocity is ensured, and numerical oscillations are avoided. The contravariant velocity for calculating the mass flux on a cell surface results from the dependent Cartesian velocity components. After discretisation and linear interpolation, a three-dimensional 19-point pressure equation is found. Using an explicit treatment for the cross-derivative terms, it reduces to the usual 7-point equation. Under the same data and process structure, this method is compatible with the code FLUTAN using Cartesian coordinates. In order to verify this method, several laminar flows are simulated in orthogonal grids at tilted space directions and in nonorthogonal grids with variations of cell angles. The simulated flow types include various duct flows, transient heat conduction, natural convection in a chimney and natural convection in cavities. The results are in very good agreement with analytical solutions or empirical data. Convergence for highly nonorthogonal grids is obtained. After the successful validation of this method, it is applied to a reactor safety case. A transient natural convection flow for an optional sump cooling concept SUCO is simulated. The numerical result is comparable with the

  6. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    Science.gov (United States)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a sensitivity analysis method applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes.
Once the processes that exert the major influence in
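
    A one-at-a-time perturbation at the process level, in the spirit of the proposed method, can be sketched as follows (the toy model and process names are hypothetical; this is not the authors' implementation):

```python
def process_sensitivity(model, processes, delta=0.05):
    # One-at-a-time sensitivity at the process level: scale each process by
    # (1 + delta) and report the normalised change in model output, i.e. the
    # elasticity (relative output change per relative process change).
    base = model({p: 1.0 for p in processes})
    sens = {}
    for p in processes:
        scales = {q: 1.0 for q in processes}
        scales[p] = 1.0 + delta
        sens[p] = (model(scales) - base) / (base * delta)
    return sens

# Hypothetical toy ecosystem model: output driven by two "processes".
def toy_model(s):
    growth = 10.0 * s['photosynthesis']
    loss = 4.0 * s['respiration']
    return growth - loss

sens = process_sensitivity(toy_model, ['photosynthesis', 'respiration'])
```

    Ranking the resulting elasticities identifies which processes exert the major influence on the output, without opening the "black box" of each process's internals.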

  7. A FEM-based method to determine the complex material properties of piezoelectric disks.

    Science.gov (United States)

    Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C

    2014-08-01

    Numerical simulations allow the modeling of piezoelectric devices and ultrasonic transducers. However, the accuracy of the results is limited by the precision with which the elastic, dielectric and piezoelectric properties of the piezoelectric material are known. To introduce the energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists of obtaining the material properties that minimize the error between experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter over a set of resonant modes. The sensitivity results are used to implement a preliminary algorithm that approaches the solution, in order to avoid the search being trapped in a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement both for the electrical impedance curve and for the displacement profile over the disk surface. The agreement between numerical and experimental displacement profiles shows that, although only the electrical impedance curve is
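
    The core of the identification step — searching for parameter values that minimize the misfit between measured and simulated impedance curves — can be illustrated with a toy one-parameter lumped resonator standing in for the FEM forward model (all values hypothetical):

```python
import math

def model_impedance(freqs, f0, q):
    # Toy lumped-resonator |Z| with resonance frequency f0 and quality factor q,
    # standing in for the FEM forward model of the disk.
    return [1.0 / math.sqrt((1 - (f / f0) ** 2) ** 2 + (f / (f0 * q)) ** 2)
            for f in freqs]

def fit_loss(freqs, measured, f0, q_grid):
    # Pick the q minimising the summed squared misfit between the measured and
    # modelled curves, mirroring the impedance-matching step of the method.
    def err(q):
        return sum((a - b) ** 2
                   for a, b in zip(model_impedance(freqs, f0, q), measured))
    return min(q_grid, key=err)

freqs = [0.5, 0.8, 0.9, 1.1, 1.2, 1.5]        # normalised frequencies
measured = model_impedance(freqs, 1.0, 40.0)  # synthetic "measurement"
best_q = fit_loss(freqs, measured, 1.0, [10.0, 20.0, 40.0, 80.0])
print(best_q)  # → 40.0
```

    In the paper's method the forward model is a full FEM simulation and several complex constants are fitted at once, guided by the per-mode sensitivity analysis; the objective function, however, has this same measured-versus-modelled least-squares form.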

  8. Only one ATP-binding DnaX subunit is required for initiation complex formation by the Escherichia coli DNA polymerase III holoenzyme.

    Science.gov (United States)

    Wieczorek, Anna; Downey, Christopher D; Dallmann, H Garry; McHenry, Charles S

    2010-09-17

    The DnaX complex (DnaX3δδ'χψ) within the Escherichia coli DNA polymerase III holoenzyme serves to load the dimeric sliding clamp processivity factor, β2, onto DNA. The complex contains three DnaX subunits, which occur in two forms: τ and the shorter γ, produced by translational frameshifting. Ten forms of E. coli DnaX complex containing all possible combinations of wild-type or a Walker A motif K51E variant τ or γ have been reconstituted and rigorously purified. DnaX complexes containing three DnaX K51E subunits do not bind ATP. Comparison of their ability to support formation of initiation complexes, as measured by processive replication by the DNA polymerase III holoenzyme, indicates a minimal requirement for one ATP-binding DnaX subunit. DnaX complexes containing two mutant DnaX subunits support DNA synthesis at about two-thirds the level of their wild-type counterparts. β2 binding (determined functionally) is diminished 12-30-fold for DnaX complexes containing two K51E subunits, suggesting that multiple ATPs must be bound to place the DnaX complex into a conformation with maximal affinity for β2. DNA synthesis activity can be restored by increased concentrations of β2. In contrast, severe defects in ATP hydrolysis are observed upon introduction of a single K51E DnaX subunit. Thus, ATP binding, hydrolysis, and the ability to form initiation complexes are not tightly coupled. These results suggest that although ATP hydrolysis likely enhances β2 loading, it is not absolutely required in a mechanistic sense for formation of functional initiation complexes.

  9. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    Full Text Available The paper considers the task of generating the requirements for, and creating, a calibration target for automated microscopy systems (AMS) of biomedical specimens, to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particularly useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. A study of the influence of the number and set of color fields included in the calibration target on color correction quality for microscopic images was also performed. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time, it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
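
    A simplified first-order version of the fitting step can be sketched as ridge-regularized least squares that maps measured RGB values of calibration fields to reference values, one output channel at a time. The 3rd-order coordinate combinations favored by the study are omitted for brevity, and the calibration data below are hypothetical:

```python
def solve(a, b):
    # Gauss-Jordan elimination with partial pivoting for a small linear system.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_ridge(x, y, lam=1e-3):
    # Ridge-regularised least squares: one column of the colour-correction
    # matrix, mapping measured RGB feature vectors x to a target channel y.
    n = len(x[0])
    ata = [[sum(r[i] * r[j] for r in x) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    atb = [sum(r[i] * t for r, t in zip(x, y)) for i in range(n)]
    return solve(ata, atb)

# Hypothetical calibration data: measured RGB of target fields vs reference R.
measured = [[0.9, 0.1, 0.0], [0.2, 0.8, 0.1], [0.1, 0.2, 0.7], [0.5, 0.5, 0.5]]
reference_r = [1.0, 0.15, 0.05, 0.55]
coeffs = fit_ridge(measured, reference_r)
corrected = [sum(c * v for c, v in zip(coeffs, row)) for row in measured]
```

    Higher-order methods extend the feature vector with products of the colour coordinates before the same fit; the regularization parameter `lam` plays the role chosen experimentally in the study.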

  10. Detailed Simulation of Complex Hydraulic Problems with Macroscopic and Mesoscopic Mathematical Methods

    Directory of Open Access Journals (Sweden)

    Chiara Biscarini

    2013-01-01

    Full Text Available The numerical simulation of fast-moving fronts originating from dam or levee breaches is a challenging task for small scale engineering projects. In this work, the use of fully three-dimensional Navier-Stokes (NS) equations and the lattice Boltzmann method (LBM) is proposed for testing the validity of, respectively, macroscopic and mesoscopic mathematical models. Macroscopic simulations are performed employing an open-source computational fluid dynamics (CFD) code that solves the NS equations combined with the volume of fluid (VOF) multiphase method to represent free-surface flows. The mesoscopic model is a front-tracking experimental variant of the LBM. In the proposed LBM the liquid-gas interface is represented as a surface with zero thickness that handles the passage of the density field from the light to the dense phase and vice versa. A single set of LBM equations represents the liquid phase, while the free surface is characterized by an additional variable, the liquid volume fraction. Case studies show advantages and disadvantages of the proposed LBM and NS approaches with specific regard to the computational efficiency and accuracy in dealing with the simulation of flows through complex geometries. In particular, the validation of the model application is developed by simulating the flow propagating through a synthetic urban setting and comparing results with analytical and experimental laboratory measurements.

  11. A method for the determination of ascorbic acid using the iron(II)-pyridine-dimethylglyoxime complex

    Energy Technology Data Exchange (ETDEWEB)

    Arya, S. P.; Mahajan, M. [Haryana, Kurukshetra Univ. (India). Dept. of Chemistry

    1998-05-01

    A simple and rapid spectrophotometric method for the determination of ascorbic acid is proposed. Ascorbic acid reduces iron(III) to iron(II), which forms a red colored complex with dimethylglyoxime in the presence of pyridine. The absorbance of the resulting solution is measured at 514 nm and a linear relationship between absorbance and concentration of ascorbic acid is observed up to 14 µg ml⁻¹. Studies on the interference of substances usually associated with ascorbic acid have been carried out and the applicability of the method has been tested by analysing pharmaceutical preparations of vitamin C.
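
    The reported linear absorbance-concentration relationship implies a standard calibration-curve workflow: fit a line to standards, then back-calculate unknowns. A sketch with hypothetical standards inside the stated linear range (≤ 14 µg/ml):

```python
def linear_fit(x, y):
    # Ordinary least-squares slope/intercept for a calibration line A = m*c + b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return m, my - m * mx

# Hypothetical calibration standards (concentrations in µg/ml of ascorbic acid,
# absorbances at 514 nm); not the paper's measured values.
conc = [2.0, 4.0, 8.0, 12.0]
absorb = [0.11, 0.22, 0.44, 0.66]
slope, intercept = linear_fit(conc, absorb)
unknown = (0.33 - intercept) / slope  # back-calculate an unknown sample
print(round(unknown, 2))  # → 6.0
```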

  12. HS-GC-MS method for the analysis of fragrance allergens in complex cosmetic matrices.

    Science.gov (United States)

    Desmedt, B; Canfyn, M; Pype, M; Baudewyns, S; Hanot, V; Courselle, P; De Beer, J O; Rogiers, V; De Paepe, K; Deconinck, E

    2015-01-01

    Potential allergenic fragrances are part of the Cosmetic Regulation with labelling and concentration restrictions. This means that they have to be declared on the ingredients list, when their concentration exceeds the labelling limit of 10 ppm or 100 ppm for leave-on or rinse-off cosmetics, respectively. Labelling is important regarding consumer safety. In this way, sensitised people towards fragrances might select their products based on the ingredients list to prevent elicitation of an allergic reaction. It is therefore important to quantify potential allergenic ingredients in cosmetic products. An easy to perform liquid extraction was developed, combined with a new headspace GC-MS method. The latter was capable of analysing 24 volatile allergenic fragrances in complex cosmetic formulations, such as hydrophilic (O/W) and lipophilic (W/O) creams, lotions and gels. This method was successfully validated using the total error approach. The trueness deviations for all components were smaller than 8%, and the expectation tolerance limits did not exceed the acceptance limits of ± 20% at the labelling limit. The current methodology was used to analyse 18 cosmetic samples that were already identified as being illegal on the EU market for containing forbidden skin whitening substances. Our results showed that these cosmetic products also contained undeclared fragrances above the limit value for labelling, which imposes an additional health risk for the consumer. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than the conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater due to the reduced distinctiveness of each element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements constituting a computer-based display

  14. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    International Nuclear Information System (INIS)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun; Park, Jin Kyun

    2012-01-01

    The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than the conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater because each element is less distinctive. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group elements that share attributes such as shape, color or pattern (the principle of similarity). Therefore, it is necessary to consider not only the human operator's perception but also the number of elements constituting a computer-based display

  15. Development of complex electrokinetic decontamination method for soil contaminated with uranium

    International Nuclear Information System (INIS)

    Kim, Gye-Nam; Kim, Seung-Soo; Park, Hye-Min; Kim, Wan-Suk; Moon, Jei-Kwon; Hyeon, Jay-Hyeok

    2012-01-01

    A 520 L complex electrokinetic soil decontamination system was manufactured to clean up uranium-contaminated soils from Korean nuclear facilities. Decontamination experiments were carried out to remove more than 95% of the uranium from the radioactive soil through soil washing and electrokinetic technology. To reduce the generation of large quantities of metal oxides at the cathode, a pH controller was used to keep the pH of the electrolyte waste solution between 0.5 and 1, favoring the formation of uranyl ions. More than 80% of the metal oxides were removed through pre-washing, the electrolyte waste solution was circulated by a pump, and a metal oxide separator filtered out the metal oxide particles. 80–85% of the uranium was removed from the soil by soil washing as part of the pre-treatment. When the initial uranium concentration of the soil was 21.7 Bq/g, the required electrokinetic decontamination time was 25 days. When the initial concentration of 238 U in the soil was higher, a longer decontamination time was needed, but the removal rate of 238 U from the soil was also higher.

  16. Principal Physicochemical Methods Used to Characterize Dendrimer Molecule Complexes Used as Genetic Therapy Agents, Nanovaccines or Drug Carriers.

    Science.gov (United States)

    Alberto, Rodríguez Fonseca Rolando; Joao, Rodrigues; de Los Angeles, Muñoz-Fernández María; Alberto, Martínez Muñoz; Manuel Jonathan, Fragoso Vázquez; José, Correa Basurto

    2017-08-30

    Nanomedicine is the application of nanotechnology to medicine. This field is related to the study of nanodevices and nanomaterials applied to various medical uses, such as improving the pharmacological properties of different molecules. Dendrimers are synthetic nanoparticles whose physicochemical properties vary according to their chemical structure. These molecules have been extensively investigated as drug nanocarriers to improve drug solubility and as sustained-release systems. New therapies such as gene therapy and the development of nanovaccines can be improved by the use of dendrimers. The biophysical and physicochemical characterization of nucleic acid/peptide-dendrimer complexes is crucial to identify their functional properties prior to biological evaluation. In that sense, it is necessary to first identify whether the peptide-dendrimer or nucleic acid-dendrimer complexes can be formed and whether the complex can dissociate under the appropriate conditions at the target cells. In addition, biophysical and physicochemical characterization is required to determine how long the complexes remain stable, what proportion of peptide or nucleic acid is required to form the complex or saturate the dendrimer, and the size of the complex formed. In this review, we present the latest information on characterization systems for dendrimer-nucleic acid, dendrimer-peptide and dendrimer-drug complexes with several biotechnological and pharmacological applications.

  17. A novel pre-oxidation method for elemental mercury removal utilizing a complex vaporized absorbent

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Yi, E-mail: zhaoyi9515@163.com; Hao, Runlong; Guo, Qing

    2014-09-15

    Graphical abstract: - Highlights: • An innovative liquid-phase complex absorbent (LCA) for Hg{sup 0} removal was prepared. • A novel integrative process for Hg{sup 0} removal was proposed. • The simultaneous removal efficiencies of SO{sub 2}, NO and Hg{sup 0} were 100%, 79.5% and 80.4%, respectively. • The reaction mechanism of simultaneous removal of SO{sub 2}, NO and Hg{sup 0} was proposed. - Abstract: A novel semi-dry integrative method for elemental mercury (Hg{sup 0}) removal has been proposed in this paper, in which Hg{sup 0} was initially pre-oxidized by a vaporized liquid-phase complex absorbent (LCA) composed of a Fenton reagent, peracetic acid (CH{sub 3}COOOH) and sodium chloride (NaCl), after which Hg{sup 2+} was absorbed by the resultant Ca(OH){sub 2}. The experimental results indicated that CH{sub 3}COOOH and NaCl were the best additives for Hg{sup 0} oxidation. Among the influencing factors, the pH of the LCA and the adding rate of the LCA significantly affected the Hg{sup 0} removal. The coexisting gases, SO{sub 2} and NO, were characterized as either increasing or inhibiting in the removal process, depending on their concentrations. Under optimal reaction conditions, the efficiency for the single removal of Hg{sup 0} was 91%. Under identical conditions, the efficiencies of the simultaneous removal of SO{sub 2}, NO and Hg{sup 0} were 100%, 79.5% and 80.4%, respectively. Finally, the reaction mechanism for the simultaneous removal of SO{sub 2}, NO and Hg{sup 0} was proposed based on the characteristics of the removal products as determined by X-ray diffraction (XRD), atomic fluorescence spectrometry (AFS), the analysis of the electrode potentials, and through data from related research references.

  18. A novel pre-oxidation method for elemental mercury removal utilizing a complex vaporized absorbent

    International Nuclear Information System (INIS)

    Zhao, Yi; Hao, Runlong; Guo, Qing

    2014-01-01

    Graphical abstract: - Highlights: • An innovative liquid-phase complex absorbent (LCA) for Hg 0 removal was prepared. • A novel integrative process for Hg 0 removal was proposed. • The simultaneous removal efficiencies of SO 2 , NO and Hg 0 were 100%, 79.5% and 80.4%, respectively. • The reaction mechanism of simultaneous removal of SO 2 , NO and Hg 0 was proposed. - Abstract: A novel semi-dry integrative method for elemental mercury (Hg 0 ) removal has been proposed in this paper, in which Hg 0 was initially pre-oxidized by a vaporized liquid-phase complex absorbent (LCA) composed of a Fenton reagent, peracetic acid (CH 3 COOOH) and sodium chloride (NaCl), after which Hg 2+ was absorbed by the resultant Ca(OH) 2 . The experimental results indicated that CH 3 COOOH and NaCl were the best additives for Hg 0 oxidation. Among the influencing factors, the pH of the LCA and the adding rate of the LCA significantly affected the Hg 0 removal. The coexisting gases, SO 2 and NO, were characterized as either increasing or inhibiting in the removal process, depending on their concentrations. Under optimal reaction conditions, the efficiency for the single removal of Hg 0 was 91%. Under identical conditions, the efficiencies of the simultaneous removal of SO 2 , NO and Hg 0 were 100%, 79.5% and 80.4%, respectively. Finally, the reaction mechanism for the simultaneous removal of SO 2 , NO and Hg 0 was proposed based on the characteristics of the removal products as determined by X-ray diffraction (XRD), atomic fluorescence spectrometry (AFS), the analysis of the electrode potentials, and through data from related research references

  19. A method for developing standardised interactive education for complex clinical guidelines

    Directory of Open Access Journals (Sweden)

    Vaughan Janet I

    2012-11-01

    Full Text Available Abstract Background Although systematic use of the Perinatal Society of Australia and New Zealand internationally endorsed Clinical Practice Guideline for Perinatal Mortality (PSANZ-CPG) improves health outcomes, implementation is inadequate. Its complexity is a feature known to be associated with non-compliance. Interactive education is effective as a guideline implementation strategy, but lacks an agreed definition. SCORPIO is an educational framework containing interactive and didactic teaching, but has not previously been used to implement guidelines. Our aim was to transform the PSANZ-CPG into an education workshop to develop quality standardised interactive education acceptable to participants for learning skills in collaborative interprofessional care. Methods The workshop was developed using the construct of an educational framework (SCORPIO), the PSANZ-CPG, a transformation process and tutor training. After a pilot workshop with key target and stakeholder groups, modifications were made to this and subsequent workshops based on multisource written observations from interprofessional participants, tutors and an independent educator. This participatory action research process was used to monitor acceptability and educational standards. Standardised interactive education was defined as the attainment of content and teaching standards. Quantitative analysis of positive feedback, expressed as a percentage of total feedback, was used to derive a total quality score. Results Eight workshops were held with 181 participants and 15 different tutors. Five versions resulted from the action research methodology. Thematic analysis of multisource observations identified eight recurring education themes or quality domains used for standardisation. The two content domains were curriculum and alignment with the guideline; the six teaching domains were overload, timing, didacticism, relevance, reproducibility and participant engagement. Engagement was the most

  20. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
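The direct risk standardisation described above can be sketched in a few lines. This is an illustrative sketch under my own assumptions (function name, quantile binning, and equal treatment of empty categories are not from the paper): each admission carries a risk predicted by the casemix model, admissions are grouped into risk categories, and each centre's category-specific event rates are combined using common population weights.

```python
import numpy as np

def direct_risk_standardise(risk, event, centre, bins=10):
    """Directly risk-standardised event rate per centre (sketch).

    risk   : model-predicted probability of the event per admission
             (from any casemix model, however many combinations it has)
    event  : 0/1 observed outcome per admission
    centre : centre label per admission
    """
    risk, event = np.asarray(risk, float), np.asarray(event, float)
    centre = np.asarray(centre)
    # Risk categories: quantile bins of the predicted risk
    edges = np.quantile(risk, np.linspace(0.0, 1.0, bins + 1))
    cat = np.clip(np.searchsorted(edges, risk, side="right") - 1, 0, bins - 1)
    # Common weights: share of the whole population in each risk category
    weights = np.bincount(cat, minlength=bins) / len(risk)
    rates = {}
    for c in np.unique(centre):
        m = centre == c
        rate = 0.0
        for k in range(bins):
            mk = m & (cat == k)
            if mk.any():  # category-specific event rate at this centre
                rate += weights[k] * event[mk].mean()
        rates[c] = rate
    return rates
```

With common weights, two centres whose category-specific rates mirror each other receive the same standardised rate, which is the fairness property the method aims for.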

  1. The NDUFB6 subunit of the mitochondrial respiratory chain complex I is required for electron transfer activity: A proof of principle study on stable and controlled RNA interference in human cell lines

    Energy Technology Data Exchange (ETDEWEB)

    Loublier, Sandrine; Bayot, Aurelien; Rak, Malgorzata; El-Khoury, Riyad; Benit, Paule [Inserm U676, Hopital Robert Debre, F-75019 Paris (France); Universite Paris 7, Faculte de medecine Denis Diderot, IFR02 Paris (France); Rustin, Pierre, E-mail: pierre.rustin@inserm.fr [Inserm U676, Hopital Robert Debre, F-75019 Paris (France); Universite Paris 7, Faculte de medecine Denis Diderot, IFR02 Paris (France)

    2011-10-22

    Highlights: → NDUFB6 is required for activity of mitochondrial complex I in human cell lines. → Lentivirus-based RNA interference results in frequent off-target insertions. → Flp-In recombinase mediated miRNA insertion allows gene-specific extinction. -- Abstract: The molecular bases of inherited deficiencies of mitochondrial respiratory chain complex I are still unknown in a high proportion of patients. Among the 45 subunits making up this large complex, more than half have unknown function(s). Understanding the function of these subunits would contribute to our knowledge of mitochondrial physiology but might also reveal that some of these subunits are not required for the catalytic activity of the complex. A direct consequence of this finding would be a reduction in the number of candidate genes to be sequenced in patients with decreased complex I activity. In this study, we tested two different methods to stably extinguish complex I subunit expression in cultured cells. We first found that lentivirus-mediated shRNA expression frequently resulted in the unpredicted extinction of additional gene(s) beside the targeted ones. This can be ascribed to uncontrolled insertions of genetic material in the genome of the host cell. This approach thus appeared inappropriate for studying unknown functions of a gene. Next, we found it possible to specifically extinguish a CI subunit gene by direct insertion of a miR targeting CI subunits at a Flp site (HEK293 Flp-In cells). By using this strategy we unambiguously demonstrated that the NDUFB6 subunit is required for complex I activity, and defined conditions suitable for undertaking a systematic and stable extinction of the different supernumerary subunits in human cells.

  2. The NDUFB6 subunit of the mitochondrial respiratory chain complex I is required for electron transfer activity: A proof of principle study on stable and controlled RNA interference in human cell lines

    International Nuclear Information System (INIS)

    Loublier, Sandrine; Bayot, Aurelien; Rak, Malgorzata; El-Khoury, Riyad; Benit, Paule; Rustin, Pierre

    2011-01-01

    Highlights: → NDUFB6 is required for activity of mitochondrial complex I in human cell lines. → Lentivirus-based RNA interference results in frequent off-target insertions. → Flp-In recombinase mediated miRNA insertion allows gene-specific extinction. -- Abstract: The molecular bases of inherited deficiencies of mitochondrial respiratory chain complex I are still unknown in a high proportion of patients. Among the 45 subunits making up this large complex, more than half have unknown function(s). Understanding the function of these subunits would contribute to our knowledge of mitochondrial physiology but might also reveal that some of these subunits are not required for the catalytic activity of the complex. A direct consequence of this finding would be a reduction in the number of candidate genes to be sequenced in patients with decreased complex I activity. In this study, we tested two different methods to stably extinguish complex I subunit expression in cultured cells. We first found that lentivirus-mediated shRNA expression frequently resulted in the unpredicted extinction of additional gene(s) beside the targeted ones. This can be ascribed to uncontrolled insertions of genetic material in the genome of the host cell. This approach thus appeared inappropriate for studying unknown functions of a gene. Next, we found it possible to specifically extinguish a CI subunit gene by direct insertion of a miR targeting CI subunits at a Flp site (HEK293 Flp-In cells). By using this strategy we unambiguously demonstrated that the NDUFB6 subunit is required for complex I activity, and defined conditions suitable for undertaking a systematic and stable extinction of the different supernumerary subunits in human cells.

  3. Application of spatial methods to identify areas with lime requirement in eastern Croatia

    Science.gov (United States)

    Bogunović, Igor; Kisic, Ivica; Mesic, Milan; Zgorelec, Zeljka; Percin, Aleksandra; Pereira, Paulo

    2016-04-01

    With acid soils making up more than 50% of all agricultural land in Croatia, soil acidity is recognized as a big problem. Low soil pH leads to a series of negative phenomena in plant production, and therefore liming, recommended on the basis of soil analysis, is a compulsory measure for the reclamation of acid soils. The need for liming is often erroneously determined only on the basis of soil pH, because the determination of cation exchange capacity, hydrolytic acidity and base saturation is a major cost to producers. Therefore, in Croatia, as in some other countries, the amount of liming material needed to ameliorate acid soils is calculated from their hydrolytic acidity. The purpose of this study was to test several interpolation methods to identify the best spatial predictor of hydrolytic acidity, and to determine whether multivariate geostatistics can reduce the number of samples needed to determine hydrolytic acidity without significantly reducing the accuracy of the spatial distribution of the liming requirement. Soil pH (in KCl) and hydrolytic acidity (Y1) were determined in 1004 samples (from 0-30 cm) randomly collected in agricultural fields near Orahovica in eastern Croatia. This study tested 14 univariate interpolation models (part of the ArcGIS software package) in order to provide the most accurate spatial map of hydrolytic acidity on the basis of: all samples (Y1 100%), and datasets with 15% (Y1 85%), 30% (Y1 70%) and 50% fewer samples (Y1 50%). In parallel with the univariate interpolation methods, the precision of the spatial distribution of Y1 was tested by the co-kriging method with exchangeable acidity (pH in KCl) as a covariate. The soils of the studied area had an average pH (KCl) of 4.81, while the average Y1 was 10.52 cmol+ kg-1. These data suggest that liming is necessary
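As a toy illustration of how candidate interpolators are compared (not the ArcGIS models actually used in the study, and the function names are my own), an inverse-distance-weighting predictor can be ranked by leave-one-out cross-validation error:

```python
import numpy as np

def idw_predict(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted prediction: one simple interpolator of
    the kind compared when mapping a soil property such as Y1."""
    # Pairwise distances, shape (n_known, n_query)
    d = np.linalg.norm(xy_known[:, None, :] - xy_query[None, :, :], axis=2)
    d = np.where(d == 0, 1e-12, d)  # guard exact coincidences
    w = 1.0 / d**power
    return (w * z_known[:, None]).sum(axis=0) / w.sum(axis=0)

def loo_rmse(xy, z, power=2.0):
    """Leave-one-out RMSE, a common yardstick for ranking interpolators."""
    errs = []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i
        pred = idw_predict(xy[keep], z[keep], xy[i:i + 1], power)[0]
        errs.append(pred - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

The same leave-one-out loop can wrap any interpolator (kriging, splines, co-kriging), so the method with the lowest RMSE on the sample set is chosen as the spatial predictor.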

  4. Hybrid RANS/LES method for high Reynolds numbers, applied to atmospheric flow over complex terrain

    DEFF Research Database (Denmark)

    Bechmann, Andreas; Sørensen, Niels N.; Johansen, Jeppe

    2007-01-01

    The use of Large-Eddy Simulation (LES) to predict wall-bounded flows has presently been limited to low Reynolds number flows. Since the number of computational grid points required to resolve the near-wall turbulent structures increases rapidly with Reynolds number, LES has been unattainable for flows at high Reynolds numbers. To reduce the computational cost of traditional LES, a hybrid method is proposed in which the near-wall eddies are modelled in a Reynolds-averaged sense. Close to walls the flow is treated with the RANS equations, and this layer acts as a wall model for the outer flow handled by LES. The well-known high Reynolds number two-equation k-ε turbulence model is used in the RANS layer, and the model automatically switches to a two-equation k-ε subgrid-scale stress model in the LES region. The approach can be used for flow over rough walls. To demonstrate the ability...

  5. Validation of quantitative analysis method for triamcinolone in ternary complexes by UV-Vis spectrophotometry

    Directory of Open Access Journals (Sweden)

    GEORGE DARLOS A. AQUINO

    2011-06-01

    Full Text Available Triamcinolone (TRI), a drug widely used in the treatment of ocular inflammatory diseases, is practically insoluble in water, which limits its use in eye drops. Cyclodextrins (CDs) have been used to increase the solubility or dissolution rate of drugs. The purpose of the present study was to validate a UV-Vis spectrophotometric method for quantitative analysis of TRI in inclusion complexes with beta-cyclodextrin (B-CD) associated with triethanolamine (TEA), forming a ternary complex. The proposed analytical method was validated with respect to the parameters established by the Brazilian regulatory National Agency of Sanitary Monitoring (ANVISA). The analytical measurements of absorbance were made at 242 nm, at room temperature, in a 1 cm path-length cuvette. The precision and accuracy studies were performed at five concentration levels (4, 8, 12, 18 and 20 μg.mL-1). The B-CD associated with TEA did not provoke any alteration in the photochemical behavior of TRI. The results for the measured analytical parameters showed the success of the method. The standard curve was linear (r2 > 0.999) in the concentration range from 2 to 24 μg.mL-1. The method achieved good precision levels in the inter-day (relative standard deviation, RSD < 3.4%) and reproducibility (RSD < 3.8%) tests. The accuracy was about 80%, and the pH changes introduced in the robustness study did not reveal any relevant interference at any of the studied concentrations. The experimental results demonstrate a simple, rapid and affordable UV-Vis spectrophotometric method that could be applied to the quantitation of TRI in this ternary complex. Keywords: Validation. Triamcinolone. Beta-cyclodextrin. UV-Vis spectrophotometry. Ternary complexes.
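The linearity (r2) and precision (RSD) figures cited above come from standard formulas, which can be sketched as follows (an illustrative sketch only; the function names and example numbers are mine, not the study's data):

```python
import numpy as np

def calibration_metrics(conc, absorbance):
    """Least-squares calibration line and coefficient of determination
    (r2) for a UV-Vis standard curve."""
    conc = np.asarray(conc, float)
    absorbance = np.asarray(absorbance, float)
    slope, intercept = np.polyfit(conc, absorbance, 1)
    fitted = slope * conc + intercept
    ss_res = np.sum((absorbance - fitted) ** 2)
    ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

def rsd_percent(replicates):
    """Relative standard deviation (%), the precision criterion reported
    for the inter-day and reproducibility tests."""
    r = np.asarray(replicates, float)
    return 100.0 * r.std(ddof=1) / r.mean()
```

Acceptance is then a matter of checking r2 against the linearity threshold (here > 0.999) and RSD against the precision limits.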

  6. Stiffeners in variational-difference method for calculating shells with complex geometry

    Directory of Open Access Journals (Sweden)

    Ivanov Vyacheslav Nikolaevich

    2014-05-01

    Full Text Available We have already considered the introduction of reinforcements in the variational-difference method (VDM) of analysis for shells of complex shape. At the moment only ribbed shells of revolution and shallow shells can be calculated with the help of the developed analytical and finite-difference methods. Ribbed shells of arbitrary shape can be calculated only using the finite element method (FEM). However, there are problems when using FEM which are absent in the finite- and variational-difference methods: rigid body motion; conforming trial functions; parameterization of a surface; independent stress-strain state. In this regard, stiffeners are introduced into VDM. VDM is based on the Lagrange principle, the principle of minimum total potential energy. The stress-strain state of the ribs is described by the Kirchhoff-Clebsch theory of curvilinear bars: tension, bending and torsion of the ribs are taken into account. The stress-strain state of the shell is described by the Kirchhoff-Love theory of thin elastic shells. The position of points of the middle surface is defined by curvilinear orthogonal coordinates α, β. Curved ribs are situated along coordinate lines. The strain energy of the ribs is added to the strain energy of the shell to account for the ribs. A matrix form of the strain energy of the ribs is formed similarly to the matrix form of the strain energy of the shell. A matrix of geometrical characteristics of a rib is formed from components of the matrices of geometric characteristics of the shell. A matrix of mechanical characteristics of a rib contains the rib's eccentricity and the geometrical characteristics of the rib's section. Derivatives of displacements in the strain vector are replaced with finite-difference relations after the middle surface of the shell is covered with a grid (grid lines coincide with the coordinate lines of principal curvatures). In this case the total potential energy functional becomes a function of the nodal displacements. Partial derivatives of unknown nodal displacements are
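The final step described, replacing derivatives with finite differences so that the total potential energy becomes a function of nodal displacements, can be illustrated with a one-dimensional analogue. This simplification (an axially loaded bar with fixed ends, and all names in it) is mine and is not part of the shell formulation:

```python
import numpy as np

def bar_energy(u, h, ea=1.0, q=1.0):
    """Discrete total potential energy of an axially loaded bar.
    The derivative u' in the strain term is replaced by the finite
    difference (u[i+1] - u[i]) / h, so the functional becomes an
    ordinary function of the nodal displacements u."""
    du = np.diff(u) / h
    strain_energy = 0.5 * ea * np.sum(du**2) * h
    load_work = q * np.sum(u) * h
    return strain_energy - load_work

def solve_bar(n=21, length=1.0, ea=1.0, q=1.0):
    """Stationarity of the discrete energy (dE/du_i = 0) yields a
    tridiagonal linear system for the interior nodal displacements."""
    h = length / (n - 1)
    m = n - 2  # interior nodes; both ends are fixed
    K = (ea / h) * (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
    f = q * h * np.ones(m)
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(K, f)
    return u
```

For a uniform load the discrete minimizer reproduces the exact parabola u(x) = q x (L - x) / (2 EA) at the nodes, showing how minimizing the discretized functional recovers the equilibrium displacements.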

  7. The endosomal sorting complex required for transport (ESCRT) is required for the sensitivity of yeast cells to nickel ions in Saccharomyces cerevisiae.

    Science.gov (United States)

    Luo, Chong; Cao, Chunlei; Jiang, Linghuo

    2016-05-01

    Nickel is one of the toxic environmental metal pollutants and is linked to various human diseases. In this study, through a functional genomics approach, we have identified 16 nickel-sensitive and 22 nickel-tolerant diploid deletion mutants of budding yeast genes, many of which are novel players in the regulation of nickel homeostasis. The 16 nickel-sensitive mutants are of genes mainly involved in protein folding, modification and destination and in cellular transport processes, while the 22 nickel-tolerant mutants are of genes encoding components of ESCRT complexes as well as protein factors involved in both cell wall integrity maintenance and the vacuolar protein sorting process. Consistent with their phenotypes, most of the nickel-sensitive mutants show reduced intracellular nickel contents, while the majority of the nickel-tolerant mutants show elevated intracellular nickel contents, as compared to the wild type in response to nickel stress. Our data provide a basis for understanding the regulation of nickel homeostasis and the molecular mechanisms of nickel-induced human pathogenesis.

  8. The use of modern methods for complex studies of the hydrotechnical structures of the Barents Sea region

    Directory of Open Access Journals (Sweden)

    Mel'nikov N. N

    2017-03-01

    Full Text Available The hydrotechnical structures (HTS) of the Barents region, comprising the bulk earth dams and tailings levees of mining enterprises, have been considered in terms of the requirements placed on the responsible entities. A brief review of HTS emergencies and accidents, with an analysis of their causes and geo-ecological consequences, has been carried out. The necessity and urgency of applying modern techniques for comprehensive research and monitoring of the HTS state have been shown: geo-fluid-mechanics computer modeling, subsurface georadar sensing, GPS geodetic measurements, and optical and radar satellite imagery. Joint use of GPR and satellite imagery, in combination with traditional engineering-geological, hydro-geological and geodetic studies, provides a more complete picture of the HTS state that takes into account local and regional geological and fluid-dynamic processes. A system structure for complex HTS research has been developed, creating a scientific and technical basis for studying the geologic-geophysical environment, shifts, deformations and force effects. This makes it possible to reveal hidden filtration and deformation zones in HTS at early stages of their formation and to make timely management decisions on the prevention and localization of any emergency. Application of modern methods for complex HTS research will provide operational information on their state, including parametric values of volumetric, angular and linear deformations and movements, and the intensity of natural and technogenic influence. The obtained data are to be integrated into the "Database and Parameters" geoportal, by means of which their logical processing and comparison to standard and extreme values are carried out. On this basis, an expert assessment of the current and expected state of the HTS is carried out, and operating decisions are made, including, when necessary, the development of preventive and protective measures

  9. Complexes of Usher proteins preassemble at the endoplasmic reticulum and are required for trafficking and ER homeostasis

    Directory of Open Access Journals (Sweden)

    Bernardo Blanco-Sánchez

    2014-05-01

    Full Text Available Usher syndrome (USH), the leading cause of hereditary combined hearing and vision loss, is characterized by sensorineural deafness and progressive retinal degeneration. Mutations in several different genes produce USH, but the proximal cause of sensory cell death remains mysterious. We adapted a proximity ligation assay to analyze associations among three of the USH proteins, Cdh23, Harmonin and Myo7aa, and the microtubule-based transporter Ift88 in zebrafish inner ear mechanosensory hair cells. We found that the proteins are in close enough proximity to form complexes and that these complexes preassemble at the endoplasmic reticulum (ER). Defects in any one of the three USH proteins disrupt formation and trafficking of the complex and result in diminished levels of the other proteins, generalized trafficking defects, and ER stress that triggers apoptosis. ER stress thus contributes to sensory hair cell loss and provides a new target to explore for protective therapies for USH.

  10. Simulations of Turbulent Flow Over Complex Terrain Using an Immersed-Boundary Method

    Science.gov (United States)

    DeLeon, Rey; Sandusky, Micah; Senocak, Inanc

    2018-02-01

    We present an immersed-boundary method to simulate high-Reynolds-number turbulent flow over the complex terrain of Askervein and Bolund Hills under neutrally-stratified conditions. We reconstruct both the velocity and the eddy-viscosity fields in the terrain-normal direction to produce turbulent stresses as would be expected from the application of a surface-parametrization scheme based on Monin-Obukhov similarity theory. We find that it is essential to be consistent in the underlying assumptions for the velocity reconstruction and the eddy-viscosity relation to produce good results. To this end, we reconstruct the tangential component of the velocity field using a logarithmic velocity profile and adopt the mixing-length model in the near-surface turbulence model. We use a linear interpolation to reconstruct the normal component of the velocity to enforce the impermeability condition. Our approach works well for both the Askervein and Bolund Hills when the flow is attached to the surface, but shows slight disagreement in regions of flow recirculation, despite capturing the flow reversal.
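The logarithmic reconstruction of the tangential velocity described above can be sketched as follows. This is a minimal sketch assuming a neutral-stratification log law with aerodynamic roughness length z0; the function names are my own, not from the paper:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def friction_velocity(u_ref, z_ref, z0):
    """Invert the log law u(z) = (u*/kappa) * ln(z/z0) for u*,
    given a resolved velocity u_ref at height z_ref above the surface."""
    return KAPPA * u_ref / np.log(z_ref / z0)

def reconstruct_tangential(u_ref, z_ref, z0, z_target):
    """Log-law reconstruction of the tangential velocity at a point
    closer to the terrain surface, as needed at immersed-boundary
    forcing nodes (the normal component would instead be interpolated
    linearly to enforce impermeability)."""
    u_star = friction_velocity(u_ref, z_ref, z0)
    return (u_star / KAPPA) * np.log(z_target / z0)
```

Using the same log-law assumption for both the velocity reconstruction and the mixing-length eddy viscosity is the consistency the authors identify as essential.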

  11. Photoluminescent BaMoO4 nanopowders prepared by complex polymerization method (CPM)

    International Nuclear Information System (INIS)

    Azevedo Marques, Ana Paula de; Melo, Dulce M.A. de; Paskocimas, Carlos A.; Pizani, Paulo S.; Joya, Miryam R.; Leite, Edson R.; Longo, Elson

    2006-01-01

    The BaMoO4 nanopowders were prepared by the Complex Polymerization Method (CPM). The structural properties of the BaMoO4 powders were characterized by FTIR transmittance spectra, X-ray diffraction (XRD), Raman spectra, photoluminescence (PL) spectra and high-resolution scanning electron microscopy (HR-SEM). The XRD, FTIR and Raman data showed that BaMoO4 at 300 deg. C was disordered. At 400 deg. C and higher temperatures, crystalline scheelite-type BaMoO4 phases could be identified without the presence of additional phases, according to the XRD, FTIR and Raman data. The average crystallite sizes calculated from XRD, around 40 nm, showed a tendency to increase with temperature. The crystallite sizes obtained by HR-SEM were around 40-50 nm. The sample presenting the highest intensity of the red emission band was the one heat treated at 400 deg. C for 2 h, and the sample displaying the highest intensity of the green emission band was the one heat treated at 700 deg. C for 2 h. The CPM was shown to be a low-cost route for the production of BaMoO4 nanopowders, with the advantages of lower temperature, shorter processing time and reduced cost. The optical properties observed for BaMoO4 nanopowders suggest that this material is a highly promising candidate for photoluminescent applications
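    The abstract does not state how the ~40 nm crystallite sizes were computed from XRD, but the standard route for such estimates is the Scherrer equation; the peak parameters below are illustrative values chosen to land near the reported size, not data from the paper:

```python
import math


def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Estimate crystallite size D (in nm) from XRD peak broadening via the
    Scherrer equation: D = K * lambda / (beta * cos(theta)), where beta is
    the peak full width at half maximum in radians and K is the shape
    factor (~0.9 for roughly spherical crystallites)."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))


# Cu K-alpha radiation (0.15406 nm) and a hypothetical 0.21 deg FWHM peak
# near the scheelite (112) reflection at 2-theta ~ 26.5 deg:
size = scherrer_size(0.15406, 0.21, 26.5)
```

With these assumed inputs the estimate comes out close to the ~40 nm figure quoted in the abstract, though instrumental broadening would normally be subtracted from the measured FWHM first.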

  13. THE COMPLEX ANALYSIS METHOD OF SEMANTIC ASSOCIATIONS IN STUDYING THE STUDENTS’ CREATIVE ETHOS

    Directory of Open Access Journals (Sweden)

    P. A. Starikov

    2013-01-01

    The paper demonstrates sociological research findings concerning students' ideas of creativity, based on questionnaires and testing of natural science, humanities and technical students at Siberian Federal University over the period 2007-2011. The author suggests a new method of semantic association analysis in order to identify latent groups of notions related to the concept of creativity. The range of students' common opinions demonstrates a clear trend towards humanizing the idea of creativity, considering it the perfect mode of human existence, which coincides with the ideas of K. Rogers, A. Maslow and other scholars. Today's students associate creativity primarily with pleasure, self-development, self-expression, inspiration, improvisation and spontaneity; the resulting semantic complex incorporates such characteristics of creative work as goodness, abundance of energy, integrity, health, freedom and independence, self-development and spirituality. The obtained data prove the importance of the experience of inspiration in creative pedagogy; the research outcomes, along with continuing monitoring of students' attitudes to creativity, can help optimize the learning process. The author emphasizes the necessity of introducing special courses based on an integral approach (including social, philosophical, psychological, psycho-social and technical aspects and aimed at developing students' creative competence.

  14. Whirlin and PDZ domain-containing 7 (PDZD7) proteins are both required to form the quaternary protein complex associated with Usher syndrome type 2.

    Science.gov (United States)

    Chen, Qian; Zou, Junhuang; Shen, Zuolian; Zhang, Weiping; Yang, Jun

    2014-12-26

    Usher syndrome (USH) is the leading genetic cause of combined hearing and vision loss. Among the three USH clinical types, type 2 (USH2) occurs most commonly. USH2A, GPR98, and WHRN are three known causative genes of USH2, whereas PDZD7 is a modifier gene found in USH2 patients. The proteins encoded by these four USH genes have been proposed to form a multiprotein complex, the USH2 complex, due to interactions found among some of these proteins in vitro, their colocalization in vivo, and mutual dependence of some of these proteins for their normal in vivo localizations. However, evidence showing the formation of the USH2 complex is missing, and details on how this complex is formed remain elusive. Here, we systematically investigated interactions among the intracellular regions of the four USH proteins using colocalization, yeast two-hybrid, and pull-down assays. We show that multiple domains of the four USH proteins interact among one another. Importantly, both WHRN and PDZD7 are required for the complex formation with USH2A and GPR98. In this USH2 quaternary complex, WHRN prefers to bind to USH2A, whereas PDZD7 prefers to bind to GPR98. Interaction between WHRN and PDZD7 is the bridge between USH2A and GPR98. Additionally, the USH2 quaternary complex has a variable stoichiometry. These findings suggest that a non-obligate, short term, and dynamic USH2 quaternary protein complex may exist in vivo. Our work provides valuable insight into the physiological role of the USH2 complex in vivo and informs possible reconstruction of the USH2 complex for future therapy. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  15. Mapping the environmental and biogeographic complexity of the Amazon basin using remote sensing methods

    Science.gov (United States)

    Streher, A. S.; Cordeiro, C. L. O.; Silva, T. S. F.

    2017-12-01

    Mapping environmental envelopes onto geographical space has been classically important for understanding biogeographical patterns. Knowing the biotic and abiotic limits defining these envelopes, we can better understand the requirements limiting species distributions. Most present efforts in this regard have focused on single-species distribution models, but the current breadth and accessibility of quantitative, spatially explicit environmental information can also be explored from an environment-first perspective. We thus used remote sensing to determine the occurrence of environmental discontinuities in the Amazon region and evaluated whether such discontinuities may act as barriers that determine species distributions and range limits, forming clear environmental envelopes. We combined data on topography (SRTM), precipitation (CHIRPS), vegetation descriptors (PALSAR-1 backscattering, biomass, NDVI) and temperature (MODIS), using object-based image analysis and unsupervised learning to map environmental envelopes. We identified 14 environmental envelopes for the Amazon sensu latissimo region, mainly delimited by changes in vegetation, topography and precipitation. The resulting envelopes were compared to the distributions of 120 species of Trogonidae, Galbulidae, Bucconidae, Cebidae, Hylidae and Lecythidaceae, amounting to 22,649 occurrence records within the Amazon region. We determined species prevalence in each envelope by calculating the ratio between the species' relative frequency per envelope and the envelope's relative frequency (area) in the complete map; values closer to 1 indicate a high degree of prevalence. We found strong envelope associations (prevalence > 0.5) for 20 species (17% of analyzed taxa). Although several biogeographical and ecological factors influence the distribution of a species, our results show that not only geographical barriers but also modern environmental discontinuities may limit the distribution of some species, and may have also done so
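    A literal reading of the prevalence definition given in the abstract can be sketched as follows; the function name and the example numbers are hypothetical:

```python
def envelope_prevalence(records_in_envelope, total_records,
                        envelope_area, total_area):
    """Prevalence of a species in an environmental envelope, read literally
    from the abstract: the species' relative occurrence frequency in the
    envelope divided by the envelope's relative area in the complete map."""
    species_rel_freq = records_in_envelope / total_records
    envelope_rel_area = envelope_area / total_area
    return species_rel_freq / envelope_rel_area


# A species with 50 of its 100 records in an envelope covering 25% of the
# map is over-represented there:
p = envelope_prevalence(50, 100, 0.25, 1.0)
```

Under this literal definition, a species whose records are spread in proportion to envelope area scores exactly 1.0 in every envelope, and larger values indicate over-representation; the abstract treats values above 0.5 as a strong envelope association.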

  16. Scale Development and Initial Tests of the Multidimensional Complex Adaptive Leadership Scale for School Principals: An Exploratory Mixed Method Study

    Science.gov (United States)

    Özen, Hamit; Turan, Selahattin

    2017-01-01

    This study was designed to develop the Complex Adaptive Leadership for School Principals (CAL-SP) scale and examine its psychometric properties. It used an exploratory sequential mixed methods design (ES-MMD); both qualitative and quantitative methods were used to develop the questionnaire and assess its psychometric properties. This study…

  17. A method for the preparation of lipophilic macrocyclic technetium-99m complexes

    International Nuclear Information System (INIS)

    Troutner, D.E.; Volkert, W.A.

    1991-01-01

    A procedure for the preparation of technetium complexes applicable as diagnostic radiopharmaceuticals is suggested and documented with 27 examples. Technetium-99m is reacted with a suitable complexant selected from the class of alkylenamine oximes containing 2 or 3 carbon atoms in the alkylene group. The lipophilic macrocyclic complexes possess an amine, amide, carboxy, carboxy ester, hydroxy or alkoxy group or a suitable electron acceptor group. (M.D.). 7 tabs

  18. An auxiliary optimization method for complex public transit route network based on link prediction

    Science.gov (United States)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.
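    Structural similarity-based link prediction of the kind described above can be illustrated with two classic indices. The paper's actual index set and Jinan's network are not reproduced here, so the graph, node names and ranking helper below are toy assumptions:

```python
def common_neighbors(adj, u, v):
    """Common-neighbours score: the number of nodes adjacent to both u and v."""
    return len(adj[u] & adj[v])


def jaccard(adj, u, v):
    """Jaccard similarity: shared neighbours over the union of neighbours."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0


def rank_missing_links(adj, score=jaccard):
    """Score every non-adjacent node pair; high scores suggest missing (or
    future) links, while low scores on existing edges would flag spurious
    ones in the same framework."""
    nodes = sorted(adj)
    pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
             if v not in adj[u]]
    return sorted(pairs, key=lambda p: score(adj, *p), reverse=True)


# A toy Space-R-style network: nodes are stops, edges link stops that share
# a route (topology is illustrative only).
adj = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}
ranked = rank_missing_links(adj)
```

The abstract's observation applies directly here: in a network with a large average clustering coefficient, pairs like A-D that share many neighbours score highly, which is why structural-similarity indices perform well on small-world PTRNs.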

  19. Organizational Agility and Complex Enterprise System Innovations: A Mixed Methods Study of the Effects of Enterprise Systems on Organizational Agility

    Science.gov (United States)

    Kharabe, Amol T.

    2012-01-01

    Over the last two decades, firms have operated in increasingly accelerated, "high-velocity" dynamic markets, which require them to become "agile." During the same time frame, firms have increasingly deployed complex enterprise systems--large-scale packaged software innovations that integrate and automate…

  20. Investigation of cerium (3) complexing with ethylenediaminediacetate and ethylenediaminetetraacetate by spectrophotometric method

    International Nuclear Information System (INIS)

    Kostromina, N.A.; Kholodnaya, G.S.; Tananaeva, N.N.; Beloshitskij, N.V.; Kirillov, A.I.

    1979-01-01

    Absorption spectra in the Ce-EDDA (E²⁻) and Ce-EDTA (B⁴⁻) systems are studied. Decomposition of the spectra into individual Gaussian bands, related to the different complexes, is carried out. Formation of normal complexes of composition 1:1 and 1:2 with EDDA and EDTA is established, and their stability constants are determined: lg K(CeE) = 7.66 ± 0.03; lg K(CeE₂) = 4.75 ± 0.06; lg K(CeB) = 16.66 ± 0.07. It is established that additivity of the bands is observed in the absorption spectra upon formation of complexes of more complex composition
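    The reported stability constants can be turned into a simple speciation sketch; the function below is a generic equilibrium calculation, not part of the paper (charges are omitted and the free-ligand concentration is an illustrative value):

```python
def ce_edda_speciation(free_E):
    """Distribution of Ce(III) among Ce, CeE and CeE2, assuming the stepwise
    stability constants reported in the abstract: lg K(CeE) = 7.66 and
    lg K(CeE2) = 4.75. free_E is the free EDDA concentration in mol/L."""
    K1 = 10 ** 7.66          # stepwise: Ce + E  -> CeE
    K2 = 10 ** 4.75          # stepwise: CeE + E -> CeE2
    beta1 = K1               # overall constants
    beta2 = K1 * K2
    denom = 1 + beta1 * free_E + beta2 * free_E ** 2
    return {
        "Ce":   1 / denom,
        "CeE":  beta1 * free_E / denom,
        "CeE2": beta2 * free_E ** 2 / denom,
    }


# At 1 uM free ligand, the 1:1 complex dominates:
fractions = ce_edda_speciation(1e-6)
```

Because each complex contributes its own Gaussian band, such speciation fractions are what the band areas in the decomposed spectra are expected to track, consistent with the additivity noted in the abstract.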