WorldWideScience

Sample records for resolution problem formulated

  1. Engaged Problem Formulation of IT Management in Danish Municipalities

    DEFF Research Database (Denmark)

    Nielsen, Peter Axel; Persson, John Stouby

    2012-01-01

    Municipalities’ effectiveness in managing information technology (IT) is increasingly important in adhering to their responsibilities for providing services to citizens. While the municipalities’ difficulty in managing IT has been well documented, it is more elusive what specific problems are most relevant in contemporary municipal IT management practice. On this basis, we present an engaged scholarship approach to formulate IT management problems together with municipalities - not for municipalities. We have come to understand such engaged problem formulation as joint researching and defining of a contemporary and complex problem by researchers and those who experience and know the problem. We present the formulated IT management problems and discuss the engaged problem formulation process in relation to engaged scholarship. Furthermore, we discuss how engaged problem formulation may contribute...

  2. Engaged Problem Formulation in IS Research

    DEFF Research Database (Denmark)

    Nielsen, Peter Axel; Persson, John Stouby

    2016-01-01

    problems requires a more substantial engagement with the different stakeholders, especially when their problems are ill structured and situated in complex organizational settings. On this basis, we present an engaged approach to formulating IS problems with, not for, IS practitioners. We have come...

  3. A new formulation of the linear sampling method: spatial resolution and post-processing

    International Nuclear Information System (INIS)

    Piana, M; Aramini, R; Brignone, M; Coyle, J

    2008-01-01

    A new formulation of the linear sampling method is described, which requires the regularized solution of a single functional equation set in a direct sum of L² spaces. This new approach presents the following notable advantages: it is computationally more effective than the traditional implementation, since time-consuming samplings of the Tikhonov minimum problem and of the generalized discrepancy equation are avoided; it allows a quantitative estimate of the spatial resolution achievable by the method; it facilitates a post-processing procedure for the optimal selection of the scatterer profile by means of edge detection techniques. The formulation is described in a two-dimensional framework and in the case of obstacle scattering, although generalizations to three dimensions and penetrable inhomogeneities are straightforward.
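    For orientation, the traditional implementation that this approach improves on can be sketched in standard notation (the classical far-field equation, not the paper's direct-sum formulation): for each sampling point z one solves, via Tikhonov regularization,

        (F g_z)(\hat{x}) = \int_{S^1} u_\infty(\hat{x}, d)\, g_z(d)\, \mathrm{d}s(d) = \mathrm{e}^{-\mathrm{i}k\,\hat{x}\cdot z}, \qquad \hat{x} \in S^1,

        g_z^{\alpha} = (\alpha I + F^{*}F)^{-1} F^{*}\, \mathrm{e}^{-\mathrm{i}k\,\hat{x}\cdot z},

    and uses the blow-up of \| g_z^{\alpha} \|_{L^2} as z leaves the scatterer as the indicator; repeating the Tikhonov minimization and the generalized-discrepancy choice of \alpha at every sampling point is precisely the cost that the single-equation formulation above avoids.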

  4. Problem-formulation and problem-solving in self-organized communities

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Frederiksen, Lars; Rullani, Francesco

    2016-01-01

    Building on the problem-solving perspective, we study behaviors related to projects and the communication-based antecedents of such behaviors in the free open-source software (FOSS) community. We examine two kinds of problem/project-behaviors: Individuals can set up projects around the formulation...

  5. Ising formulations of many NP problems

    OpenAIRE

    Lucas, Andrew

    2013-01-01

    We provide Ising formulations for many NP-complete and NP-hard problems, including all of Karp's 21 NP-complete problems. This collects and extends mappings to the Ising model from partitioning, covering and satisfiability. In each case, the required number of spins is at most cubic in the size of the problem. This work may be useful in designing adiabatic quantum optimization algorithms.

  6. Ising formulations of many NP problems

    Directory of Open Access Journals (Sweden)

    Andrew Lucas

    2014-02-01

    We provide Ising formulations for many NP-complete and NP-hard problems, including all of Karp's 21 NP-complete problems. This collects and extends mappings to the Ising model from partitioning, covering and satisfiability. In each case, the required number of spins is at most cubic in the size of the problem. This work may be useful in designing adiabatic quantum optimization algorithms.
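    As a concrete instance of such a mapping, consider number partitioning (one of Karp's problems covered by these formulations): with one spin s_i in {-1, +1} per number n_i, the Ising energy H = (sum_i n_i s_i)^2 vanishes exactly on perfect partitions. A minimal brute-force Python sketch (all names and the toy instance are illustrative; an annealing or adiabatic solver would replace the exhaustive loop):

        # Ising formulation of number partitioning: spin s_i = +1/-1 assigns n_i to one
        # of two subsets; the energy is zero exactly when the two subsets balance.
        from itertools import product

        def ising_energy(numbers, spins):
            return sum(n * s for n, s in zip(numbers, spins)) ** 2

        def brute_force_ground_state(numbers):
            best_spins, best_energy = None, float("inf")
            for spins in product((-1, 1), repeat=len(numbers)):
                energy = ising_energy(numbers, spins)
                if energy < best_energy:
                    best_spins, best_energy = spins, energy
            return best_spins, best_energy

        numbers = [4, 5, 6, 7, 8]                 # toy instance: 4 + 5 + 6 = 7 + 8
        spins, energy = brute_force_ground_state(numbers)
        print(spins, energy)                      # energy 0 -> a perfect partition exists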

  7. Validation of flexible multibody dynamics beam formulations using benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Bauchau, Olivier A., E-mail: obauchau@umd.edu [University of Maryland (United States); Betsch, Peter [Karlsruhe Institute of Technology (Germany); Cardona, Alberto [CIMEC (UNL/Conicet) (Argentina); Gerstmayr, Johannes [Leopold-Franzens Universität Innsbruck (Austria); Jonker, Ben [University of Twente (Netherlands); Masarati, Pierangelo [Politecnico di Milano (Italy); Sonneville, Valentin [Université de Liège (Belgium)

    2016-05-15

    As the need to model flexibility arose in multibody dynamics, the floating frame of reference formulation was developed, but this approach can yield inaccurate results when elastic displacements become large. While the use of three-dimensional finite element formulations overcomes this problem, the associated computational cost is overwhelming. Consequently, beam models, which are one-dimensional approximations of three-dimensional elasticity, have become the workhorse of many flexible multibody dynamics codes. Numerous beam formulations have been proposed, such as the geometrically exact beam formulation or the absolute nodal coordinate formulation, to name just two. New solution strategies have been investigated as well, including the intrinsic beam formulation or the DAE approach. This paper provides a systematic comparison of these various approaches, which will be assessed by comparing their predictions for four benchmark problems. The first problem is the Princeton beam experiment, a study of the static large displacement and rotation behavior of a simple cantilevered beam under a gravity tip load. The second problem, the four-bar mechanism, focuses on a flexible mechanism involving beams and revolute joints. The third problem investigates the behavior of a beam bent in its plane of greatest flexural rigidity, resulting in lateral buckling when a critical value of the transverse load is reached. The last problem investigates the dynamic stability of a rotating shaft. The predictions of eight independent codes are compared for these four benchmark problems and are found to be in close agreement with each other and with experimental measurements, when available.

  8. A displacement based FE formulation for steady state problems

    NARCIS (Netherlands)

    Yu, Y.

    2005-01-01

    In this thesis a new displacement based formulation is developed for elasto-plastic deformations in steady state problems. In this formulation the displacements are the primary variables, which is in contrast to the more common formulations in terms of the velocities as the primary variables. In a

  9. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    Science.gov (United States)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
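    The two treatments of the stochastic complementarity problem named above are standard in the literature and can be stated schematically (generic notation, not the article's own symbols). The expected value (EV) formulation replaces the random data by their means,

        \text{find } x \ge 0 \text{ such that } \mathbb{E}[M(\omega)]\,x + \mathbb{E}[q(\omega)] \ge 0, \qquad x^{\top}\big(\mathbb{E}[M(\omega)]\,x + \mathbb{E}[q(\omega)]\big) = 0,

    while the expected residual minimization (ERM) formulation minimizes the expected squared residual of a complementarity function,

        \min_{x \ge 0}\; \mathbb{E}\big[\,\|\min(x,\; M(\omega)x + q(\omega))\|^{2}\,\big],

    with the Monte Carlo technique approximating either expectation by an average over sampled scenarios \omega_1, \dots, \omega_N.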

  10. Engaged Problem Formulation in IS Research

    DEFF Research Database (Denmark)

    Nielsen, Peter Axel; Persson, John Stouby

    2016-01-01

    “Is this the problem?”: the question that haunts many information systems (IS) researchers when they pursue work relevant to both practice and research. Nevertheless, a deliberate answer to this question requires more than simply asking the involved IS practitioners. Deliberately formulating...

  11. A Comprehensive Review of Boundary Integral Formulations of Acoustic Scattering Problems

    Directory of Open Access Journals (Sweden)

    S.I. Zaman

    2000-12-01

    This is a review presenting an overview of the developments in boundary integral formulations of acoustic scattering problems. Generally, the problem is formulated in one of two ways, viz. Green’s representation formula, and the layer-theoretic formulation utilizing either a simple-layer or a double-layer potential. The review presents and expounds the major contributions in this area over the last four decades. The need for a robust and improved formulation of the exterior scattering problem (Neumann or Dirichlet) arose due to the fact that the classical formulation failed to yield a unique solution at (acoustic) wave-numbers which correspond to eigenvalues (eigenfrequencies) of the corresponding interior scattering problem. Moreover, this correlation becomes more pronounced as the wave-numbers become larger, i.e. as the (acoustic) frequency increases. The robust integral formulations which are discussed here yield Fredholm integral equations of the second kind, which are more amenable to computation than the first kind. However, the integral equation involves a hypersingular kernel which creates ill-conditioning in the final matrix representation. This is circumvented by a regularisation technique. An extensive useful list of references is also presented here for researchers in this area.

  12. Very low resolution face recognition problem.

    Science.gov (United States)

    Zou, Wilman W W; Yuen, Pong C

    2012-01-01

    This paper addresses the very low resolution (VLR) problem in face recognition, in which the resolution of the face image to be recognized is lower than 16 × 16. With the increasing demand for surveillance-camera-based applications, the VLR problem arises in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on VLR face images. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, the existing learning-based face SR methods do not perform well on such VLR face images. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visuality and face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms in public face databases.

  13. A duty-period-based formulation of the airline crew scheduling problem

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, K.

    1994-12-31

    We present a new formulation of the airline crew scheduling problem that explicitly considers the duty periods. We suggest an algorithm for solving the formulation by a column generation approach with branch-and-bound. Computational results are reported for a number of test problems.

  14. Protecting the fast breeders: Problem formulation and effects analysis

    International Nuclear Information System (INIS)

    Oughton, D.H.

    2003-01-01

    Recent debates on protection of the environment from ionising radiation have reached reasonable agreement over the ethical and philosophical basis of environmental protection and a recognition that a practical system of protection will need to support (at a minimum) the principles of sustainable development, biodiversity, and conservation. However, there is still some controversy over the use of dose assessment tools within risk evaluation and management. The paper uses the case of the Dounreay 'radioactive rabbits' to discuss the advantages and limitations of proposed systems, focusing primarily on the interaction between ecological risk assessment (ERA) and the reference flora and fauna approach. It concludes that the reference approach is a valuable tool for the analysis of environmental effects, but that there is a problem if it becomes the driving force of the protection framework. In particular, there is a need for a clearer focus on non-technical issues within the problem formulation stage of ERA, particularly the social, ethical, political and economic issues, and there should be a strong commitment to stakeholder involvement at this stage. The problem formulation stage should identify the relevant assessment tools; the assessment tool should dictate neither the problem formulation nor the risk management. (author)

  15. On a covariant 2+2 formulation of the initial value problem in general relativity

    International Nuclear Information System (INIS)

    Smallwood, J.

    1980-03-01

    The initial value problems in general relativity are considered from a geometrical standpoint with special reference to the development of a covariant 2+2 formalism in which space-time is foliated by space-like 2-surfaces, under the headings: the Cauchy problem in general relativity, the covariant 3+1 formulation of the Cauchy problem, characteristic and mixed initial value problems, on locally imbedding a family of null hypersurfaces, the 2+2 formalism, the 2+2 formulation of the Cauchy problem, the 2+2 formulation of the characteristic and mixed initial value problems, and a covariant Lagrangian 2+2 formulation. (U.K.)

  16. An Integer Programming Formulation of the Minimum Common String Partition Problem.

    Directory of Open Access Journals (Sweden)

    S M Ferdous

    We consider the problem of finding a minimum common string partition (MCSP) of two strings, which is an NP-hard problem. The MCSP problem is closely related to genome comparison and rearrangement, an important field in Computational Biology. In this paper, we map the MCSP problem into a graph applying a prior technique and using this graph, we develop an Integer Linear Programming (ILP) formulation for the problem. We implement the ILP formulation and compare the results with the state-of-the-art algorithms from the literature. The experimental results are found to be promising.

  17. Validation of flexible multibody dynamics beam formulations using benchmark problems

    NARCIS (Netherlands)

    Bauchau, O.A.; Wu, Genyong; Betsch, P.; Cardona, A.; Gerstmayr, J.; Jonker, Jan B.; Masarati, P.; Sonneville, V.

    2016-01-01

    As the need to model flexibility arose in multibody dynamics, the floating frame of reference formulation was developed, but this approach can yield inaccurate results when elastic displacements become large. While the use of three-dimensional finite element formulations overcomes this problem, the

  18. High Resolution Mass Spectrometry of Polyfluorinated Polyether-Based Formulation

    DEFF Research Database (Denmark)

    Dimzon, Ian Ken; Trier, Xenia; Frömel, Tobias

    2016-01-01

    High resolution mass spectrometry (HRMS) was successfully applied to elucidate the structure of a polyfluorinated polyether (PFPE)-based formulation. The mass spectrum generated from direct injection into the MS was examined by identifying the different repeating units manually and with the aid of an instrument data processor. ... This analytical approach can serve as a guide in analyzing not just other PFPE-based formulations but also other fluorinated and non-fluorinated polymers. The information from MS is essential in studying the physico-chemical properties of PFPEs and can help in assessing the risks they pose to the environment and to human health.

  19. Not your problem? Exploring the relationship between problem formulation and social responsibility

    Directory of Open Access Journals (Sweden)

    Sveinung Jørgensen

    2011-05-01

    This article explores the relationship between organizational problem formulation and social responsibility. The purpose of the article is to illuminate how organizational problem formulations (1) determine the manner in which the organization attempts to solve the problem and (2) involve the ascription of significance to a group of stakeholders seen as relevant for the organization. This has implications for the degree to which they assume responsibility for those stakeholders. We discuss three dimensions of responsible decision making – rationality in goal attainment, reverence for ethical norms, and respect for stakeholders. Thereby, we arrive at an understanding of how different organizations in the same sector conceive of, and attempt to solve, fundamental problems in the sector, as well as how their assumed responsibility is reflected therein. We present and discuss a case that examines key similarities and differences between two organizations in the drug sector – a pharmaceutical company that produces medicine for the treatment of drug addiction and a foundation working with drug rehabilitation. We illuminate how the two organizations base their activities on divergent formulations of the drug problem and how this is manifested in their approach to the problem. We argue that this ultimately translates into differences in the inclusion of various stakeholders in their problem space, and thereby the degree to which they assume responsibility for key stakeholders. This contributes to the corporate social responsibility literature by providing an in-depth account of how problem formulations shape organizational activities and determine the practical inclusion of stakeholders’ interests in the decisions and activities of organizations.

  20. Formulated linear programming problems from game theory and its ...

    African Journals Online (AJOL)

    Formulated linear programming problems from game theory and its computer implementation using the Tora package. ... Game theory, a branch of operations research, examines the various concepts of decision ...

  1. Flow Formulation-based Model for the Curriculum-based Course Timetabling Problem

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian Fink; Kristiansen, Simon; Sørensen, Matias

    2015-01-01

    In this work we will present a new mixed integer programming formulation for the curriculum-based course timetabling problem. We show that the model contains an underlying network model by dividing the problem into two models and then connecting the two models back into one model using a maximum flow problem. This decreases the number of integer variables significantly and improves the performance compared to the basic formulation. It also shows competitiveness with other approaches based on mixed integer programming from the literature and improves the currently best known lower bound on one data instance in the benchmark data set from the second international timetabling competition.

  2. Topology optimization of acoustic-structure interaction problems using a mixed finite element formulation

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Jensen, Jens Stissing; Sigmund, Ole

    2007-01-01

    ... given during the optimization process. In this paper we circumvent the explicit boundary representation by using a mixed finite element formulation with displacements and pressure as primary variables (a u/p-formulation). The Helmholtz equation is obtained as a special case of the mixed formulation for the elastic shear modulus equating to zero. Hence, by spatial variation of the mass density, shear and bulk moduli we are able to solve the coupled problem by the mixed formulation. Using this modelling approach, the topology optimization procedure is simply implemented as a standard density approach. Several two-dimensional acoustic-structure problems are optimized in order to verify the proposed method.

  3. Problem formulation as a discursive design activity

    DEFF Research Database (Denmark)

    Hansen, Claus Thorp; Dorst, Kees; Andreasen, Mogens Myrup

    2009-01-01

    In the design methodology literature, design is often described as a rational problem solving process. This approach has been very successful; it has led to the creation of design process models, tools, methods and techniques. Design methods teaching along these lines has become an indispensable part of any engineering design education. Yet the assumptions behind the rational problem solving approach to design do not sit well with some of the experiences we have in design teaching and design practice. Problem formulation is one such area where we might have to look for a different way to describe what is happening in design, beyond the problem solving approach. In this paper an extensive educational case study will be used to see whether a framework for describing design as a discursive activity (based on the notions of ‘discourse’ and ‘paradox’) could be more appropriate to describe...

  4. High Resolution Mass Spectrometry of Polyfluorinated Polyether-Based Formulation

    Science.gov (United States)

    Dimzon, Ian Ken; Trier, Xenia; Frömel, Tobias; Helmus, Rick; Knepper, Thomas P.; de Voogt, Pim

    2016-02-01

    High resolution mass spectrometry (HRMS) was successfully applied to elucidate the structure of a polyfluorinated polyether (PFPE)-based formulation. The mass spectrum generated from direct injection into the MS was examined by identifying the different repeating units manually and with the aid of an instrument data processor. Highly accurate mass spectral data enabled the calculation of higher-order mass defects. The different plots of MW and the nth-order mass defects (up to n = 3) could aid in assessing the structure of the different repeating units and estimating their absolute and relative number per molecule. The three major repeating units were -C2H4O-, -C2F4O-, and -CF2O-. Tandem MS was used to identify the end groups that appeared to be phosphates, as well as the possible distribution of the repeating units. Reversed-phase HPLC separated the polymer molecules on the basis of the number of nonpolar repeating units. The elucidated structure resembles the structure in the published manufacturer technical data. This analytical approach to the characterization of a PFPE-based formulation can serve as a guide in analyzing not just other PFPE-based formulations but also other fluorinated and non-fluorinated polymers. The information from MS is essential in studying the physico-chemical properties of PFPEs and can help in assessing the risks they pose to the environment and to human health.
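    A minimal sketch of the mass-defect bookkeeping behind this kind of repeating-unit analysis, assuming a Kendrick-style rescaling to the -C2F4O- unit (monoisotopic masses below are standard values, rounded; the two ion masses are purely illustrative):

        # Kendrick-style (first-order) mass defect relative to the -C2F4O- repeat unit:
        # homologues that differ by whole repeat units share the same mass defect.
        M_C, M_F, M_O = 12.000000, 18.998403, 15.994915   # monoisotopic masses (u)

        REPEAT_EXACT = 2 * M_C + 4 * M_F + M_O             # -C2F4O-  ~ 115.9885 u
        REPEAT_NOMINAL = 116                                # nominal (integer) mass

        def kendrick_mass(m):
            """Rescale so the chosen repeat unit has an exactly integer mass."""
            return m * REPEAT_NOMINAL / REPEAT_EXACT

        def mass_defect(m):
            km = kendrick_mass(m)
            return round(km) - km

        m1 = 1000.123                 # hypothetical oligomer ion
        m2 = m1 + REPEAT_EXACT        # same series, one more -C2F4O- unit
        print(mass_defect(m1), mass_defect(m2))   # identical defects -> same series

    Higher-order mass defects apply the same normalization iteratively with a second and third repeat unit (e.g. -CF2O- and -C2H4O-), which is, in essence, how the three series are separated in plots like those described above.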

  5. Design of a Generic and Flexible Data Structure for Efficient Formulation of Large Scale Network Problems

    DEFF Research Database (Denmark)

    Quaglia, Alberto; Sarup, Bent; Sin, Gürkan

    2013-01-01

    The formulation of Enterprise-Wide Optimization (EWO) problems as mixed integer nonlinear programming requires collecting, consolidating and systematizing a large amount of data, coming from different sources and specific to different disciplines. In this manuscript, a generic and flexible data structure for efficient formulation of enterprise-wide optimization problems is presented. Through the integration of the described data structure in our synthesis and design framework, the problem formulation workflow is automated in a software tool, reducing time and resources needed to formulate large problems, while ensuring at the same time data consistency and quality at the application stage.

  6. Symmetry Breaking in MILP Formulations for Unit Commitment Problems

    KAUST Repository

    Lima, Ricardo

    2015-12-11

    This paper addresses the study of symmetry in Unit Commitment (UC) problems solved by Mixed Integer Linear Programming (MILP) formulations, using Linear Programming based Branch & Bound MILP solvers. We propose three sets of symmetry breaking constraints for UC MILP formulations exhibiting symmetry, and their impact on three UC MILP models is studied. The case studies involve the solution of 24 instances by three widely used models in the literature, with and without symmetry breaking constraints. The results show that problems that could not be solved to optimality within hours can be solved with a relatively small computational burden if the symmetry breaking constraints are assumed. The proposed symmetry breaking constraints are also compared with the symmetry breaking methods included in two MILP solvers, and the symmetry breaking constraints derived in this work have a distinct advantage over the methods in the MILP solvers.

  7. Symmetry Breaking in MILP Formulations for Unit Commitment Problems

    KAUST Repository

    Lima, Ricardo; Novais, Augusto Q.

    2015-01-01

    This paper addresses the study of symmetry in Unit Commitment (UC) problems solved by Mixed Integer Linear Programming (MILP) formulations, using Linear Programming based Branch & Bound MILP solvers. We propose three sets of symmetry breaking constraints for UC MILP formulations exhibiting symmetry, and their impact on three UC MILP models is studied. The case studies involve the solution of 24 instances by three widely used models in the literature, with and without symmetry breaking constraints. The results show that problems that could not be solved to optimality within hours can be solved with a relatively small computational burden if the symmetry breaking constraints are assumed. The proposed symmetry breaking constraints are also compared with the symmetry breaking methods included in two MILP solvers, and the symmetry breaking constraints derived in this work have a distinct advantage over the methods in the MILP solvers.
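    A schematic sketch of the kind of symmetry-breaking constraint involved, written with the PuLP modelling library for a toy system of identical units (all data illustrative; the ordering constraint below is a generic lexicographic device, not necessarily one of the three constraint sets proposed in the paper, and it is valid here only because the toy model has no intertemporal constraints such as minimum up/down times):

        # Toy unit commitment MILP with three identical generators (PuLP assumed installed).
        import pulp

        T = range(4)                      # periods
        G = range(3)                      # identical units
        demand = [120, 250, 300, 180]
        pmin, pmax, cost_on, cost_mwh = 40, 150, 100, 12

        prob = pulp.LpProblem("toy_uc", pulp.LpMinimize)
        u = pulp.LpVariable.dicts("u", (G, T), cat="Binary")   # commitment
        p = pulp.LpVariable.dicts("p", (G, T), lowBound=0)     # dispatch

        prob += pulp.lpSum(cost_on * u[g][t] + cost_mwh * p[g][t] for g in G for t in T)
        for t in T:
            prob += pulp.lpSum(p[g][t] for g in G) == demand[t]
            for g in G:
                prob += p[g][t] >= pmin * u[g][t]
                prob += p[g][t] <= pmax * u[g][t]
            for g in G[:-1]:
                # symmetry breaking: identical units are committed in index order, removing
                # the permutation symmetry that branch & bound would otherwise explore
                prob += u[g][t] >= u[g + 1][t]

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print(pulp.LpStatus[prob.status], pulp.value(prob.objective))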

  8. Resolution and optimization methods for tour planning problems

    International Nuclear Information System (INIS)

    Vasserot, Jean-Pierre

    1976-12-01

    The aim of this study is to describe computerized methods for the resolution of the computer-supported tour planning problem. After a presentation of this problem in operational research, the different existing methods of resolution are reviewed, together with the different approaches which have led to their elaboration. Various criticisms and comparisons of these methods are made, and some improvements and new procedures are proposed, some of them allowing more general problems to be solved. Finally, the structure of such a program, developed at the CII to solve this kind of problem under multiple constraints, is analysed [fr]

  9. Problem formulation for risk assessment of combined exposures to chemicals and other stressors in humans.

    Science.gov (United States)

    Solomon, Keith R; Wilks, Martin F; Bachman, Ammie; Boobis, Alan; Moretto, Angelo; Pastoor, Timothy P; Phillips, Richard; Embry, Michelle R

    2016-11-01

    When the human health risk assessment/risk management paradigm was developed, it did not explicitly include a "problem formulation" phase. The concept of problem formulation was first introduced in the context of ecological risk assessment (ERA), for the pragmatic reason of constraining and focusing ERAs on the key questions. However, this need also exists for human health risk assessment, particularly for cumulative risk assessment (CRA), because of its complexity. CRA encompasses the combined threats to health from exposure via all relevant routes to multiple stressors, including biological, chemical, physical and psychosocial stressors. As part of the HESI Risk Assessment in the 21st Century (RISK21) Project, a framework for CRA was developed in which problem formulation plays a critical role. The focus of this effort is primarily on a chemical CRA (i.e., two or more chemicals) with subsequent consideration of non-chemical stressors, defined as "modulating factors" (ModFs). Problem formulation is a systematic approach that identifies all factors critical to a specific risk assessment and considers the purpose of the assessment, scope and depth of the necessary analysis, analytical approach, available resources and outcomes, and overall risk management goal. There are numerous considerations that are specific to multiple stressors, and proper problem formulation can help to focus a CRA on the key factors in order to optimize resources. As part of the problem formulation, conceptual models for exposures and responses can be developed that address these factors, such as temporal relationships between stressors and consideration of the appropriate ModFs.

  10. Improvability of assembly systems I: Problem formulation and performance evaluation

    Directory of Open Access Journals (Sweden)

    S.-Y. Chiang

    2000-01-01

    This work develops improvability theory for assembly systems. It consists of two parts. Part I includes the problem formulation and the analysis technique. Part II presents the so-called improvability indicators and a case study.

  11. A new LP formulation of the admission control problem modelled as an MDP under average reward criterion

    Science.gov (United States)

    Pietrabissa, Antonio

    2011-12-01

    The admission control problem can be modelled as a Markov decision process (MDP) under the average cost criterion and formulated as a linear programming (LP) problem. The LP formulation is attractive in present and future communication networks, which support an increasing number of classes of service, since it can be used to explicitly control class-level requirements, such as class blocking probabilities. On the other hand, the LP formulation suffers from scalability problems as the number C of classes increases. This article proposes a new LP formulation which, even though it does not introduce any approximation, is much more scalable: the problem size reduction with respect to the standard LP formulation is O((C + 1)^2 / 2^C). Theoretical and numerical simulation results prove the effectiveness of the proposed approach.
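    For reference, the textbook average-cost LP that this class of formulations builds on can be written over state-action frequencies x(s, a); the sketch below solves a two-state toy instance with scipy (all numbers illustrative, and this is the generic MDP LP, not the article's class-level admission-control model):

        # minimize sum_{s,a} c(s,a) x(s,a)
        # s.t.  sum_a x(s',a) - sum_{s,a} P(s'|s,a) x(s,a) = 0   for every state s'
        #       sum_{s,a} x(s,a) = 1,   x >= 0
        import numpy as np
        from scipy.optimize import linprog

        S, A = 2, 2
        P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # P[s, a, s']
                      [[0.5, 0.5], [0.1, 0.9]]])
        c = np.array([[1.0, 4.0], [6.0, 2.0]])       # c[s, a]

        n = S * A                                    # flatten (s, a) -> s * A + a
        A_eq = np.zeros((S + 1, n))
        for s_next in range(S):
            for s in range(S):
                for a in range(A):
                    A_eq[s_next, s * A + a] = (s == s_next) - P[s, a, s_next]
        A_eq[S, :] = 1.0                             # normalization row
        b_eq = np.zeros(S + 1); b_eq[S] = 1.0

        res = linprog(c.flatten(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
        x = res.x.reshape(S, A)
        print("average cost:", res.fun)              # optimal long-run average cost
        print("state-action frequencies:\n", x)      # x(s,a) / sum_a x(s,a) gives the policy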

  12. New formulations on the finite element method for boundary value problems with internal/external boundary layers

    International Nuclear Information System (INIS)

    Pereira, Luis Carlos Martins

    1998-06-01

    New Petrov-Galerkin formulations of the finite element method for convection-diffusion problems with boundary layers are presented. Such formulations are based on a consistent new theory of discontinuous finite element methods. Existence and uniqueness of solutions for these problems in the new finite element spaces are demonstrated. Some numerical experiments show how the new formulations operate and also demonstrate their efficacy. (author)

  13. IRIS Assessment Plan for Uranium (Scoping and Problem Formulation Materials)

    Science.gov (United States)

    In January 2018, EPA released the IRIS Assessment Plan for Uranium (Oral Reference Dose) (Scoping and Problem Formulation Materials). An IRIS Assessment Plan (IAP) communicates to the public the plan for assessing each individual chemical and includes summary informatio...

  14. Three formulations of the multi-type capacitated facility location problem

    DEFF Research Database (Denmark)

    Klose, Andreas

    The "multi-type" or "modular" capacitated facility location problem is a discrete location model that addresses non-convex piecewise linear production costs as, for instance, staircase cost functions. The literature basically distinguishes three different ways to formulate non-convex piecewise...

  15. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management.

    Science.gov (United States)

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-12-15

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.

  16. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management

    Science.gov (United States)

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-01-01

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713

  17. An integer programming formulation of the parsimonious loss of heterozygosity problem.

    Science.gov (United States)

    Catanzaro, Daniele; Labbé, Martine; Halldórsson, Bjarni V

    2013-01-01

    A loss of heterozygosity (LOH) event occurs when, by the laws of Mendelian inheritance, an individual should be heterozygote at a given site but, due to a deletion polymorphism, is not. Deletions play an important role in human disease and their detection could provide fundamental insights for the development of new diagnostics and treatments. In this paper, we investigate the parsimonious loss of heterozygosity problem (PLOHP), i.e., the problem of partitioning suspected polymorphisms from a set of individuals into a minimum number of deletion areas. Specifically, we generalize Halldórsson et al.'s work by providing a more general formulation of the PLOHP and by showing how one can incorporate different recombination rates and prior knowledge about the locations of deletions. Moreover, we show that the PLOHP can be formulated as a specific version of the clique partition problem in a particular class of graphs called undirected catch-point interval graphs and we prove its general NP-hardness. Finally, we provide a state-of-the-art integer programming (IP) formulation and strengthening valid inequalities to exactly solve real instances of the PLOHP containing up to 9,000 individuals and 3,000 SNPs. Our results give perspectives on the mathematics of the PLOHP and suggest new directions on the development of future efficient exact solution approaches.

  18. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA for Environmental Risk Management

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-12-01

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.

  19. Systematic network synthesis and design: Problem formulation, superstructure generation, data management and solution

    DEFF Research Database (Denmark)

    Quaglia, Alberto; Gargalo, Carina L.; Chairakwongsa, Siwanat

    2015-01-01

    The developments obtained in recent years in the field of mathematical programming considerably reduced the computational time and resources needed to solve large and complex Mixed Integer Non Linear Programming (MINLP) problems. Nevertheless, the application of these methods in industrial practice is still limited by the complexity associated with the mathematical formulation of some problems. In particular, the tasks of design space definition and representation as superstructure, as well as the data collection, validation and handling may become too complex and cumbersome to execute, especially when large problems are considered. In an earlier work, we proposed a computer-aided framework for synthesis and design of process networks. In this contribution, we expand the framework by including methods and tools developed to structure, automate and simplify the mathematical formulation...

  20. Performance of mixed formulations for the particle finite element method in soil mechanics problems

    Science.gov (United States)

    Monforte, Lluís; Carbonell, Josep Maria; Arroyo, Marcos; Gens, Antonio

    2017-07-01

    This paper presents a computational framework for the numerical analysis of fluid-saturated porous media at large strains. The proposal relies, on one hand, on the particle finite element method (PFEM), known for its capability to tackle large deformations and rapid changing boundaries, and, on the other hand, on constitutive descriptions well established in current geotechnical analyses (Darcy's law; Modified Cam Clay; Houlsby hyperelasticity). An important feature of this kind of problem is that incompressibility may arise either from undrained conditions or as a consequence of material behaviour; incompressibility may lead to volumetric locking of the low-order elements that are typically used in PFEM. In this work, two different three-field mixed formulations for the coupled hydromechanical problem are presented, in which either the effective pressure or the Jacobian are considered as nodal variables, in addition to the solid skeleton displacement and water pressure. Additionally, several mixed formulations are described for the simplified single-phase problem due to its formal similitude to the poromechanical case and its relevance in geotechnics, since it may approximate the saturated soil behaviour under undrained conditions. In order to use equal-order interpolants in displacements and scalar fields, stabilization techniques are used in the mass conservation equation of the biphasic medium and in the rest of scalar equations. Finally, all mixed formulations are assessed in some benchmark problems and their performances are compared. It is found that mixed formulations that have the Jacobian as a nodal variable perform better.

  1. A Theory of Name Resolution with extended Coverage and Proofs

    NARCIS (Netherlands)

    Neron, P.J.M.; Tolmach, A.P.; Visser, E.; Wachsmuth, G.

    2015-01-01

    We describe a language-independent theory for name binding and resolution, suitable for programming languages with complex scoping rules including both lexical scoping and modules. We formulate name resolution as a two stage problem. First a language-independent scope graph is constructed using

  2. Generic Entity Resolution in Relational Databases

    Science.gov (United States)

    Sidló, Csaba István

    Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.
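    A minimal in-memory sketch of the generic match-and-merge style of entity resolution (records as attribute dicts; the match and merge rules below are illustrative placeholders, not the RDBMS-backed algorithms proposed above):

        # Repeatedly find a matching pair of records and replace it by the merged record,
        # until a fixed point (no further matches) is reached.
        def match(r1, r2):
            return (r1.get("email") and r1.get("email") == r2.get("email")) or \
                   (r1.get("name") and r1.get("name") == r2.get("name")
                    and r1.get("zip") == r2.get("zip"))

        def merge(r1, r2):
            merged = dict(r1)
            for key, value in r2.items():
                merged.setdefault(key, value)        # keep the first value seen per attribute
            return merged

        def resolve(records):
            records = list(records)
            changed = True
            while changed:
                changed = False
                for i in range(len(records)):
                    for j in range(i + 1, len(records)):
                        if match(records[i], records[j]):
                            merged = merge(records[i], records[j])
                            records = [r for k, r in enumerate(records) if k not in (i, j)]
                            records.append(merged)
                            changed = True
                            break
                    if changed:
                        break
            return records

        customers = [
            {"name": "A. Smith", "zip": "1011", "email": "a@x.org"},
            {"name": "Alice Smith", "email": "a@x.org", "phone": "555-1234"},
            {"name": "A. Smith", "zip": "1011"},
        ]
        print(resolve(customers))                    # the three rows collapse into one entity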

  3. A New Formulation for the Combined Maritime Fleet Deployment and Inventory Management Problem

    OpenAIRE

    Dong, Bo; Bektas, Tolga; Chandra, Saurabh; Christiansen, Marielle; Fagerholt, Kjetil

    2017-01-01

    This paper addresses the fleet deployment problem and in particular the treatment of inventory in the maritime case. A new model based on time-continuous formulation for the combined maritime fleet deployment and inventory management problem in Roll-on Roll-off shipping is presented. Tests based on realistic data from the Ro-Ro business show that the model yields good solutions to the combined problem within reasonable time.

  4. Collisional plasma transport: two-dimensional scalar formulation of the initial boundary value problem and quasi one-dimensional models

    International Nuclear Information System (INIS)

    Mugge, J.W.

    1979-10-01

    The collisional plasma transport problem is formulated as an initial boundary value problem for general characteristic boundary conditions. Starting from the full set of hydrodynamic and electrodynamic equations, an expansion in the electron-ion mass ratio together with a multiple timescale method yields simplified equations on each timescale. On timescales where many collisions have taken place, the initial boundary value problem is formulated for the simplified equations. Through the introduction of potentials, a two-dimensional scalar formulation in terms of quasi-linear integro-differential equations of second order for a domain consisting of plasma and vacuum sub-domains is obtained. (Auth.)

  5. Formulations and algorithms for problems on rock mass and support deformation during mining

    Science.gov (United States)

    Seryakov, VM

    2018-03-01

    The analysis of problem formulations for calculating the stress-strain state of mine support and the surrounding rock mass in rock mechanics shows that such formulations incompletely describe the mechanical features of joint deformation in the rock mass–support system. The present paper proposes an algorithm that takes into account the actual conditions of rock mass and support interaction, and an implementation method that ensures efficient calculation of stresses in the rocks and the support.

  6. Implementing Problem Resolution Models in Remedy

    CERN Document Server

    Marquina, M A; Ramos, R

    2000-01-01

    This paper defines the concept of Problem Resolution Model (PRM) and describes the current implementation made by the User Support unit at CERN. One of the main challenges of User Support services in any High Energy Physics institute/organization is to address the solving of the computing-related problems faced by their researchers. The User Support group at CERN is the IT unit in charge of modeling the operations of the Help Desk and acts as a second level support to some of the support lines whose problems are received at the Help Desk. The motivation behind the use of a PRM is to provide well defined procedures and methods to react in an efficient way to a request for solving a problem, providing advice, information etc. A PRM is materialized in a workflow which has a set of defined states in which a problem can be. Problems move from one state to another according to actions as decided by the person who is handling them. A PRM can be implemented by a computer application, generally referred to as Problem Report...

  7. A Simple FEM Formulation Applied to Nonlinear Problems of Impact with Thermomechanical Coupling

    Directory of Open Access Journals (Sweden)

    João Paulo de Barros Cavalcante

    The thermal effects in problems involving deformable structures are essential to describe the behavior of materials in feasible terms. By verifying the transformation of mechanical energy into heat, it is possible to predict the modifications of the mechanical properties of materials due to temperature changes. The current paper presents the numerical development of a finite element method suitable for nonlinear structures coupled with thermomechanical behavior, including impact problems. A simple and effective alternative formulation is presented, called the positional FEM, to deal with dynamic nonlinear systems. The developed numerical formulation is based on the minimum potential energy written in terms of nodal positions instead of displacements. The effects of geometrical, material and thermal nonlinearities are considered. The thermodynamically consistent formulation is based on the laws of thermodynamics and the Helmholtz free-energy, used to describe the thermoelastic and the thermoplastic behaviors. The coupled thermomechanical model can result in secondary effects that cause redistributions of internal efforts, depending on the history of deformation and material properties. The numerical results of the proposed formulation are compared with examples found in the literature.

  8. Splitting method for the combined formulation of fluid-particle problem

    International Nuclear Information System (INIS)

    Choi, Hyung Gwon; Yoo, Jung Yul; Joseph, D. D.

    2000-01-01

    A splitting method for the direct numerical simulation of solid-liquid mixtures is presented, where a symmetric pressure equation is newly proposed. Through numerical experiments, it is found that the newly proposed splitting method works well with a matrix-free formulation for some benchmark problems, avoiding the erroneous pressure field which appears when the conventional pressure equation of a splitting method is used. When deriving a typical pressure equation of a splitting method, the motion of a solid particle has to be approximated by the 'intermediate velocity' instead of being treated as an unknown, since it is needed as a boundary condition. Therefore, the motion of a solid particle is treated in such an explicit way that a particle moves by the known form drag (pressure drag) that is calculated from the pressure equation in the previous step. Numerical experiments showed that this method gives an erroneous pressure field even for very small time step sizes as the particle velocity increases. In this paper, coupling the unknown particle velocities into the pressure equation is proposed, where the resulting matrix is reduced to a symmetric one by applying the projector of the combined formulation. It has been tested on some benchmark problems and gives reasonable pressure fields

  9. A theoretical formulation of the electrophysiological inverse problem on the sphere.

    Science.gov (United States)

    Riera, Jorge J; Valdés, Pedro A; Tanabe, Kunio; Kawashima, Ryuta

    2006-04-07

    The construction of three-dimensional images of the primary current density (PCD) produced by neuronal activity is a problem of great current interest in the neuroimaging community, though it was initially formulated in the 1970s. There exist even now enthusiastic debates about the authenticity of most of the inverse solutions proposed in the literature, in which low resolution electrical tomography (LORETA) is a focus of attention. However, in our opinion, the capabilities and limitations of the electro- and magnetoencephalographic techniques to determine PCD configurations have not been extensively explored from a theoretical framework, even for simple volume conductor models of the head. In this paper, the electrophysiological inverse problem for the spherical head model is cast in terms of the reproducing kernel Hilbert space (RKHS) formalism, which allows us to identify the null spaces of the implicated linear integral operators and also to define their representers. The PCD are described in terms of a continuous basis for the RKHS, which explicitly separates the harmonic and non-harmonic components. The RKHS concept permits us to bring LORETA into the scope of the general smoothing splines theory. A particular way of calculating the general smoothing splines is illustrated, avoiding a premature brute-force discretization. The Bayes information criterion is used to handle dissimilarities in the signal/noise ratios and physical dimensions of the measurement modalities, which could affect the estimation of the amount of smoothness required for that class of inverse solution to be well specified. In order to validate the proposed method, we have estimated the 3D spherical smoothing splines from two data sets: electric potentials obtained from a skull phantom and magnetic fields recorded from subjects performing a human face recognition experiment.
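    Schematically, the smoothing-splines estimate referred to here takes the generic penalized form (notation generic, not the paper's exact operators)

        \hat{j} \;=\; \arg\min_{j}\; \| v - K j \|^{2} \;+\; \lambda\, \| L j \|^{2},

    where v collects the EEG/MEG measurements, K is the forward (lead-field) integral operator, L is a smoothing operator (a Laplacian-type choice yields LORETA-like solutions), and the weight \lambda controls the amount of smoothness; balancing this weight across measurement modalities with different signal/noise ratios is what the Bayes information criterion is used for above.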

  10. A new formulation for the 2-echelon capacitated vehicle routing problem

    DEFF Research Database (Denmark)

    Jepsen, Mads Kehlet; Røpke, Stefan; Spoorendonk, Simon

    The 2-echelon capacitated vehicle routing problem (2E-CVRP) is a transportation and distribution problem where goods are transported from a depot to a set of customers, possibly via optional satellite facilities. The 2E-CVRP is relevant in city-logistics applications where legal restrictions make it infeasible to use large trucks within the center of large cities. We propose a new mathematical formulation for the 2E-CVRP with far fewer variables than previously proposed formulations, but with several constraint sets of exponential size. The strength of the model is implied by the fact that many cutting planes...

  11. SEACAS Theory Manuals: Part 1. Problem Formulation in Nonlinear Solid Mechancis

    Energy Technology Data Exchange (ETDEWEB)

    Attaway, S.W.; Laursen, T.A.; Zadoks, R.I.

    1998-08-01

    This report gives an introduction to the basic concepts and principles involved in the formulation of nonlinear problems in solid mechanics. By way of motivation, the discussion begins with a survey of some of the important sources of nonlinearity in solid mechanics applications, using wherever possible simple one dimensional idealizations to demonstrate the physical concepts. This discussion is then generalized by presenting generic statements of initial/boundary value problems in solid mechanics, using linear elasticity as a template and encompassing such ideas as strong and weak forms of boundary value problems, boundary and initial conditions, and dynamic and quasistatic idealizations. The notational framework used for the linearized problem is then extended to account for finite deformation of possibly inelastic solids, providing the context for the descriptions of nonlinear continuum mechanics, constitutive modeling, and finite element technology given in three companion reports.
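    As a one-line illustration of the strong/weak distinction mentioned above, linear elastostatics in standard notation has the strong form \nabla\cdot\boldsymbol{\sigma} + \mathbf{b} = \mathbf{0} in \Omega, with \mathbf{u} = \bar{\mathbf{u}} on \Gamma_u and \boldsymbol{\sigma}\mathbf{n} = \bar{\mathbf{t}} on \Gamma_t, and the equivalent weak form: find an admissible \mathbf{u} such that

        \int_{\Omega} \boldsymbol{\sigma}(\mathbf{u}) : \boldsymbol{\varepsilon}(\mathbf{w})\, \mathrm{d}\Omega \;=\; \int_{\Omega} \mathbf{b}\cdot\mathbf{w}\, \mathrm{d}\Omega \;+\; \int_{\Gamma_t} \bar{\mathbf{t}}\cdot\mathbf{w}\, \mathrm{d}\Gamma

    for every admissible variation \mathbf{w}; it is this weak statement that finite element discretizations are built on.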

  12. A game-theoretic formulation of the homogeneous self-reconfiguration problem

    KAUST Repository

    Pickem, Daniel; Egerstedt, Magnus; Shamma, Jeff S.

    2015-01-01

    In this paper we formulate the homogeneous two- and three-dimensional self-reconfiguration problem over discrete grids as a constrained potential game. We develop a game-theoretic learning algorithm based on the Metropolis-Hastings algorithm that solves the self-reconfiguration problem in a globally optimal fashion. Both a centralized and a fully decentralized algorithm are presented and we show that the only stochastically stable state is the potential function maximizer, i.e. the desired target configuration. These algorithms compute transition probabilities in such a way that even though each agent acts in a self-interested way, the overall collective goal of self-reconfiguration is achieved. Simulation results confirm the feasibility of our approach and show convergence to desired target configurations.

  13. A game-theoretic formulation of the homogeneous self-reconfiguration problem

    KAUST Repository

    Pickem, Daniel

    2015-12-15

    In this paper we formulate the homogeneous two- and three-dimensional self-reconfiguration problem over discrete grids as a constrained potential game. We develop a game-theoretic learning algorithm based on the Metropolis-Hastings algorithm that solves the self-reconfiguration problem in a globally optimal fashion. Both a centralized and a fully decentralized algorithm are presented and we show that the only stochastically stable state is the potential function maximizer, i.e. the desired target configuration. These algorithms compute transition probabilities in such a way that even though each agent acts in a self-interested way, the overall collective goal of self-reconfiguration is achieved. Simulation results confirm the feasibility of our approach and show convergence to desired target configurations.
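    The acceptance step at the heart of such Metropolis-Hastings-based learning can be sketched generically as follows (the potential function, move set and one-dimensional grid are placeholders, not the paper's reconfiguration constraints):

        # Metropolis-Hastings step for maximizing a potential phi over discrete configurations:
        # propose a feasible neighbour, accept with probability min(1, exp(beta * delta_phi)).
        # Slowly increasing beta concentrates the chain on potential maximizers.
        import math, random

        def metropolis_step(config, neighbours, phi, beta):
            proposal = random.choice(neighbours(config))
            accept = min(1.0, math.exp(beta * (phi(proposal) - phi(config))))
            return proposal if random.random() < accept else config

        # Toy example: three agents on a line, rewarded for occupying target slots.
        target = {2, 3, 4}
        phi = lambda cfg: len(set(cfg) & target)

        def neighbours(cfg):
            moves = []
            for i, pos in enumerate(cfg):
                for step in (-1, +1):
                    new = list(cfg)
                    new[i] = pos + step
                    if 0 <= new[i] <= 9 and len(set(new)) == len(new):   # on grid, no overlap
                        moves.append(tuple(new))
            return moves

        cfg = (0, 5, 9)
        for k in range(2000):
            cfg = metropolis_step(cfg, neighbours, phi, beta=0.01 * k)
        print(cfg, phi(cfg))    # typically ends with all agents on the target slots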

  14. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem; Methodes de decomposition de domaine pour la formulation mixte duale du probleme critique de la diffusion des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, P

    2007-12-15

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the efficiency in parallel of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)

  15. Scenario Analysis In The Calculation Of Investment Efficiency–The Problem Of Formulating Assumptions

    Directory of Open Access Journals (Sweden)

    Dittmann Iwona

    2015-09-01

    Full Text Available This article concerns the problem of formulating assumptions in scenario analysis for investments which consist of the renting out of an apartment. The article attempts to indicate the foundations for the formulation of assumptions on the basis of observed retrospective regularities. It includes theoretical considerations regarding scenario design, as well as the results of studies on the past behavior of the quantities which determined, or could help estimate, the values of the individual explanatory variables for a chosen measure of investment profitability (MIRRFCFE). The dynamics of and correlation between the variables were studied. The research was based on quarterly data from local residential real estate markets in Poland (in the six largest cities in the years 2006–2014), as well as on data from the financial market.

  16. Problem Resolution through Electronic Mail: A Five-Step Model.

    Science.gov (United States)

    Grandgenett, Neal; Grandgenett, Don

    2001-01-01

    Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…

  17. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zamzam, Admed S. [University of Minnesota; Sidiropoulos, Nicholas D. [University of Minnesota; Taylor, Josh A. [University of Toronto

    2018-01-12

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
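
    The distributed step can be pictured with a minimal consensus-ADMM sketch. The quadratic costs, their coefficients and the interpretation of the shared variable as a pump power are hypothetical stand-ins, not the OWPF model; the sketch only shows how the two operators' local updates and the dual update drive their copies of the shared variable to agreement.

        # Minimal consensus-ADMM sketch (hypothetical quadratic costs, not the OWPF model):
        # a "power" objective a_pow*(p - b_pow)^2 and a "water" objective a_wat*(p - b_wat)^2
        # share one variable p (think of a pump power); ADMM drives the local copies x and z
        # of p to agreement while each operator only minimizes its own augmented cost.
        a_pow, b_pow = 2.0, 3.0
        a_wat, b_wat = 1.0, 5.0
        rho, u = 1.0, 0.0            # ADMM penalty parameter and scaled dual variable
        x = z = 0.0

        for _ in range(100):
            # both updates are closed-form because the local costs are quadratic
            x = (2 * a_pow * b_pow + rho * (z - u)) / (2 * a_pow + rho)   # "power" update
            z = (2 * a_wat * b_wat + rho * (x + u)) / (2 * a_wat + rho)   # "water" update
            u = u + x - z                                                 # dual update

        # analytic optimum: (a_pow*b_pow + a_wat*b_wat) / (a_pow + a_wat) = 11/3
        print(f"consensus value = {x:.3f}")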

  18. Measurement of the Retention Time of Different Ophthalmic Formulations with Ultrahigh-Resolution Optical Coherence Tomography.

    Science.gov (United States)

    Gagliano, Caterina; Papa, Vincenzo; Amato, Roberta; Malaguarnera, Giulia; Avitabile, Teresio

    2018-04-01

    Purpose/aim of the study: The purpose of this study was to measure the pre-corneal retention time of two marketed formulations (eye drops and eye gel) of a steroid-antibiotic fixed combination (FC) containing 0.1% dexamethasone and 0.3% netilmicin. Pre-corneal retention time was evaluated in 16 healthy subjects using ultrahigh-resolution anterior segment spectral domain optical coherence tomography (OCT). All subjects randomly received both formulations of the FC (Netildex, SIFI, Italy). Central tear film thickness (CTFT) was measured before instillation (time 0) and then after 1, 10, 20, 30, 40, 50, 60 and 120 min. The pre-corneal retention time was calculated by plotting CTFT as a function of time. Differences between time points and groups were analyzed by Student's t-test. CTFT increased significantly after the instillation of the eye gel formulation (p < 0.001). CTFT reached its maximum value 1 min after instillation and returned to baseline after 60 min. No effect on CTFT was observed after the instillation of eye drops. The difference between the two formulations was statistically significant at time 1 min (p < 0.0001), 10 min (p < 0.001) and 20 min (p < 0.01). The FC formulated as eye gel was retained on the ocular surface longer than the corresponding eye drop solution. Consequently, the use of the eye gel might extend the interval between instillations and decrease the frequency of administration.

  19. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.

    2007-12-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the efficiency in parallel of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
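
    A heavily simplified sketch of the iterate-and-exchange-interface-data idea is given below for a 1D Poisson source problem with overlapping subdomains and Dirichlet interface values; the dissertation instead treats the mixed dual eigenvalue diffusion problem with MINOS and Robin interface conditions, so the grid size, overlap and iteration count here are arbitrary choices for illustration only.

        # Heavily simplified alternating-Schwarz sketch for -u'' = 1 on (0, 1), u(0)=u(1)=0,
        # with two overlapping subdomains and Dirichlet interface data. It only illustrates
        # the exchange of interface information between subdomain solves; it is not the
        # mixed dual eigenvalue solver, and it does not use Robin conditions.
        import numpy as np

        n = 101
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        u = np.zeros(n)

        def solve_subdomain(f, left_val, right_val, m):
            """Direct solve of -u'' = f on m interior points with Dirichlet boundary values."""
            A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h**2
            rhs = f.copy()
            rhs[0] += left_val / h**2
            rhs[-1] += right_val / h**2
            return np.linalg.solve(A, rhs)

        i1, i2 = 60, 40                    # subdomain 1: [0, x[i1]]; subdomain 2: [x[i2], 1]
        for _ in range(30):
            # subdomain 1 takes its right interface value from the current global iterate
            u[1:i1] = solve_subdomain(np.ones(i1 - 1), 0.0, u[i1], i1 - 1)
            # subdomain 2 takes its left interface value from the freshly updated iterate
            u[i2 + 1:n - 1] = solve_subdomain(np.ones(n - 2 - i2), u[i2], 0.0, n - 2 - i2)

        print("max error vs exact x(1-x)/2:", np.max(np.abs(u - x * (1.0 - x) / 2.0)))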

  20. Photonic Band Structure of Dispersive Metamaterials Formulated as a Hermitian Eigenvalue Problem

    KAUST Repository

    Raman, Aaswath

    2010-02-26

    We formulate the photonic band structure calculation of any lossless dispersive photonic crystal and optical metamaterial as a Hermitian eigenvalue problem. We further show that the eigenmodes of such lossless systems provide an orthonormal basis, which can be used to rigorously describe the behavior of lossy dispersive systems in general. © 2010 The American Physical Society.
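
    For a lossless, nondispersive 1D photonic crystal the Hermitian structure can be made concrete with a small plane-wave-expansion sketch; the two-layer geometry and truncation below are hypothetical, and the paper's contribution is extending such Hermitian formulations to dispersive media, which this sketch does not attempt.

        # Sketch: 1D plane-wave expansion for a lossless, nondispersive two-layer photonic
        # crystal, assembled as a Hermitian eigenvalue problem (hypothetical geometry; the
        # paper's dispersive-media formulation is more general than this example).
        import numpy as np

        a = 1.0                  # lattice period
        d = 0.4 * a              # thickness of layer 1
        eps1, eps2 = 13.0, 1.0   # layer permittivities
        N = 15                   # plane waves G_n = 2*pi*n/a with n = -N..N

        def eta(G):
            """Fourier coefficient of 1/eps(x) for the two-layer unit cell."""
            if abs(G) < 1e-12:
                return d / (a * eps1) + (a - d) / (a * eps2)
            return (1.0 / eps1 - 1.0 / eps2) * (1.0 - np.exp(-1j * G * d)) / (1j * G * a)

        G = 2.0 * np.pi * np.arange(-N, N + 1) / a

        def bands(k):
            """Normalized eigenfrequencies omega*a/(2*pi*c) at Bloch wavenumber k."""
            M = np.array([[(k + Gi) * eta(Gi - Gj) * (k + Gj) for Gj in G] for Gi in G])
            lam = np.linalg.eigvalsh(M)        # Hermitian matrix: real eigenvalues
            return np.sqrt(np.abs(lam)) * a / (2.0 * np.pi)

        for k in (0.0, np.pi / a):             # zone centre and zone edge
            print(f"k = {k:5.3f}: lowest bands", bands(k)[:3])

    Because the assembled matrix is Hermitian, numpy.linalg.eigvalsh returns real eigenvalues, and the corresponding eigenvectors form the kind of orthonormal basis referred to in the abstract.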

  1. Photonic Band Structure of Dispersive Metamaterials Formulated as a Hermitian Eigenvalue Problem

    KAUST Repository

    Raman, Aaswath; Fan, Shanhui

    2010-01-01

    We formulate the photonic band structure calculation of any lossless dispersive photonic crystal and optical metamaterial as a Hermitian eigenvalue problem. We further show that the eigenmodes of such lossless systems provide an orthonormal basis, which can be used to rigorously describe the behavior of lossy dispersive systems in general. © 2010 The American Physical Society.

  2. Rival framings: A framework for discovering how problem formulation uncertainties shape risk management trade-offs in water resources systems

    Science.gov (United States)

    Quinn, J. D.; Reed, P. M.; Giuliani, M.; Castelletti, A.

    2017-08-01

    Managing water resources systems requires coordinated operation of system infrastructure to mitigate the impacts of hydrologic extremes while balancing conflicting multisectoral demands. Traditionally, recommended management strategies are derived by optimizing system operations under a single problem framing that is assumed to accurately represent the system objectives, tacitly ignoring the myriad of effects that could arise from simplifications and mathematical assumptions made when formulating the problem. This study illustrates the benefits of a rival framings framework in which analysts instead interrogate multiple competing hypotheses of how complex water management problems should be formulated. Analyzing rival framings helps discover unintended consequences resulting from inherent biases of alternative problem formulations. We illustrate this on the monsoonal Red River basin in Vietnam by optimizing operations of the system's four largest reservoirs under several different multiobjective problem framings. In each rival framing, we specify different quantitative representations of the system's objectives related to hydropower production, agricultural water supply, and flood protection of the capital city of Hanoi. We find that some formulations result in counterintuitive behavior. In particular, policies designed to minimize expected flood damages inadvertently increase the risk of catastrophic flood events in favor of hydropower production, while min-max objectives commonly used in robust optimization provide poor representations of system tradeoffs due to their instability. This study highlights the importance of carefully formulating and evaluating alternative mathematical abstractions of stakeholder objectives describing the multisectoral water demands and risks associated with hydrologic extremes.

  3. FORMULATION OF MATHEMATICAL PROBLEM DESCRIBING PHYSICAL AND CHEMICAL PROCESSES AT CONCRETE CORROSION

    Directory of Open Access Journals (Sweden)

    Sergey V. Fedosov

    2017-06-01

    Full Text Available The article deals with the relevance of new scientific research focused on modeling the physical and chemical processes occurring in cement concretes during their service life. The basic types of concrete corrosion are described. The problem of mass transfer processes in a flat reinforced concrete wall under concrete corrosion of the first and second types has been mathematically formulated.

  4. Impact of polymer formulations on neointimal proliferation after zotarolimus-eluting stent with different polymers: insights from the RESOLUTE trial.

    Science.gov (United States)

    Waseda, Katsuhisa; Ako, Junya; Yamasaki, Masao; Koizumi, Tomomi; Sakurai, Ryota; Hongo, Yoichiro; Koo, Bon-Kwon; Ormiston, John; Worthley, Stephen G; Whitbourn, Robert J; Walters, Darren L; Meredith, Ian T; Fitzgerald, Peter J; Honda, Yasuhiro

    2011-06-01

    Polymer formulation may affect the efficacy of drug-eluting stents. Resolute, Endeavor, and ZoMaxx are zotarolimus-eluting stents with different stent platforms and different polymer coatings and have been tested in clinical trials. The aim of this analysis was to compare the efficacy of zotarolimus-eluting stents with different polymers. Data were obtained from the first-in-man trial or first randomized trials of each stent, The Clinical RESpOnse EvaLUation of the MedTronic Endeavor CR ABT-578 Eluting Coronary Stent System in De Novo Native Coronary Artery Lesions (RESOLUTE), Randomized Controlled Trial to Evaluate the Safety and Efficacy of the Medtronic AVE ABT-578 Eluting Driver Coronary Stent in De Novo Native Coronary Artery Lesions (ENDEAVOR II), and ZoMaxx I trials. Follow-up intravascular ultrasound analyses (8 to 9 months of follow-up) were possible in 353 patients (Resolute: 88, Endeavor: 98, ZoMaxx: 82, Driver: 85). Volume index (volume/stent length) was obtained for vessel, stent, lumen, peristent plaque, and neointima. Cross-sectional narrowing was defined as neointimal area divided by stent area (%). Neointima-free frame ratio was calculated as the number of frames without intravascular ultrasound-detectable neointima divided by the total number of frames within the stent. At baseline, vessel, lumen, and peristent plaque volume index were not significantly different among the 4 stent groups. At follow-up, percent neointimal obstruction was significantly lower in Resolute compared with Endeavor, ZoMaxx, and Driver (Resolute: 3.7±4.0%, Endeavor: 17.5±10.1%, ZoMaxx: 14.6±8.1%, Driver: 29.4±17.2%). The polymer used in Resolute independently correlated with neointimal suppression among the 3 zotarolimus-eluting stents. The different polymer formulations significantly affect the relative amount of neointima for zotarolimus-eluting stents. URL: http://www.clinicaltrials.gov. Unique identifier: NCT00248079.

  5. Metaheuristics applied to vehicle routing. A case study. Part 1: formulating the problem

    Directory of Open Access Journals (Sweden)

    Guillermo González Vargas

    2006-09-01

    Full Text Available This paper deals with the VRP (vehicle routing problem) mathematical formulation and presents some methodologies used by different authors to solve VRP variations. This paper is presented as the springboard for introducing future papers about a manufacturing company's location decisions based on the total distance traveled to distribute its product.

  6. On Newton-Raphson formulation and algorithm for displacement based structural dynamics problem with quadratic damping nonlinearity

    Directory of Open Access Journals (Sweden)

    Koh Kim Jie

    2017-01-01

    Full Text Available Quadratic damping nonlinearity is challenging for displacement-based structural dynamics problems, as the problem is nonlinear in the time derivative of the primitive variable. For such nonlinearity, the formulation of the tangent stiffness matrix is not clearly presented in the literature. Consequently, ambiguity related to the kinematics update arises when implementing the time integration-iterative algorithm. In the present work, an Euler-Bernoulli beam vibration problem with quadratic damping nonlinearity is addressed, as the main source of quadratic damping nonlinearity arises from drag force estimation, which is generally valid only for slender structures. In the Newton-Raphson formulation, the tangent stiffness components associated with the quadratic damping nonlinearity require velocity input for their evaluation. For this reason, two mathematically equivalent algorithm structures with different kinematics arrangements are tested. Both algorithm structures result in the same accuracy and convergence characteristics of the solution.
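
    The role of the velocity input in the tangent can be seen in a minimal single-degree-of-freedom sketch (hypothetical mass, damping and stiffness values; the paper treats an Euler-Bernoulli beam, not this SDOF system): the quadratic damping force c*v*|v| contributes a 2*c*|v| term to the consistent tangent, evaluated from the current velocity iterate inside the Newton-Raphson loop of a Newmark step.

        # Minimal sketch: Newmark time stepping with a Newton-Raphson loop for the SDOF
        # equation m*a + c*v*|v| + k*u = f(t) (hypothetical parameters; the paper treats an
        # Euler-Bernoulli beam). The 2*c*|v| term in the tangent is the velocity-dependent
        # contribution of the quadratic damping, evaluated from the current velocity iterate.
        import numpy as np

        m, c, k = 1.0, 0.5, 40.0
        f = lambda t: 10.0 * np.sin(2.0 * np.pi * t)
        h, beta, gamma = 0.002, 0.25, 0.5

        u, v, a = 0.0, 0.0, f(0.0) / m
        for n in range(1, 2001):
            t = n * h
            u_new = u                                 # initial guess: previous displacement
            for _ in range(20):                       # Newton-Raphson iterations
                a_new = (u_new - u - h * v - h * h * (0.5 - beta) * a) / (beta * h * h)
                v_new = v + h * ((1.0 - gamma) * a + gamma * a_new)
                R = m * a_new + c * v_new * abs(v_new) + k * u_new - f(t)
                Kt = m / (beta * h * h) + 2.0 * c * abs(v_new) * gamma / (beta * h) + k
                du = -R / Kt
                u_new += du
                if abs(du) < 1e-12:
                    break
            u, v, a = u_new, v_new, a_new

        print(f"displacement at t = {t:.3f} s: {u:.5f}")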

  7. A Mathematical Formulation of the SCOLE Control Problem. Part 2: Optimal Compensator Design

    Science.gov (United States)

    Balakrishnan, A. V.

    1988-01-01

    The study initiated in Part 1 of this report is concluded and optimal feedback control (compensator) design for stability augmentation is considered, following the mathematical formulation developed in Part 1. Co-located (rate) sensors and (force and moment) actuators are assumed, and allowing for both sensor and actuator noise, stabilization is formulated as a stochastic regulator problem. Specializing the general theory developed by the author, a complete, closed form solution (believed to be new with this report) is obtained, taking advantage of the fact that the inherent structural damping is light. In particular, it is possible to solve in closed form the associated infinite-dimensional steady-state Riccati equations. The SCOLE model involves associated partial differential equations in a single space variable, but the compensator design theory developed is far more general since it is given in the abstract wave equation formulation. The results thus hold for any multibody system so long as the basic model is linear.
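
    In a finite-dimensional caricature, the same compensator structure amounts to solving a control Riccati equation and a filter Riccati equation and combining the two gains; the sketch below does this for a single lightly damped mode with hypothetical weights and noise intensities (the report itself obtains closed-form solutions of the corresponding infinite-dimensional Riccati equations, which this sketch does not reproduce).

        # Finite-dimensional LQG-style sketch for one lightly damped mode (hypothetical
        # numbers): an ordinary matrix Riccati computation standing in for the closed-form
        # infinite-dimensional Riccati solutions obtained in the report.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        omega, zeta = 2.0, 0.01                        # hypothetical lightly damped mode
        A = np.array([[0.0, 1.0], [-omega**2, -2.0 * zeta * omega]])
        B = np.array([[0.0], [1.0]])                   # force actuator
        C = np.array([[0.0, 1.0]])                     # co-located rate sensor
        Q, R = np.eye(2), np.array([[0.1]])            # state / control weights
        W, V = 0.01 * np.eye(2), np.array([[0.001]])   # process / sensor noise intensities

        X = solve_continuous_are(A, B, Q, R)           # control Riccati equation
        K = np.linalg.solve(R, B.T @ X)                # optimal state-feedback gain
        P = solve_continuous_are(A.T, C.T, W, V)       # filter (estimation) Riccati equation
        L = P @ C.T @ np.linalg.inv(V)                 # Kalman filter gain

        A_comp = A - B @ K - L @ C                     # observer-based compensator dynamics
        print("feedback gain K =", K)
        print("compensator eigenvalues:", np.linalg.eigvals(A_comp))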

  8. Global optimization for overall HVAC systems - Part I problem formulation and analysis

    International Nuclear Information System (INIS)

    Lu Lu; Cai Wenjian; Chai, Y.S.; Xie Lihua

    2005-01-01

    This paper presents global optimization technologies for overall heating, ventilating and air conditioning (HVAC) systems. The objective function and constraints of the global optimization are formulated based on mathematical models of the major components. All these models are associated with power consumption components and heat exchangers for transferring cooling load. The characteristics of all the major components are briefly introduced by models, and the interactions between them are analyzed and discussed to show the complications of the problem. According to the characteristics of the operating components, the complicated original optimization problem for overall HVAC systems is transformed and simplified into a compact form ready for optimization.

  9. A note on the uniqueness of 2D elastostatic problems formulated by different types of potential functions

    Science.gov (United States)

    Guerrero, José Luis Morales; Vidal, Manuel Cánovas; Nicolás, José Andrés Moreno; López, Francisco Alhama

    2018-05-01

    New additional conditions required for the uniqueness of the 2D elastostatic problems formulated in terms of potential functions for the derived Papkovich-Neuber representations are studied. Two cases are considered, each of them formulated by the scalar potential function plus one of the rectangular non-zero components of the vector potential function. For these formulations, in addition to the original (physical) boundary conditions, two new additional conditions are required. In addition, for the complete Papkovich-Neuber formulation, expressed by the scalar potential plus two components of the vector potential, the additional conditions established previously for the three-dimensional case in a z-convex domain can be applied. To show the usefulness of these new conditions in a numerical scheme, two applications are numerically solved by the network method for the three cases of potential formulations.

  10. Ordinal Regression Based Subpixel Shift Estimation for Video Super-Resolution

    Directory of Open Access Journals (Sweden)

    Petrovic Nemanja

    2007-01-01

    Full Text Available We present a supervised learning-based approach for subpixel motion estimation which is then used to perform video super-resolution. The novelty of this work is the formulation of the problem of subpixel motion estimation in a ranking framework. The ranking formulation is a variant of the classification and regression formulations, in which the ordering present in the class labels, namely the shift between patches, is explicitly taken into account. Finally, we demonstrate the applicability of our approach by super-resolving synthetically generated images with global subpixel shifts and enhancing real video frames by accounting for both local integer and subpixel shifts.

  11. Between universalism and relativism: a conceptual exploration of problems in formulating and applying international biomedical ethical guidelines.

    Science.gov (United States)

    Tangwa, G B

    2004-02-01

    In this paper, the author attempts to explore some of the problems connected with the formulation and application of international biomedical ethical guidelines, with particular reference to Africa. Recent attempts at revising and updating some international medical ethical guidelines have been bedevilled by intractable controversies and wrangling regarding both their content and formulation. From the vantage position of relative familiarity with both African and Western contexts, and the privilege of having been involved in the revision and updating of one of the international ethical guidelines, the author reflects broadly on these issues and attempts to prescribe an approach, from both theoretical and practical angles, liable to mitigate, if not completely eliminate, some of the problems and difficulties.

  12. Two- and three-index formulations of the minimum cost multicommodity k-splittable flow problem

    DEFF Research Database (Denmark)

    Gamst, Mette; Jensen, Peter Neergaard; Pisinger, David

    2010-01-01

    The multicommodity flow problem (MCFP) considers the efficient routing of commodities from their origins to their destinations subject to capacity restrictions and edge costs. Baier et al. [G. Baier, E. Köhler, M. Skutella, On the k-splittable flow problem, in: 10th Annual European Symposium on Algorithms, 2002, 101–113] introduced the maximum flow multicommodity k-splittable flow problem (MCkFP), where each commodity may use at most k paths between its origin and its destination. This paper studies the NP-hard minimum cost multicommodity k-splittable flow problem (MCMCkFP), in which a given flow of commodities has to be satisfied at the lowest possible cost. The problem has applications in transportation problems where a number of commodities must be routed, using a limited number of distinct transportation units for each commodity. Based on a three-index formulation by Truffot et al. [J. Truffot, C. ...], ...

  13. Putting problem formulation at the forefront of GMO risk analysis.

    Science.gov (United States)

    Tepfer, Mark; Racovita, Monica; Craig, Wendy

    2013-01-01

    When applying risk assessment and the broader process of risk analysis to decisions regarding the dissemination of genetically modified organisms (GMOs), the process has a tendency to become remarkably complex. Further, as greater numbers of countries consider authorising the large-scale dissemination of GMOs, and as GMOs with more complex traits reach late stages of development, there has been increasing concern about the burden posed by the complexity of risk analysis. We present here an improved approach for GMO risk analysis that gives a central role to problem formulation. Further, the risk analysis strategy has been clarified and simplified in order to make rigorously scientific risk assessment and risk analysis more broadly accessible to diverse stakeholder groups.

  14. Numerical convergence for a sewage disposal problem

    OpenAIRE

    Alvarez-Vázquez, L.J.; Martínez, A.; Rodríguez, C.; Vázquez-Méndez, M.E.

    2001-01-01

    The management of sewage disposal and the design of wastewater treatment systems can be formulated as a constrained pointwise optimal control problem. In this paper we study the convergence of the numerical resolution for the corresponding state system by means of a characteristics Galerkin method. The main difficulty of the problem is due to the existence of Radon measures in the right-hand side of the state system. Finally, we present numerical results for a realistic problem posed in a ria...

  15. The formulation of a local values-based recovery program (learning from the experience of the provincial government of East Nusa Tenggara, Indonesia)

    Directory of Open Access Journals (Sweden)

    Nursalam

    2014-01-01

    Full Text Available The research aims to determine how policy formulation for recovery after disasters and social conflicts can be socially designed. The method used in the study is a qualitative research design, with data collection through documentation and interviews with key informants. The recovery of public life after reconstruction following natural disasters and social conflicts demands greater government attention to solving the problems that arise, through the formulation of programs oriented to local values. Such a policy is important given that the public is the target whose interests are to be met, and its social life is basically guided by values that serve as a guide in achieving a common goal. The formulation of a values-based recovery program is essentially a process of public policy formulation design that is also a social design process relying on two dimensions: (1) a value appreciation of the relevant actors; (2) an orientation toward conflict resolution, problem solving, and change. The first dimension concerns an administrator's willingness to listen to other voices, share in others' experiences, and gain new knowledge. The second describes the administrator's orientation toward conflict resolution, problem solving, and change; the administrator's actions can range from proactive to reactive. The research indicates that the experience of the local government in the province of East Nusa Tenggara after disaster and conflict should be a lesson for having successfully implemented a local values-based formulation design in formulating regional development programs.

  16. Flow Formulations for Curriculum-based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian Fink; Kristiansen, Simon; Sørensen, Matias

    2017-01-01

    In this paper we present two mixed-integer programming formulations for the Curriculum-based Course Timetabling Problem (CTT). We show that the formulations contain underlying network structures by dividing the CTT into two separate models and then connecting the two models using flow formulation techniques. The first mixed-integer programming formulation is based on an underlying minimum cost flow problem, which decreases the number of integer variables significantly and improves the performance compared to an intuitive mixed-integer programming formulation. The second formulation is based on ... Computational results include a lower bound on one data instance in the benchmark data set from the second international timetabling competition. Regarding upper bounds, the formulation based on the minimum cost flow problem performs better on average than other mixed integer programming approaches for the CTT.
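
    For readers unfamiliar with the underlying network structure, the following toy sketch shows a minimum cost flow computation with networkx on a hypothetical three-slot network; it is only meant to illustrate the kind of subproblem the formulation exploits and is not the CTT model itself.

        # Toy minimum cost flow with networkx (hypothetical three-slot network, not the
        # CTT model): three lectures flow from a source to time slots at minimum penalty.
        import networkx as nx

        G = nx.DiGraph()
        G.add_node("source", demand=-3)                # three lectures to schedule
        G.add_node("mon_9", demand=1)
        G.add_node("mon_10", demand=1)
        G.add_node("tue_9", demand=1)
        # edge weights play the role of soft-constraint penalties
        G.add_edge("source", "mon_9", capacity=1, weight=0)
        G.add_edge("source", "mon_10", capacity=2, weight=1)
        G.add_edge("source", "tue_9", capacity=2, weight=3)

        flow = nx.min_cost_flow(G)
        print(flow, "| cost =", nx.cost_of_flow(G, flow))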

  17. Formulation and solution of the classical seashell problem

    International Nuclear Information System (INIS)

    Illert, C.

    1987-01-01

    Despite an extensive scholarly literature dating back to classical times, seashell geometries have hitherto resisted rigorous theoretical analysis, leaving applied scientists to adopt a directionless empirical approach toward classification. The voluminousness of recent paleontological literature demonstrates the importance of this problem to applied scientists, but in no way reflects corresponding conceptual or theoretical advances beyond the XIX century thinking which was so ably summarized by Sir D'Arcy Wentworth Thompson in 1917. However, in this foundation paper for the newly emerging science of theoretical conchology, unifying theoretical considerations for the first time permits a rigorous formulation and a complete solution of the problem of biological shell geometries. Shell coiling about the axis of symmetry can be deduced from first principles using energy considerations associated with incremental growth. The present paper shows that those shell apertures which are incurved ("cowrielike"), outflared ("stromblike") or even backturned ("Opisthostomoidal") are merely special cases of a much broader spectrum of "allowable" energy-efficient growth trajectories (tensile elastic clockspring spirals), many of which were widely used by Cretaceous ammonites. Energy considerations also dictate shell growth along the axis of symmetry, thus seashell spires can be understood in terms of certain special figures of revolution (Moebius elastic conoids), the better-known coeloconoidal and cyrtoconoidal shell spires being only two special cases arising from a whole class of topologically possible, energy efficient and biologically observed geometries. The "wires" and "conoids" of the present paper are instructive conceptual simplifications sufficient for present purposes. A second paper will later deal with generalized tubular surfaces in thre...

  18. Coping with cannabis in a Caribbean country : from problem formulation to going public

    Directory of Open Access Journals (Sweden)

    Hymie Rubenstein

    1998-07-01

    Full Text Available Analyzes the dialectic between problem discovery and formulation, ethical considerations, and the public dissemination of research results. The author describes his personal experience of fieldwork, the moral-ethical dilemmas it involved, and the circulation of research findings on cannabis production and consumption in St. Vincent. He became frustrated that his academic publications were only accessible to a tiny portion of St. Vincent's population and therefore decided to publish about cannabis in the local media.

  19. Stimulating technological innovation : problem identification and intervention formulation with the technological innovation systems framework

    OpenAIRE

    Kieft, A.C.

    2017-01-01

    The technological innovation systems (TIS) framework provides a theory to understand under what conditions technological innovations are successfully developed and implemented. The objective of this dissertation is to further strengthen this TIS intervention framework, which is the part of the TIS theoretical framework that facilitates the identification of inhibiting problems and the formulation of interventions. Theoretical adaptations and extensions are proposed and their merits subsequent...

  20. Multimode Preemptive Resource Investment Problem Subject to Due Dates for Activities: Formulation and Solution Procedure

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2014-01-01

    Full Text Available The preemptive multimode resource investment problem is investigated. The objective is to minimize the total renewable/nonrenewable resource costs and earliness-tardiness costs given a project deadline and due dates for the activities. In this problem setting, preemption is allowed with no setup cost or time. The project contains activities interrelated by finish-start type precedence relations with a time lag of zero, which require a set of renewable and nonrenewable resources. The problem formed in this way is NP-hard. A mixed integer programming formulation is proposed for the problem and a parameter-tuned genetic algorithm (GA) is proposed to solve it. To evaluate the performance of the proposed algorithm, 120 test problems are used. Comparative statistical results reveal that the proposed GA is efficient and effective in terms of the objective function and computational times.
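
    A generic GA skeleton of the kind referred to above can be sketched as follows; the activity data, the serial-schedule makespan surrogate and the penalty weight are hypothetical, and the sketch omits precedence relations, preemption and the renewable/nonrenewable resource structure of the actual problem.

        # Generic GA skeleton (hypothetical toy data; not the paper's tuned GA and without
        # precedence, preemption or the full resource structure): choose an execution mode
        # for each of 10 activities to minimize mode cost plus a tardiness-style penalty.
        import random

        random.seed(1)
        N_ACT, MODES, DEADLINE = 10, 3, 35
        COST = [[random.randint(1, 9) for _ in range(MODES)] for _ in range(N_ACT)]
        DUR = [[random.randint(2, 6) for _ in range(MODES)] for _ in range(N_ACT)]

        def objective(ind):
            """Mode cost plus a penalty if the serial-schedule makespan misses the deadline."""
            cost = sum(COST[i][m] for i, m in enumerate(ind))
            makespan = sum(DUR[i][m] for i, m in enumerate(ind))
            return cost + 5 * max(0, makespan - DEADLINE)

        def tournament(pop):
            return min(random.sample(pop, 3), key=objective)

        def crossover(p1, p2):
            cut = random.randrange(1, N_ACT)
            return p1[:cut] + p2[cut:]

        def mutate(ind, rate=0.1):
            return [random.randrange(MODES) if random.random() < rate else m for m in ind]

        pop = [[random.randrange(MODES) for _ in range(N_ACT)] for _ in range(50)]
        for _ in range(200):
            pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(50)]

        best = min(pop, key=objective)
        print("best mode vector:", best, "| objective:", objective(best))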

  1. An exact approach for aggregated formulations

    DEFF Research Database (Denmark)

    Gamst, Mette; Spoorendonk, Simon; Røpke, Stefan

    Aggregating formulations is a powerful approach for transforming problems into tractable forms. Aggregation may lead to loss of information, i.e. the aggregated formulation may be an approximation of the original problem. In a branch-and-bound context, aggregation can also complicate branching, e.g. when optimality cannot be guaranteed by branching on aggregated variables. We present a generic exact solution method to remedy the drawbacks of aggregation. It combines the original and aggregated formulations and applies Benders' decomposition. We apply the method to the Split Delivery Vehicle Routing Problem.

  2. Hyper-resolution hydrological modeling: Completeness of Formulation, Appropriateness of Discretization, and Physical Limits of Predictability

    Science.gov (United States)

    Ogden, F. L.

    2017-12-01

    High-performance computing and the widespread availability of geospatial physiographic and forcing datasets have enabled consideration of flood impact predictions with longer lead times and more detailed spatial descriptions. We are now considering multi-hour flash flood forecast lead times at the subdivision level in so-called hydroblind regions away from the National Hydrography network. However, the computational demands of such models are high, necessitating a nested simulation approach. Research on hyper-resolution hydrologic modeling over the past three decades has illustrated some fundamental limits on predictability that are simultaneously related to runoff generation mechanism(s), antecedent conditions, rates and total amounts of precipitation, discretization of the model domain, and complexity or completeness of the model formulation. This latter point is an acknowledgement that in some ways hydrologic understanding in key areas related to land use, land cover, tillage practices, seasonality, and biological effects has some glaring deficiencies. This presentation represents a review of what is known about the interacting effects of precipitation amount, model spatial discretization, antecedent conditions, physiographic characteristics and model formulation completeness for runoff predictions. These interactions define a region in multidimensional forcing, parameter and process space where there are in some cases clear limits on predictability, and in other cases diminished uncertainty.

  3. Approximation in generalized Hardy classes and resolution of inverse problems for tokamaks

    International Nuclear Information System (INIS)

    Fisher, Y.

    2011-11-01

    This thesis concerns both the theoretical and constructive resolution of inverse problems for the isotropic diffusion equation in planar domains, simply and doubly connected. From partial Cauchy boundary data (potential, flux), we look for those quantities on the remaining part of the boundary, where no information is available, as well as inside the domain. The proposed approach proceeds by considering solutions to the diffusion equation as real parts of complex valued solutions to some conjugated Beltrami equation. These particular generalized analytic functions allow one to introduce Hardy classes, where the inverse problem is stated as a best constrained approximation issue (bounded extrema problem), and thereby is regularized. Hence, existence and smoothness properties, together with density results of traces on the boundary, ensure well-posedness. An application is studied: a free boundary problem for a magnetically confined plasma in the tokamak Tore Supra (CEA Cadarache, France). The resolution of the approximation problem on a suitable basis of functions (toroidal harmonics) leads to a qualification criterion for the estimated plasma boundary. A descent algorithm makes it decrease, and refines the estimations. The method does not require any integration of the solution in the overall domain. It furnishes very accurate numerical results, and could be extended to other devices, like JET or ITER. (author)

  4. Two-point boundary value and Cauchy formulations in an axisymmetrical MHD equilibrium problem

    International Nuclear Information System (INIS)

    Atanasiu, C.V.; Subbotin, A.A.

    1999-01-01

    In this paper we present two equilibrium solvers for axisymmetrical toroidal configurations, both based on the expansion in poloidal angle method. The first one has been conceived as a two-point boundary value solver in a system of coordinates with straight field lines, while the second one uses a well-conditioned Cauchy formulation of the problem in a general curvilinear coordinate system. In order to check the capability of our moment methods to describe equilibrium accurately, a comparison of the moment solutions with analytical solutions obtained for a Solov'ev equilibrium has been performed. (author)

  5. Applications of high-resolution spatial discretization scheme and Jacobian-free Newton–Krylov method in two-phase flow problems

    International Nuclear Information System (INIS)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2015-01-01

    Highlights: • Using high-resolution spatial schemes in solving two-phase flow problems. • Fully implicit time integration schemes. • Jacobian-free Newton–Krylov method. • Analytical solution for two-phase water faucet problem. - Abstract: The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high-order spatial accuracy in smooth regions and capture sharp spatial discontinuities without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids to two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and therefore improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists.
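
    The JFNK idea itself can be illustrated independently of the two-phase flow context: SciPy's newton_krylov only requires the nonlinear residual function and approximates the Jacobian-vector products internally. The boundary-value problem below is a generic example chosen for brevity, not the staggered-grid two-phase model of the paper.

        # Minimal JFNK illustration with SciPy's newton_krylov on a generic 1D nonlinear
        # boundary-value problem (not the staggered-grid two-phase flow solver above):
        #   u'' = exp(u) on (0, 1), with u(0) = u(1) = 0.
        import numpy as np
        from scipy.optimize import newton_krylov

        n = 200
        h = 1.0 / (n + 1)

        def residual(u):
            """Discrete residual; newton_krylov only needs this function and approximates
            the Jacobian-vector products by finite differences of it."""
            u_ext = np.concatenate(([0.0], u, [0.0]))        # Dirichlet boundary values
            return (u_ext[2:] - 2.0 * u_ext[1:-1] + u_ext[:-2]) / h**2 - np.exp(u)

        u = newton_krylov(residual, np.zeros(n), method="lgmres", f_tol=1e-10)
        print("max |u| =", np.max(np.abs(u)), "| final residual =", np.max(np.abs(residual(u))))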

  6. Robust Branch-Cut-and-Price for the Capacitated Minimum Spanning Tree Problem over a Large Extended Formulation

    DEFF Research Database (Denmark)

    Uchoa, Eduardo; Fukasawa, Ricardo; Lysgaard, Jens

    This paper presents a robust branch-cut-and-price algorithm for the Capacitated Minimum Spanning Tree Problem (CMST). The variables are associated with q-arbs, a structure that arises from a relaxation of the capacitated prize-collecting arborescence problem in order to make it solvable in pseudo-polynomial time. Traditional inequalities over the arc formulation, like Capacity Cuts, are also used. Moreover, a novel feature is introduced in this kind of algorithm. Powerful new cuts expressed over a very large set of variables can be added without increasing the complexity of the pricing subproblem...

  7. Quasi-Eulerian formulation for fluid-structure interaction

    International Nuclear Information System (INIS)

    Kennedy, J.M.; Belytschko, T.B.

    1979-01-01

    In this paper, recent developments of a quasi-Eulerian finite element formulation for the treatment of the fluid in fluid-structure interaction problems are described. The present formulation is applicable both to plane two-dimensional and axisymmetric three-dimensional problems. In order to reduce the noise associated with the convection terms, an amplification factor is used to implement an up-winding type scheme. The application of the method is illustrated in two problems which are of importance in nuclear reactor safety: 1. A two-dimensional model of a cross section of a subassembly configuration, where the quasi-Eulerian formulation is used to model the fluid adjacent to the structures and in the channel between the subassemblies. 2. Pressure transients in a straight pipe, where the axisymmetric formulation is used to model the fluid in the pipe. These results are compared to experimental results for these problems and agree quite well. The major problem in the application of these methods appears to be the automation of the scheme for moving the fluid nodes. Several alternative schemes are used in the problems described here, and a more general scheme which appears to offer a reasonable ... (orig.)

  8. General Relativity without paradigm of space-time covariance, and resolution of the problem of time

    Science.gov (United States)

    Soo, Chopin; Yu, Hoi-Lai

    2014-01-01

    The framework of a theory of gravity from the quantum to the classical regime is presented. The paradigm shift from full space-time covariance to spatial diffeomorphism invariance, together with clean decomposition of the canonical structure, yield transparent physical dynamics and a resolution of the problem of time. The deep divide between quantum mechanics and conventional canonical formulations of quantum gravity is overcome with a Schrödinger equation for quantum geometrodynamics that describes evolution in intrinsic time. Unitary time development with gauge-invariant temporal ordering is also viable. All Kuchar observables become physical; and classical space-time, with direct correlation between its proper times and intrinsic time intervals, emerges from constructive interference. The framework not only yields a physical Hamiltonian for Einstein's theory, but also prompts natural extensions and improvements towards a well behaved quantum theory of gravity. It is a consistent canonical scheme to discuss Horava-Lifshitz theories with intrinsic time evolution, and of the many possible alternatives that respect 3-covariance (rather than the more restrictive 4-covariance of Einstein's theory), Horava's "detailed balance" form of the Hamiltonian constraint is essentially pinned down by this framework. Issues in quantum gravity that depend on radiative corrections and the rigorous definition and regularization of the Hamiltonian operator are not addressed in this work.

  9. Mathematical modelling and numerical resolution of multi-phase compressible fluid flows problems

    International Nuclear Information System (INIS)

    Lagoutiere, Frederic

    2000-01-01

    This work deals with Eulerian compressible multi-species fluid dynamics, the species being either mixed or separated (with interfaces). The document is composed of three parts. The first part is devoted to the numerical resolution of model problems: the advection equation, the Burgers equation, and the Euler equations, in dimensions one and two. The goal is to find a precise method, especially for discontinuous initial conditions, and we develop non-dissipative algorithms. They are based on a downwind finite-volume discretization under some stability constraints. The second part treats the mathematical modelling of fluid mixtures. We construct and analyse a set of multi-temperature and multi-pressure models that admit an entropy, are symmetrizable and hyperbolic, but are not always conservative. In the third part, we apply the ideas developed in the first part (downwind discretization) to the numerical resolution of the partial differential problems constructed for fluid mixtures in the second part. We present some numerical results in dimensions one and two. (author) [fr
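
    For orientation only, the sketch below shows the standard first-order upwind finite-volume update for scalar advection with hypothetical data; the thesis develops less dissipative, downwind-controlled variants of this kind of update, which the sketch does not reproduce.

        # Baseline illustration only: one first-order upwind finite-volume solve of
        # u_t + a u_x = 0 with a > 0 and periodic boundaries (hypothetical data). The
        # thesis develops less dissipative downwind-controlled schemes, not shown here.
        import numpy as np

        nx_cells, a, cfl = 200, 1.0, 0.9
        dx = 1.0 / nx_cells
        dt = cfl * dx / a
        x = (np.arange(nx_cells) + 0.5) * dx
        u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)       # discontinuous initial data

        for _ in range(int(0.25 / dt)):
            flux = a * u                                     # upwind face flux for a > 0
            u = u - dt / dx * (flux - np.roll(flux, 1))      # conservative cell update

        print("mass conserved:", np.isclose(u.sum() * dx, 0.25))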

  10. A review of scheduling problem and resolution methods in flexible flow shop

    Directory of Open Access Journals (Sweden)

    Tian-Soon Lee

    2019-01-01

    Full Text Available The flexible flow shop (FFS) is defined as a multi-stage flow shop with multiple parallel machines. The FFS scheduling problem is a complex combinatorial problem which has been intensively studied in many real-world industries. This review paper gives a comprehensive exploration of the FFS scheduling problem and guides the reader through the different environmental assumptions, system constraints and objective functions relevant to future research work. The published papers are classified into two categories. The first covers the FFS system characteristics and constraints, including the problem differences and limitations defined by different studies. The second elaborates the scheduling performance evaluations, categorized into time-related, job-related and multi-objective criteria. In addition, the resolution approaches that have been used to solve FFS scheduling problems are discussed. This paper gives a comprehensive guide for the reader with respect to future research work on the FFS scheduling problem.

  11. Robust isotropic super-resolution by maximizing a Laplace posterior for MRI volumes

    Science.gov (United States)

    Han, Xian-Hua; Iwamoto, Yutaro; Shiino, Akihiko; Chen, Yen-Wei

    2014-03-01

    Magnetic resonance imaging can only acquire volume data with finite resolution due to various factors. In particular, the resolution in one direction (such as the slice direction) is much lower than in the others (such as the in-plane directions), yielding unrealistic visualizations. This study explores the reconstruction of isotropic-resolution MRI volumes from three orthogonal scans. The proposed super-resolution reconstruction is formulated as a maximum a posteriori (MAP) problem, which relies on the generative model of the acquired scans from the unknown high-resolution volume. Generally, the ensemble of deviations of the reconstructed high-resolution (HR) volume from the available LR scans in the MAP is represented as a Gaussian distribution, which usually results in some noise and artifacts in the reconstructed HR volume. Therefore, this paper investigates a robust super-resolution approach by formulating the deviation set as a Laplace distribution, which assumes sparsity in the deviation ensemble based on the insight that large deviations tend to appear only around some unexpected regions. In addition, in order to achieve a reliable HR MRI volume, we integrate priors such as bilateral total variation (BTV) and non-local means (NLM) into the proposed MAP framework for suppressing artifacts and enriching visual detail. We validate the proposed robust SR strategy using MRI mouse data with high resolution in two directions and low resolution in one direction, imaged in three orthogonal scans: axial, coronal and sagittal planes. Experiments verify that the proposed strategy achieves much better HR MRI volumes than the conventional MAP method, even with a very high magnification factor of 10.
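
    The Laplace (L1-type) data term can be illustrated with a toy 1D fusion sketch: several shifted, downsampled and partly corrupted observations are combined by gradient descent on a smoothed L1 data term plus a smoothed total-variation prior. The signal, operators, outlier and step sizes are hypothetical, and the sketch replaces the paper's BTV and NLM priors and 3D volumes with the simplest 1D analogues.

        # Toy 1D sketch of the Laplace (L1-type) MAP idea: fuse several shifted, 2x
        # downsampled, partly corrupted observations of a signal by gradient descent on a
        # smoothed L1 data term plus a smoothed total-variation prior. All operators and
        # numbers are hypothetical; the paper works on 3D volumes with BTV/NLM priors.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 64
        x_true = np.zeros(n)
        x_true[20:40] = 1.0                                   # simple box signal

        def down(x, shift):                                   # shift, then 2x average-downsample
            return np.roll(x, shift).reshape(-1, 2).mean(axis=1)

        def up(r, shift):                                     # adjoint of `down`
            return np.roll(np.repeat(r, 2) / 2.0, -shift)

        shifts = [0, 1, 2, 3]
        obs = [down(x_true, s) + 0.01 * rng.standard_normal(n // 2) for s in shifts]
        obs[2][5] = 5.0                                       # gross outlier in one observation

        def smooth_sign(r, eps=0.1):                          # derivative of sqrt(r^2 + eps^2)
            return r / np.sqrt(r * r + eps * eps)

        x = np.zeros(n)
        lam, step = 0.05, 0.05
        for _ in range(3000):
            grad = np.zeros(n)
            for s, y in zip(shifts, obs):
                grad += up(smooth_sign(down(x, s) - y), s)    # Laplace-type data term
            sgn = smooth_sign(np.diff(x, append=x[-1]))
            grad += lam * (np.roll(sgn, 1) - sgn)             # smoothed total-variation term
            x -= step * grad

        print("max abs reconstruction error:", np.max(np.abs(x - x_true)))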

  12. A novel approach for multiple mobile objects path planning: Parametrization method and conflict resolution strategy

    International Nuclear Information System (INIS)

    Ma, Yong; Wang, Hongwei; Zamirian, M.

    2012-01-01

    We present a new approach containing two steps to determine conflict-free paths for mobile objects in two and three dimensions with moving obstacles. Firstly, the shortest path of each object is set as the goal function, which is subject to collision-avoidance criteria, path smoothness, and velocity and acceleration constraints. This problem is formulated as a calculus of variations problem (CVP). Using a parametrization method, the CVP is converted to time-varying nonlinear programming problems (TNLPP) and then resolved. Secondly, the move sequence of the objects is assigned by a priority scheme; conflicts are resolved by a multilevel conflict resolution strategy. The efficiency of the approach is confirmed by numerical examples. -- Highlights: ► An approach with a parametrization method and conflict resolution strategy is proposed. ► The approach fits multi-object path planning in two and three dimensions. ► Single-object path planning and multi-object conflict resolution are used in order. ► The path of each object is obtained with the parametrization method in the first phase. ► Conflict-free paths are gained by multi-object conflict resolution in the second phase.

  13. Multilevel fast multipole method based on a potential formulation for 3D electromagnetic scattering problems.

    Science.gov (United States)

    Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome

    2013-06-01

    A combination of the multilevel fast multipole method (MLFMM) and boundary element method (BEM) can solve large-scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie's theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a nanoparticle gold-silver heterodimer calculated with MLFMM is compared with unmodified BEM calculations.

  14. Local normal vector field formulation for periodic scattering problems formulated in the spectral domain

    NARCIS (Netherlands)

    van Beurden, M.C.; Setija, Irwan

    2017-01-01

    We present two adapted formulations, one tailored to isotropic media and one for general anisotropic media, of the normal vector field framework previously introduced to improve convergence near arbitrarily shaped material interfaces in spectral simulation methods for periodic scattering geometries.

  15. SOLVE THE PROBLEM OF THE STRETCHING OF A HOLLOW CYLINDER WITH A DEFECT IN THE FORM OF A CAVITY WITH A CRACK IN THE ELASTIC-PLASTIC FORMULATION

    OpenAIRE

    Kharchenko V.V.; Ban’ko S.N.; Kobelsky S.V.; Kravchenko V.I.

    2014-01-01

    The results of calculating the stress state of a hollow cylinder with a defect in the form of a crack at the top of a cavity, in the elastic-plastic formulation, are presented. The calculation results are compared with the results of solving this problem in the elastic formulation and with the results of solving the problem of stretching a cylinder with a crack.

  16. Mass-flux subgrid-scale parameterization in analogy with multi-component flows: a formulation towards scale independence

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2012-11-01

    Full Text Available A generalized mass-flux formulation is presented, which no longer takes a limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which the scale separation is still satisfied, but fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is presented by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from increasing resolutions of operational forecast models without invoking a more extensive overhaul of parameterizations.

    The present formulation leads to an analogy of the large-scale atmospheric flow with multi-component flows. This analogy allows, quite generally, any subgrid-scale variability to be included in the mass-flux parameterization under SCA. These include stratiform clouds as well as cold pools in the boundary layer.

    An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flows as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. This formulation, as a result, ensures the lateral interaction of subgrid-scale variability crossing the grid boxes, which is missing in the current parameterizations based on vertical one-dimensional models, and leads to a reduction of the grid-size dependencies in its performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation, as a result, furthermore includes a self-contained description of subgrid-scale momentum transport.

    The main purpose of the present paper

  17. Generic Mathematical Programming Formulation and Solution for Computer-Aided Molecular Design

    DEFF Research Database (Denmark)

    Zhang, Lei; Cignitti, Stefano; Gani, Rafiqul

    2015-01-01

    This short communication presents a generic mathematical programming formulation for Computer-Aided Molecular Design (CAMD). A given CAMD problem, based on target properties, is formulated as a Mixed Integer Linear/Non-Linear Program (MILP/MINLP). The mathematical programming model presented here, which is formulated as an MILP/MINLP problem, considers first-order and second-order molecular groups for molecular structure representation and property estimation. It is shown that various CAMD problems can be formulated and solved through this model.
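
    A minimal flavour of such a formulation can be given with SciPy's MILP interface: integer counts of candidate groups are chosen so that a group-contribution sum matches a target property, with the absolute deviation minimized through an auxiliary continuous variable. The contribution values, target and structural bounds below are hypothetical, and the sketch omits the structural feasibility and second-order group constraints of a real CAMD model.

        # Minimal MILP sketch of the idea (requires SciPy >= 1.9; the group-contribution
        # values, target and bounds are hypothetical, not a real CAMD data set): choose
        # integer counts n_g of candidate groups so that sum(n_g * c_g) is close to a
        # target property value, minimizing the absolute deviation t.
        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        contrib = np.array([23.6, 31.9, 13.3, 52.1])      # hypothetical contributions
        target = 120.0
        n_groups = len(contrib)

        # decision vector: [n_1 ... n_4, t]; objective: minimize t
        c = np.zeros(n_groups + 1)
        c[-1] = 1.0

        A = np.vstack([
            np.append(contrib, -1.0),                      #  sum(n*c) - t <= target
            np.append(-contrib, -1.0),                     # -sum(n*c) - t <= -target
            np.append(np.ones(n_groups), 0.0),             #  2 <= total group count <= 8
        ])
        cons = LinearConstraint(A, [-np.inf, -np.inf, 2], [target, -target, 8])
        bounds = Bounds(np.zeros(n_groups + 1), np.append(np.full(n_groups, 6.0), np.inf))
        integrality = np.append(np.ones(n_groups), 0)      # n_g integer, t continuous

        res = milp(c=c, constraints=cons, integrality=integrality, bounds=bounds)
        counts = res.x[:n_groups].round().astype(int)
        print("group counts:", counts, "| property deviation:", round(res.x[-1], 2))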

  18. SOLVE THE PROBLEM OF THE STRETCHING OF A HOLLOW CYLINDER WITH A DEFECT IN THE FORM OF A CAVITY WITH A CRACK IN THE ELASTIC-PLASTIC FORMULATION

    Directory of Open Access Journals (Sweden)

    Kharchenko V.V.

    2014-12-01

    Full Text Available The results of calculating the stress state of a hollow cylinder with a defect in the form of a crack at the top of a cavity, in the elastic-plastic formulation, are presented. The calculation results are compared with the results of solving this problem in the elastic formulation and with the results of solving the problem of stretching a cylinder with a crack.

  19. New formulations on the finite element method for boundary value problems with internal/external boundary layers; Novas formulacoes de elementos finitos para problemas de valor de contorno com camadas limite interna/externa

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Luis Carlos Martins

    1998-06-15

    New Petrov-Galerkin finite element formulations for convection-diffusion problems with boundary layers are presented. Such formulations are based on a consistent new theory of discontinuous finite element methods. Existence and uniqueness of solutions for these problems in the new finite element spaces are demonstrated. Some numerical experiments show how the new formulations operate and also demonstrate their efficacy. (author)

  20. Mixed-hybrid finite element method for the transport equation and diffusion approximation of transport problems; Resolution de l'equation du transport par une methode d'elements finis mixtes-hybrides et approximation par la diffusion de problemes de transport

    Energy Technology Data Exchange (ETDEWEB)

    Cartier, J

    2006-04-15

    This thesis focuses on mathematical analysis, numerical resolution and modelling of the transport equations. First of all, we deal with the numerical approximation of the solution of the transport equations by using a mixed-hybrid scheme. We derive and study a mixed formulation of the transport equation, then we analyse the related variational problem and present the discretization and the main properties of the scheme. We particularly pay attention to the behavior of the scheme and we show its efficiency in the diffusion limit (when the mean free path is small in comparison with the characteristic length of the physical domain). We present academic benchmarks in order to compare our scheme with other methods in many physical configurations and validate our method on analytical test cases. Unstructured and very distorted meshes are used to validate our scheme. The second part of this thesis deals with two transport problems. The first one is devoted to the study of diffusion due to boundary conditions in a transport problem between two plane plates. The second one consists in modelling and simulating the radiative transfer phenomenon in the industrial context of inertial confinement fusion. (author)

  1. Explicit formulation of a nodal transport method for discrete ordinates calculations in two-dimensional fixed-source problems

    Energy Technology Data Exchange (ETDEWEB)

    Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Prolo Filho, Joao Francisco [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica, Estatistica e Fisica; Dias da Cunha, Rudnei; Basso Barichello, Liliane [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica

    2014-04-15

    In this work a study of two-dimensional fixed-source neutron transport problems, in Cartesian geometry, is reported. The approach reduces the complexity of the multidimensional problem by using a combination of nodal schemes and the Analytical Discrete Ordinates Method (ADO). The unknown leakage terms on the boundaries that arise from the derivation of the nodal scheme are incorporated into the problem source term, so as to couple the one-dimensional integrated solutions, which are made explicit in terms of the x and y spatial variables. The formulation leads to a considerable reduction of the order of the associated eigenvalue problems when combined with the usual symmetric quadratures, thereby providing solutions with a higher degree of computational efficiency. Reflective-type boundary conditions are introduced to represent the domain in a simpler form than that previously considered in connection with the ADO method. Numerical results obtained with the technique are provided and compared to those in the literature. (orig.)

  2. An exact approach for aggregated formulations

    DEFF Research Database (Denmark)

    Gamst, Mette; Spoorendonk, Simon

    Aggregating formulations is a powerful approach for transforming problems into more tractable forms. Aggregated formulations can, though, have drawbacks: some information may get lost in the aggregation and, put in a branch-and-bound context, branching may become very difficult. The paper includes general considerations on the types of problems for which the method is of particular interest. Furthermore, we prove the correctness of the procedure and consider how to include extensions such as cutting planes and advanced branching strategies.

  3. Need for appropriate formulations for children: the national institute of child health and human development-pediatric formulations initiative, part 1.

    Science.gov (United States)

    Giacoia, George P; Taylor-Zapata, Perdita; Mattison, Donald

    2007-01-01

    The development and compounding of pharmacotherapeutic formulations that are suitable for infants and young children can be a challenging problem. This problem results from the lack of knowledge on the acceptability of different dosage forms and formulations in children in relation to age and developmental status, as well as the lack of reliable documentation of formulations used in pediatric clinical trials. As part of its mandate under the Best Pharmaceuticals for Children Act to improve pediatric therapeutics, the National Institute of Child Health and Human Development has sponsored the Pediatric Formulation Initiative. The goal of this ongoing initiative is to address the issues and concerns associated with pediatric therapeutics by convening groups of researchers and experts in pediatric formulations from academia, pharmaceutical companies, the National Institutes of Health, and the U.S. Food and Drug Administration.

  4. Audits of radiopharmaceutical formulations

    International Nuclear Information System (INIS)

    Castronovo, F.P. Jr.

    1992-01-01

    A procedure for auditing radiopharmaceutical formulations is described. To meet FDA guidelines regarding the quality of radiopharmaceuticals, institutional radioactive drug research committees perform audits when such drugs are formulated away from an institutional pharmacy. All principal investigators who formulate drugs outside institutional pharmacies must pass these audits before they can obtain a radiopharmaceutical investigation permit. The audit team meets with the individual who performs the formulation at the site of drug preparation to verify that drug formulations meet identity, strength, quality, and purity standards; are uniform and reproducible; and are sterile and pyrogen free. This team must contain an expert knowledgeable in the preparation of radioactive drugs; a radiopharmacist is the most qualified person for this role. Problems that have been identified by audits include lack of sterility and apyrogenicity testing, formulations that are open to the laboratory environment, failure to use pharmaceutical-grade chemicals, inadequate quality control methods or records, inadequate training of the person preparing the drug, and improper unit dose preparation. Investigational radiopharmaceutical formulations, including nonradiolabeled drugs, must be audited before they are administered to humans. A properly trained pharmacist should be a member of the audit team.

  5. Operations research problems statements and solutions

    CERN Document Server

    Poler, Raúl; Díaz-Madroñero, Manuel

    2014-01-01

    The objective of this book is to provide a valuable compendium of problems as a reference for undergraduate and graduate students, faculty, researchers and practitioners of operations research and management science. These problems can serve as a basis for the development or study of assignments and exams. Also, they can be useful as a guide for the first stage of model formulation, i.e. the definition of a problem. The book is divided into 11 chapters that address the following topics: linear programming, integer programming, nonlinear programming, network modeling, inventory theory, queueing theory, decision trees, game theory, dynamic programming and Markov processes. Readers will find a considerable number of statements of operations research applications for management decision-making. The solutions of these problems are provided in a concise way, although all topics start with a more developed resolution. The proposed problems are based on the research experience of the authors in real-world com...

  6. Divergence identities in curved space-time. A resolution of the stress-energy problem

    International Nuclear Information System (INIS)

    Yilmaz, H.; Tufts Univ., Medford, MA

    1989-01-01

    It is noted that the joint use of two basic differential identities in curved space-time, namely 1) the Einstein-Hilbert identity (1915) and 2) the identity of P. Freud (1939), permits a viable alternative to general relativity and a resolution of the 'field stress-energy' problem of the gravitational theory. (orig.)

  7. An MPCC Formulation and Its Smooth Solution Algorithm for Continuous Network Design Problem

    Directory of Open Access Journals (Sweden)

    Guangmin Wang

    2017-12-01

    Full Text Available The continuous network design problem (CNDP) searches for a transportation network configuration that minimizes the sum of the total system travel time and the investment cost of link capacity expansions, considering that travellers follow a traditional Wardrop user equilibrium (UE) in choosing their routes. In this paper, the CNDP is formulated as a mathematical program with complementarity constraints (MPCC) by describing UE as a non-linear complementarity problem (NCP). To address the difficulty resulting from the complementarity constraints in the MPCC, they are substituted by the Fischer-Burmeister (FB) function, which can be smoothed by the introduction of a smoothing parameter. The MPCC can therefore be transformed into a well-behaved non-linear program (NLP) by replacing the complementarity constraints with a smooth equation. Consequently, a solver such as LINDOGLOBAL in GAMS can be used to solve the smooth approximate NLP to obtain the solution to the MPCC for modelling the CNDP. Numerical experiments on an example from the literature demonstrate that the proposed algorithm is feasible.
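
    As a side note to the smoothing step described in the record above (this snippet is not from the paper; the function and parameter names are illustrative), the smoothed Fischer-Burmeister function can be written as:

```python
import numpy as np

def smoothed_fb(a, b, mu):
    """Smoothed Fischer-Burmeister function.

    For mu = 0 this reduces to phi(a, b) = a + b - sqrt(a^2 + b^2),
    whose zeros are exactly the points with a >= 0, b >= 0 and a*b = 0,
    i.e. the complementarity condition. A small mu > 0 makes it smooth.
    """
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

# Example: as mu -> 0 the residual of a complementary pair goes to 0.
for mu in (1.0, 0.1, 0.01, 0.0):
    print(mu, smoothed_fb(0.0, 3.0, mu))
```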

  8. Integer programming formulation and variable neighborhood search metaheuristic for the multiproduct pipeline scheduling problem

    Energy Technology Data Exchange (ETDEWEB)

    Souza Filho, Erito M.; Bahiense, Laura; Ferreira Filho, Virgilio J.M. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Lima, Leonardo [Centro Federal de Educacao Tecnologica Celso Sukow da Fonseca (CEFET-RJ), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    Pipelines are known as the most reliable and economical mode of transportation for petroleum and its derivatives, especially when large amounts of products have to be pumped over large distances. In this work we address the short-term scheduling of a pipeline system comprising the distribution of several petroleum derivatives from a single oil refinery to several depots, connected to local consumer markets, through a single multi-product pipeline. We propose an integer linear programming formulation and a variable neighborhood search metaheuristic in order to compare the performance of the exact and heuristic approaches to the problem. Computational tests in C language and MOSEL/XPRESS-MP language are performed on a real Brazilian pipeline system. (author)

  9. Mobile Robots Path Planning Using the Overall Conflict Resolution and Time Baseline Coordination

    Directory of Open Access Journals (Sweden)

    Yong Ma

    2014-01-01

    Full Text Available This paper aims at resolving the path planning problem in a time-varying environment based on the idea of overall conflict resolution and the algorithm of time baseline coordination. The basic task of the introduced path planning algorithms is to fulfill the automatic generation of the shortest paths from the defined start poses to their end poses, with consideration of numerous constraints, for multiple mobile robots. Building on this, by using the overall conflict resolution within the polynomial-based paths, we take into account all the constraints together, including smoothness, motion boundary, kinematics constraints, obstacle avoidance, and safety constraints among robots. A time baseline coordination algorithm is proposed to process the above formulated problem. The foremost strong point is that much time can be saved with our approach. Numerical simulations verify the effectiveness of our approach.

  11. Main formulations of the finite element method for the problems of structural mechanics. Part 2

    Directory of Open Access Journals (Sweden)

    Ignat’ev Aleksandr Vladimirovich

    Full Text Available The author offers a classification of Finite Element Method formulations, which helps one to navigate the great number of published (and still appearing) works on the problem of raising the efficiency of this widespread numerical method. The second part of the article examines the direct formulations of FEM in the form of the displacement approach, the area method and the classical mixed-mode method. The question of solution convergence of FEM in the form of the classical mixed-mode method is considered using the example of a single-input single-output beam system under finite element grid refinement. The author draws the conclusion that degeneration of the FEM system of algebraic equations in the limit is not a peculiar feature of this method in general, but manifests itself only in some particular cases. At the same time, the results obtained prove that FEM in mixed-mode form provides more stable results under finite element grid refinement than FEM in the form of the displacement approach. It is quite obvious that the same qualities will also appear in two-dimensional systems.

  12. Need for appropriate formulations for children: the national institute of child health and human development-pediatric formulations initiative, part 2.

    Science.gov (United States)

    Giacoia, George P; Taylor-Zapata, Perdita; Mattison, Donald

    2007-01-01

    The development and compounding of pharmacotherapeutic formulations that are suitable for infants and young children can be a challenging problem. This problem results from the lack of knowledge on the acceptability of different dosage forms and formulations to children in relation to age and developmental status, as well as the lack of reliable documentation of formulations used in pediatric clinical trials. As part of its mandate under the Best Pharmaceuticals for Children Act to improve pediatric therapeutics, the National Institute of Child Health and Human Development has sponsored the Pediatric Formulations Initiative. The goal of this ongoing initiative is to address the issues and concerns associated with pediatric therapeutics by convening groups of researchers and experts in pediatric formulations from academia, pharmaceutical companies, the National Institutes of Health, and the U.S. Food and Drug Administration. In this second part of a two-part article, the activities of the various groups that constitute the Pediatric Formulations Initiative are discussed; in addition, the Initiative's future activities and plans are outlined.

  13. Robust branch-cut-and-price for the Capacitated Minimum Spanning Tree problem over a large extended formulation

    DEFF Research Database (Denmark)

    Uchoa, Eduardo; Fukasawa, Ricardo; Lysgaard, Jens

    2008-01-01

    This paper presents a robust branch-cut-and-price algorithm for the Capacitated Minimum Spanning Tree Problem (CMST). The variables are associated with q-arbs, a structure that arises from a relaxation of the capacitated prize-collecting arborescence problem in order to make it solvable in pseudo-polynomial time. Traditional inequalities over the arc formulation, like Capacity Cuts, are also used. Moreover, a novel feature is introduced in this kind of algorithm: powerful new cuts expressed over a very large set of variables are added, without increasing the complexity of the pricing subproblem or the size of the LPs that are actually solved. Computational results on benchmark instances from the OR-Library show very significant improvements over previous algorithms. Several open instances could be solved to optimality.

  14. Integration of a Portfolio-based Approach to Evaluate Aerospace R and D Problem Formulation Into a Parametric Synthesis Tool

    Science.gov (United States)

    Oza, Amit R.

    The focus of this study is to improve R&D effectiveness for aerospace and defense planning in the early stages of the product development lifecycle. Emphasis is on: correct formulation of a decision problem, with special attention to the data relationships between the individual design problem and the system capability required to size the aircraft; understanding of the meaning of the acquisition strategy objective and subjective data requirements that are required to arrive at a balanced analysis and/or "correct" mix of technology projects; understanding of the meaning of the outputs that can be created from the technology analysis; and methods the researcher can use to effectively support decisions at the acquisition and conceptual design levels through utilization of a research and development portfolio strategy. The primary objectives of this study are to: (1) determine what strategy should be used to initialize conceptual design parametric sizing processes during requirements analysis for the materiel solution analysis stage of the product development lifecycle, when utilizing data already constructed in the latter phase and working with a generic database management system synthesis tool integration architecture for aircraft design, and (2) assess how these new data relationships can contribute to innovative decision-making when solving acquisition hardware/technology portfolio problems. As such, an automated composable problem formulation system is developed to consider data interactions for the system architecture that manages acquisition pre-design concept refinement portfolio management and conceptual design parametric sizing requirements. The research includes a way to: • Formalize the data storage and implement the data relationship structure with a system architecture automated through a database management system. • Allow for composable modeling, in terms of level of hardware abstraction, for the product model, mission model, and

  15. The formulation of dynamical contact problems with friction in the case of systems of rigid bodies and general discrete mechanical systems—Painlevé and Kane paradoxes revisited

    Science.gov (United States)

    Charles, Alexandre; Ballard, Patrick

    2016-08-01

    The dynamics of mechanical systems with a finite number of degrees of freedom (discrete mechanical systems) is governed by the Lagrange equation, which is a second-order differential equation on a Riemannian manifold (the configuration manifold). The handling of perfect (frictionless) unilateral constraints in this framework (that of Lagrange's analytical dynamics) was undertaken by Schatzman and Moreau at the beginning of the 1980s. A mathematically sound and consistent evolution problem was obtained, paving the road for many subsequent theoretical investigations. In this general evolution problem, the only reaction force involved is a generalized reaction force, consistently with the virtual power philosophy of Lagrange. Surprisingly, such a general formulation was never derived in the case of frictional unilateral multibody dynamics. Instead, the paradigm of the Coulomb law applying to reaction forces in the real world is generally invoked. So far, this paradigm has enabled a consistent evolution problem to be obtained in only a very few specific examples, and has suggested numerical algorithms to produce computational examples (numerical modeling). In particular, it is not clear what evolution problem underlies the computational examples. Moreover, some of the few specific cases in which this paradigm enables a precise evolution problem to be written down are known to show paradoxes: the Painlevé paradox (indeterminacy) and the Kane paradox (increase in kinetic energy due to friction). In this paper, we follow Lagrange's philosophy and formulate frictional unilateral multibody dynamics in terms of the generalized reaction force and not in terms of the real-world reaction force. A general evolution problem that governs the dynamics is obtained for the first time. We prove that all the solutions are dissipative; that is, this new formulation is free of the Kane paradox. We also prove that some indeterminacy of the Painlevé paradox is fixed in this

  16. A general real-time formulation for multi-rate mass transfer problems

    Directory of Open Access Journals (Sweden)

    O. Silva

    2009-08-01

    Full Text Available Many flow and transport phenomena, ranging from delayed storage in pumping tests to tailing in river or aquifer tracer breakthrough curves or slow kinetics in reactive transport, display non-equilibrium (NE) behavior. These phenomena are usually modeled by non-local-in-time formulations, such as multi-porosity, multiple-process non-equilibrium, continuous time random walk, memory functions, integro-differential equations, fractional derivatives or multi-rate mass transfer (MRMT), among others. We present an MRMT formulation that can be used to represent all these models of non-equilibrium. The formulation can be extended to non-linear phenomena. Here, we develop an algorithm for linear mass transfer which is accurate, computationally inexpensive and easy to implement in existing groundwater or river flow and transport codes. We illustrate this approach by application to published data involving NE groundwater flow and solute transport in rivers and aquifers.
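
    As a generic sketch of the linear first-order exchange that underlies MRMT formulations (not the algorithm from the paper; the rates, capacities and explicit time stepping below are assumptions chosen for illustration):

```python
import numpy as np

def mrmt_step(c_m, c_im, alpha, beta, dt):
    """One explicit time step of linear multi-rate mass transfer.

    c_m   : mobile-zone concentration (scalar)
    c_im  : immobile-zone concentrations, one per rate (array)
    alpha : first-order exchange rates for each immobile zone (array)
    beta  : capacity ratios (immobile/mobile) for each zone (array)
    dt    : time step

    Each immobile zone relaxes towards the mobile concentration,
        d c_im_j / dt = alpha_j * (c_m - c_im_j),
    and the mobile zone loses or gains the corresponding mass,
        d c_m / dt = -sum_j beta_j * alpha_j * (c_m - c_im_j).
    """
    exchange = alpha * (c_m - c_im)
    c_im_new = c_im + dt * exchange
    c_m_new = c_m - dt * np.sum(beta * exchange)
    return c_m_new, c_im_new

# Hypothetical example: a pulse in the mobile zone relaxing towards equilibrium.
alpha = np.array([0.5, 0.05])      # fast and slow exchange rates
beta = np.array([0.2, 0.8])        # capacity of each immobile zone
c_m, c_im = 1.0, np.zeros(2)
for _ in range(2000):
    c_m, c_im = mrmt_step(c_m, c_im, alpha, beta, dt=0.01)
print(c_m, c_im)                   # all concentrations approach a common value
```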

  17. Multiple excitation of supports - Part 1. Formulation

    International Nuclear Information System (INIS)

    Galeao, A.C.N.R.; Barbosa, H.J.C.

    1980-12-01

    The formulation and solution of a simple specific problem of support movement are presented. The formulation is extended to the general case of infinitesimal elasticity, where approximate solutions are obtained by the variational formulation with spatial discretization by the Finite Element Method. Finally, the usual numerical techniques for the treatment of the resulting system of ordinary differential equations are discussed: direct integration, modal superposition, spectral response. (E.G.) [pt

  18. Resolution function in neutron diffractometry

    International Nuclear Information System (INIS)

    Popa, N.

    1987-01-01

    The resolution function in neutron diffractometry is defined on the basis of generalizing the resolution function formerly formulated for the double-axis neutron spectrometer. A polemical discussion is raised concerning an approach to this function existing in the literature. The present approach is applied concretely to the DN-2 time-of-flight diffractometer installed at the IBR-2 reactor.

  19. Eddy current imaging. Simplifying the direct problem. Analysis of a 2D case with formulations

    International Nuclear Information System (INIS)

    Spineanu, A.; Zorgati, R.

    1995-01-01

    Eddy current non-destructive testing is used by EDF to detect faults affecting conductive objects such as steam generator tubes. A new technique, known as eddy current imaging, is being developed to facilitate diagnosis in this context. The first stage in this work, discussed in the present paper, consists in solving the direct problem. This entails determining the measurable quantities on the basis of a thorough knowledge of the material considered. This was done by formulating the direct problem in terms of eddy currents in a general 3D geometry context, applying distribution theory and the Maxwell equations. Since no direct problem code was available, we resorted to simplified situations. Taking care not to interfere with previous developments or those to be attempted in an inversion context, we studied the case of a flaw affecting a 2D structure, illuminated by a plane-wave-type probe. For this configuration, we studied the exact model and compared results with those of a linearized simplified model. This study emphasizes the ill-posed nature of the eddy current inverse problem, related to the severe electromagnetic field attenuation. This means that regularization of the inverse problem, although absolutely necessary, will not be sufficient. Owing to the simplicity of the models available and implemented during the inversion process, processing real data would not yet be possible. We must first focus all our efforts on the direct 3D problem, in conformity with the requirements of the inverse procedure and describing a realistic eddy current NDT situation. At the same time, consideration should be given to the design of a specific probe customized for eddy current imaging. (authors). 9 refs., 5 figs., 3 appends

  20. Obesity and internalized weight stigma: a formulation model for an emerging psychological problem.

    Science.gov (United States)

    Ratcliffe, Denise; Ellison, Nell

    2015-03-01

    Obese individuals frequently experience weight stigma and this is associated with psychological distress and difficulties. The process of external devaluation can lead to negative self-perception and evaluation and some obese individuals develop "internalized weight stigma". The prevalence of weight stigma is well established but there is a lack of information about the interplay between external and internal weight stigma. To synthesize the literature on the psychological effects of weight stigma into a formulation model that addresses the maintenance of internalized weight stigma. Current research on the psychological impact of weight stigma was reviewed. We identify cognitive, behavioural and attentional processes that maintain psychological conditions where self-evaluation plays a central role. A model was developed based on clinical utility. The model focuses on identifying factors that influence and maintain internalized weight stigma. We highlight the impact of negative societal and interpersonal experiences of weight stigma on how individuals view themselves as an obese person. Processing the self as a stigmatized individual is at the core of the model. Maintenance factors include negative self-judgements about the meaning of being an obese individual, attentional and mood shifts, and avoidance and safety behaviours. In addition, eating and weight management behaviours become deregulated and maintain both obesity and weight stigma. As obesity increases, weight stigma and the associated psychological effects are likely to increase. We provide a framework for formulating and intervening with internalized weight stigma as well as making therapists aware of the applicability and transferability of strategies that they may already use with other presenting problems.

  1. OPERATOR-RELATED FORMULATION OF THE EIGENVALUE PROBLEM FOR THE BOUNDARY PROBLEM OF ANALYSIS OF A THREE-DIMENSIONAL STRUCTURE WITH PIECEWISE-CONSTANT PHYSICAL AND GEOMETRICAL PARAMETERS ALONGSIDE THE BASIC DIRECTION WITHIN THE FRAMEWORK OF THE DISCRETE-CON

    Directory of Open Access Journals (Sweden)

    Akimov Pavel Alekseevich

    2012-10-01

    Full Text Available The proposed paper covers the operator-related formulation of the eigenvalue problem for the analysis of a three-dimensional structure that has piecewise-constant physical and geometrical parameters along the so-called basic direction, within the framework of a discrete-continual approach (the discrete-continual finite element method, the discrete-continual variational method). Generally, discrete-continual formulations represent contemporary mathematical models that are becoming available for computer implementation. They make it possible for a researcher to consider boundary effects whenever particular components of the solution are rapidly varying functions. Another feature of discrete-continual methods is the absence of any limitations imposed on the lengths of structures. The three-dimensional problem of elasticity is used as the design model of the structure. In accordance with the so-called method of extended domain, the domain in question is embedded within an extended domain of arbitrary shape. At the stage of numerical implementation, the key features of discrete-continual methods include convenient mathematical formulas, effective computational patterns and algorithms, simple data processing, etc. The authors present their formulation of the problem in question for an isotropic medium, with allowance for supports restrained by elastic elements, while standard boundary conditions are also taken into consideration.

  2. Tackling wicked problems: how theories of agency can provide new insights.

    Science.gov (United States)

    Varpio, Lara; Aschenbrener, Carol; Bates, Joanna

    2017-04-01

    This paper reviews why and how theories of agency can be used as analytical lenses to help health professions education (HPE) scholars address our community's wicked problems. Wicked problems are those that resist clear problem statements, defy traditional analysis approaches, and refuse definitive resolution (e.g. student remediation, assessments of professionalism, etc.). We illustrate how theories of agency can provide new insights into such challenges by examining the application of these theories to one particular wicked problem in HPE: interprofessional education (IPE). After searching the HPE literature and finding that theories of agency had received little attention, we borrowed techniques from narrative literature reviews to search databases indexing a broad scope of disciplines (i.e. ERIC, Web of Science, Scopus, MEDLINE and PubMed) for publications (1994-2014) that: (i) examined agency, or (ii) incorporated an agency-informed analytical perspective. The lead author identified the theories of agency used in these articles, and reviewed the texts on agency cited therein and the original sources of each theory. We identified 10 theories of agency that we considered to be applicable to HPE's wicked problems. To select a subset of theories for presentation in this paper, we discussed each theory in relation to some of HPE's wicked problems. Through debate and reflection, we unanimously agreed on the applicability of a subset of theories for illuminating HPE's wicked problems. This subset is described in this paper. We present four theories of agency: Butler's post-structural formulation; Giddens' sociological formulation; cultural historical activity theory's formulation, and Bandura's social cognitive psychology formulation. We introduce each theory and apply each to the challenges of engaging in IPE. Theories of agency can inform HPE scholarship in novel and generative ways. Each theory offers new insights into the roots of wicked problems and means for

  3. Element free Galerkin formulation of composite beam with longitudinal slip

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, Dzulkarnain; Mokhtaram, Mokhtazul Haizad [Department of Civil Engineering, Universiti Selangor, Bestari Jaya, Selangor (Malaysia); Badli, Mohd Iqbal; Yassin, Airil Y. Mohd [Faculty of Civil Engineering, Universiti Teknologi Malaysia, Skudai, Johor (Malaysia)

    2015-05-15

    The behaviour between the two materials in a composite beam is assumed to be partial interaction when longitudinal slip at the interfacial surfaces is considered. Commonly analysed by mesh-based formulations, this study instead uses a meshless formulation, known as the Element Free Galerkin (EFG) method, for the numerical partial-interaction analysis of the beam. As a meshless formulation implies that the problem domain is discretised only by nodes, the EFG method is based on the Moving Least Squares (MLS) approach for the formulation of shape functions, with its weak form developed using the variational method. The essential boundary conditions are enforced by Lagrange multipliers. The proposed EFG formulation gives comparable results, after being verified against the analytical solution, thus signifying its applicability to partial interaction problems. Based on the numerical test results, the Cubic Spline and Quartic Spline weight functions yield better accuracy for the EFG formulation compared to the other proposed weight functions.
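
    For reference only (not taken from the paper above), one common form of the cubic spline weight function used in EFG/MLS approximations is sketched below; the node position and support size are hypothetical parameters.

```python
def cubic_spline_weight(x, x_i, dm):
    """Cubic spline weight commonly used in EFG / Moving Least Squares.

    x    : evaluation point
    x_i  : node position
    dm   : size of the node's support domain

    With r = |x - x_i| / dm,
        w = 2/3 - 4 r^2 + 4 r^3            for r <= 1/2
        w = 4/3 - 4 r + 4 r^2 - (4/3) r^3  for 1/2 < r <= 1
        w = 0                              otherwise
    """
    r = abs(x - x_i) / dm
    if r <= 0.5:
        return 2.0 / 3.0 - 4.0 * r**2 + 4.0 * r**3
    elif r <= 1.0:
        return 4.0 / 3.0 - 4.0 * r + 4.0 * r**2 - (4.0 / 3.0) * r**3
    return 0.0

# The weight decays smoothly from the node and vanishes at the support edge.
print([round(cubic_spline_weight(x, 0.0, 1.0), 4) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])
```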

  4. Hamiltonian formulation of anomaly free chiral bosons

    International Nuclear Information System (INIS)

    Abdalla, E.; Abdalla, M.C.B.; Devecchi, F.P.; Zadra, A.

    1988-01-01

    Starting from an anomaly-free Lagrangian formulation for chiral scalars, which includes a Wess-Zumino term (to cancel the anomaly), we formulate the corresponding Hamiltonian problem. We then use the (quantum) Siegel invariance to choose a particular gauge, which turns out to coincide with the formulation obtained by Floreanini and Jackiw. (author) [pt

  5. Summary compilation of shell element performance versus formulation.

    Energy Technology Data Exchange (ETDEWEB)

    Heinstein, Martin Wilhelm; Hales, Jason Dean (Idaho National Laboratory, Idaho Falls, ID); Breivik, Nicole L.; Key, Samuel W. (FMA Development, LLC, Great Falls, MT)

    2011-07-01

    This document compares the finite element shell formulations in the Sierra Solid Mechanics code. These are finite elements either currently in the Sierra simulation codes Presto and Adagio, or expected to be added to them in time. The list of elements is divided into traditional two-dimensional, plane stress shell finite elements, and three-dimensional solid finite elements that contain either modifications or additional terms designed to represent the bending stiffness expected to be found in shell formulations. These particular finite elements are formulated for finite deformation and inelastic material response, and, as such, are not based on some of the elegant formulations that can be found in an elastic, infinitesimal finite element setting. Each shell element is subjected to a series of 12 verification and validation test problems. The underlying purpose of the tests here is to identify the quality of both the spatially discrete finite element gradient operator and the spatially discrete finite element divergence operator. If the derivation of the finite element is proper, the discrete divergence operator is the transpose of the discrete gradient operator. An overall summary is provided from which one can rank, at least in an average sense, how well the individual formulations can be expected to perform in applications encountered year in and year out. A letter grade has been assigned, albeit sometimes subjectively, for each shell element and each test problem result. The number of A's, B's, C's, et cetera assigned has been totaled, and a grade point average (GPA) has been computed, based on a 4.0 system. These grades, combined with a comparison between the test problems and the application problem, can be used to guide an analyst to select the element with the best shell formulation.

  6. Simultaneously Exploiting Two Formulations: an Exact Benders Decomposition Approach

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Gamst, Mette; Spoorendonk, Simon

    When modelling a given problem using linear programming techniques several possibilities often exist, and each results in a different mathematical formulation of the problem. Usually, advantages and disadvantages can be identified in any single formulation. In this paper we consider mixed integer programming formulations ... Compared to the standard branch-and-price approach from the literature, the method shows promising performance and appears to be an attractive alternative.

  7. Inverse problems of geophysics

    International Nuclear Information System (INIS)

    Yanovskaya, T.B.

    2003-07-01

    This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least-squares fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given.
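
    As an illustration of the pseudo-inverse/SVD machinery mentioned in the record (a generic sketch, not from the report; the test matrix and truncation level are hypothetical), a truncated-SVD solution of a linearized inverse problem G m = d can be written as:

```python
import numpy as np

def truncated_svd_solution(G, d, k):
    """Least-squares solution of G m = d via a rank-k truncated SVD.

    Keeping only the k largest singular values is a simple way to
    stabilise (regularise) a linearized inverse problem whose small
    singular values amplify data noise.
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ d))

# Hypothetical ill-conditioned example.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10)) @ np.diag(np.logspace(0, -6, 10))
m_true = rng.normal(size=10)
d = G @ m_true + 1e-6 * rng.normal(size=20)
print(np.linalg.norm(truncated_svd_solution(G, d, k=5) - m_true))
```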

  8. Development of formulation device for periodontal disease.

    Science.gov (United States)

    Sato, Yasuhiko; Oba, Takuma; Watanabe, Norio; Danjo, Kazumi

    2012-01-01

    In addition to providing standard surgical treatment that removes plaque and infected tissues, medications that can regenerate periodontal tissue are also required in the treatment of periodontal disease. As a form of regenerative medication, various growth factors are expected to be used in treating periodontal disease. A protein-like growth factor is often developed as a lyophilized product with a dissolution liquid, considering its instability in the solution state. We have clarified that the formulation for periodontal disease needs to be viscous. When the lyophilized product was dissolved using a sticky solution, various problems were encountered, such as difficulty in dissolving and air bubbles, and some effort was needed to prepare the formulation. In this research, to identify the problems of preparing a viscous formulation, a lyophilized product (placebo) and a sticky liquid were prepared using a vial and an ampoule as conventional containers. Based on these problems, a prototype administration device was developed and its functionality was confirmed. As a result, it was suggested that a device with a useful mixing system that could shorten the preparation time had been developed.

  9. An Evolutionary Formulation of the Crossing Number Problem

    Directory of Open Access Journals (Sweden)

    Che Sheng Gan

    2009-01-01

    Full Text Available A graph drawing algorithm is presented which produces drawings of complete graphs with a minimum number of crossings equal to that of Guy's conjecture. It is then generalized and formulated as an evolutionary algorithm (EA) to perform a constrained search for crossing numbers. The main objective of this work is to present a suitable two-dimensional scheme which can greatly reduce the complexity of finding crossing numbers by computer. Program performance criteria are presented and discussed. It is shown that the EA implementation provides good confirmation of the predicted crossing numbers.
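
    A toy sketch in the spirit of the record above (not the authors' algorithm): a (1+1) evolutionary search over straight-line drawings of K_n. Straight-line drawings bound the rectilinear crossing number, which matches Guy's conjectured values only for small n; all parameter values below are hypothetical.

```python
import itertools
import random

def segments_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly cross."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def crossings(points):
    """Number of edge crossings in a straight-line drawing of K_n."""
    edges = list(itertools.combinations(range(len(points)), 2))
    return sum(segments_cross(points[a], points[b], points[c], points[d])
               for (a, b), (c, d) in itertools.combinations(edges, 2)
               if len({a, b, c, d}) == 4)   # skip edges sharing a vertex

def evolve(n, generations=2000, sigma=0.1, seed=1):
    """(1+1) evolutionary search: mutate one vertex position at a time."""
    rng = random.Random(seed)
    best = [(rng.random(), rng.random()) for _ in range(n)]
    best_cost = crossings(best)
    for _ in range(generations):
        cand = list(best)
        i = rng.randrange(n)
        cand[i] = (cand[i][0] + rng.gauss(0, sigma),
                   cand[i][1] + rng.gauss(0, sigma))
        cost = crossings(cand)
        if cost <= best_cost:
            best, best_cost = cand, cost
    return best_cost

def guy(n):
    """Guy's conjectured crossing number of K_n."""
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

n = 6
print("found:", evolve(n), "Guy's conjecture:", guy(n))
```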

  10. An extension of implicit Monte Carlo diffusion: Multigroup and the difference formulation

    International Nuclear Information System (INIS)

    Cleveland, Mathew A.; Gentile, Nick A.; Palmer, Todd S.

    2010-01-01

    Implicit Monte Carlo (IMC) and Implicit Monte Carlo Diffusion (IMD) are approaches to the numerical solution of the equations of radiative transfer. IMD was previously derived and numerically tested on grey, or frequency-integrated problems. In this research, we extend Implicit Monte Carlo Diffusion (IMD) to account for frequency dependence, and we implement the difference formulation as a source manipulation variance reduction technique. We derive the relevant probability distributions and present the frequency dependent IMD algorithm, with and without the difference formulation. The IMD code with and without the difference formulation was tested using both grey and frequency dependent benchmark problems. The Su and Olson semi-analytic Marshak wave benchmark was used to demonstrate the validity of the code for grey problems. The Su and Olson semi-analytic picket fence benchmark was used for the frequency dependent problems. The frequency dependent IMD algorithm reproduces the results of both Su and Olson benchmark problems. Frequency group refinement studies indicate that the computational cost of refining the group structure is likely less than that of group refinement in deterministic solutions of the radiation diffusion methods. Our results show that applying the difference formulation to the IMD algorithm can result in an overall increase in the figure of merit for frequency dependent problems. However, the creation of negatively weighted particles from the difference formulation can cause significant numerical instabilities in regions of the problem with sharp spatial gradients in the solution. An adaptive implementation of the difference formulation may be necessary to focus its use in regions that are at or near thermal equilibrium.

  11. A new continuous-time formulation for scheduling crude oil operations

    International Nuclear Information System (INIS)

    Reddy, P. Chandra Prakash; Karimi, I.A.; Srinivasan, R.

    2004-01-01

    In today's competitive business climate characterized by uncertain oil markets, responding effectively and speedily to market forces, while maintaining reliable operations, is crucial to a refinery's bottom line. Optimal crude oil scheduling enables cost reduction by using cheaper crudes intelligently, minimizing crude changeovers, and avoiding ship demurrage. So far, only discrete-time formulations have stood up to the challenge of this important, nonlinear problem. A continuous-time formulation would portend numerous advantages; however, existing work in this area has just begun to scratch the surface. In this paper, we present the first complete continuous-time mixed integer linear programming (MILP) formulation for the short-term scheduling of operations in a refinery that receives crude from very large crude carriers via a high-volume single buoy mooring pipeline. This novel formulation accounts for real-world operational practices. We use an iterative algorithm to eliminate the crude composition discrepancy that has proven to be the Achilles heel for existing formulations. While it does not guarantee global optimality, the algorithm needs only MILP solutions and obtains excellent maximum-profit schedules for industrial problems with up to 7 days of scheduling horizon. We also report the first comparison of discrete- vs. continuous-time formulations for this complex problem. (Author)

  12. Main formulations of the finite element method for the problems of structural mechanics. Part 3

    Directory of Open Access Journals (Sweden)

    Ignat’ev Aleksandr Vladimirovich

    2015-01-01

    Full Text Available In this paper the author offers a classification of the formulations of the Finite Element Method. This classification helps one to orient in the huge number of published articles, as well as those still to be published, which are dedicated to the problem of enhancing the efficiency of this most commonly used method. The third part of the article considers the variational formulations of FEM and the energy principles lying at their basis. Compared to the direct method, which applies only to finite elements of a simple geometrical type, the variational formulations of FEM are applicable to elements of any type. All the variational methods can be conventionally divided into two groups. The methods of the first group are based on the principle of stationarity of an energy functional: the potential energy of the system, the complementary energy, or a functional based on both of these energies, i.e. the total energy. The methods of the second group are based on variants of the mathematical methods of weighted residuals for solving differential equations, which in some cases can be handled according to the principle of possible displacements or extreme energy principles. The most widely used and versatile is the approach based on energy principles following from the energy conservation law: the principle of possible changes in the stress state and the principle of possible changes in the stress-strain state.

  13. Duality in constrained location problems

    DEFF Research Database (Denmark)

    Juel, Henrik; Love, Robert F.

    1987-01-01

    The dual of a facility location problem with general norms, distance constraints, and linear constraints is formulated.

  14. Iterative Reconstruction Methods for Inverse Problems in Tomography with Hybrid Data

    DEFF Research Database (Denmark)

    Sherina, Ekaterina

    The goal of these modalities is to quantify physical parameters of materials or tissues inside an object from given interior data, which is measured everywhere inside the object. The advantage of these modalities is that large variations in physical parameters can be resolved, and therefore they have ... data is precisely the reason why reconstructions with a high contrast and a high resolution can be expected. The main contributions of this thesis consist in formulating the underlying mathematical problems with interior data as nonlinear operator equations, theoretically analysing them within ... iteration and the Levenberg-Marquardt method are employed for solving the problems. The first problem considered in this thesis is a problem of conductivity estimation from interior measurements of the power density, known as Acousto-Electrical Tomography. A special case of limited angle tomography ...
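
    As a generic sketch of the Levenberg-Marquardt iteration mentioned in the record (not code from the thesis; the forward model, Jacobian and damping schedule are hypothetical), the update (J^T J + lambda I) dx = J^T (y - F(x)) can be implemented as:

```python
import numpy as np

def levenberg_marquardt(F, jac, y, x0, lam=1e-2, iters=50):
    """Generic Levenberg-Marquardt iteration for the nonlinear problem F(x) ~= y.

    Each step solves (J^T J + lam * I) dx = J^T (y - F(x)), decreasing lam
    when a step reduces the residual and increasing it otherwise.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y - F(x)
        J = jac(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        if np.linalg.norm(y - F(x + dx)) < np.linalg.norm(r):
            x, lam = x + dx, lam * 0.5     # accept step, relax damping
        else:
            lam *= 2.0                     # reject step, damp harder
    return x

# Hypothetical nonlinear model: y_i = exp(a * t_i) + b, fit (a, b) from data.
t = np.linspace(0.0, 1.0, 20)
F = lambda x: np.exp(x[0] * t) + x[1]
jac = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
y = F(np.array([1.3, -0.7]))
print(levenberg_marquardt(F, jac, y, x0=[0.5, 0.0]))
```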

  15. Super-resolution

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2014-01-01

    Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real world problems in different fields, from satellite...

  16. Empathy deficit in antisocial personality disorder: a psychodynamic formulation.

    Science.gov (United States)

    Malancharuvil, Joseph M

    2012-09-01

    Empathic difficulty is a highly consequential characteristic of antisocial personality structure. The origin, maintenance, and possible resolution of this profound deficit are not very clear. While reconstructing empathic ability is of primary importance in the treatment of antisocial personality, not many proven procedures are in evidence. In this article, the author offers a psychodynamic formulation of the origin, character, and maintenance of the empathic deficiency in antisocial personality. The author discusses some of the treatment implications from this dynamic formulation.

  17. The Apache Longbow-Hellfire Missile Test at Yuma Proving Ground: Introduction and Problem Formulation for a Multiple Stressor Risk Assessment

    International Nuclear Information System (INIS)

    Efroymson, Rebecca Ann; Peterson, Mark J.; Jones, Daniel Steven; Suter, Glenn

    2008-01-01

    An ecological risk assessment was conducted at Yuma Proving Ground, Arizona, as a demonstration of the Military Ecological Risk Assessment Framework (MERAF). The focus of the assessment was a testing program at Cibola Range, which involved an Apache Longbow helicopter firing Hellfire missiles at moving targets, i.e., M60-A1 tanks. The problem formulation for the assessment included conceptual models for three component activities of the test (helicopter overflight, missile firing, and tracked vehicle movement) and two ecological endpoint entities, woody desert wash communities and desert mule deer (Odocoileus hemionus crooki) populations. An activity-specific risk assessment framework was available to provide guidance for assessing risks associated with aircraft overflights. Key environmental features of the study area include barren desert pavement and tree-lined desert washes. The primary stressors associated with helicopter overflights were sound and the view of the aircraft. The primary stressor associated with Hellfire missile firing was sound. The principal stressor associated with tracked vehicle movement was soil disturbance, and a resulting, secondary stressor was hydrological change. Water loss to washes and wash vegetation was expected to result from increased ponding, infiltration and/or evaporation associated with disturbances to desert pavement. A plan for estimating integrated risks from the three military activities was included in the problem formulation.

  18. Formulation of marketing information and communication strategies in Taiwan tourism industry

    OpenAIRE

    Lee, Tzong-Ru; Kuo, Yu-Hsuan; Hilletofth, Per

    2013-01-01

    Purpose: The purpose of this research is to formulate marketing information and communication (ICT) strategies for Taiwan tourism industry. Design/methodology/approach: This research uses a literature review to identify problems and solutions of Taiwan’s tourism industry. One of the identified problems is used as an example to formulate marketing ICT strategies. Findings: This research has identified twenty-five main problems and forty-eight solutions of Taiwan’s tourism industry and formulat...

  19. Formulation and solution of the classical seashell problem. Pt. 1. Seashell geometry

    Energy Technology Data Exchange (ETDEWEB)

    Illert, C.

    1987-07-01

    Despite an extensive scholarly literature dating back to classical times, seashell geometries have hitherto resisted rigorous theoretical analysis, leaving applied scientists to adopt a directionless empirical approach toward classification. The voluminousness of recent palaeontological literature demonstrates the importance of this problem to applied scientists, but in no way reflects corresponding conceptual or theoretical advances beyond the XIX century thinking which was so ably summarized by Sir D'Arcy Wentworth Thompson in 1917. However, this foundation paper for the newly emerging science of theoretical conchology unifies theoretical considerations for the first time, permitting a rigorous formulation and a complete solution of the problem of biological shell geometries. Shell coiling about the axis of symmetry can be deduced from first principles using energy considerations associated with incremental growth. The present paper shows that those shell apertures which are incurved ('cowrielike'), outflared ('stromblike') or even backturned ('opisthostomoidal') are merely special cases of a much broader spectrum of 'allowable' energy-efficient growth trajectories (tensile elastic clockspring spirals), many of which were widely used by Cretaceous ammonites. Energy considerations also dictate shell growth along the axis of symmetry, thus seashell spires can be understood in terms of certain special figures of revolution (Moebius elastic conoids), the better-known coeloconoidal and cyrtoconoidal shell spires being only two special cases arising from a whole class of topologically possible, energy efficient and biologically observed geometries. The 'wires' and 'conoids' of the present paper are instructive conceptual simplifications sufficient for present purposes. A second paper will later deal with generalized tubular surfaces in three dimensions.

  20. Generalized variational formulations for extended exponentially fractional integral

    Directory of Open Access Journals (Sweden)

    Zuo-Jun Wang

    2016-01-01

    Full Text Available Recently, fractional variational principles and their applications have attracted special attention. For a fractional variational problem based on different types of fractional integral and derivative operators, the corresponding fractional Lagrangian and Hamiltonian formulations and the relevant Euler–Lagrange type equations have already been presented by scholars. The formulations of fractional variational principles can still be developed further. We make an attempt to generalize the formulations of fractional variational principles. As a result we obtain generalized and complementary fractional variational formulations, for the extended exponentially fractional integral as an example, together with the corresponding Euler–Lagrange equations. Two illustrative examples are presented. It is observed that the formulations are in exact agreement with the Euler–Lagrange equations.
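
    For orientation only, the following is the classical Riemann–Liouville fractional Euler–Lagrange equation (the standard result usually attributed to Agrawal), not the extended exponential-kernel formulation of the record above; the operators denote the left and right Riemann–Liouville fractional derivatives.

```latex
% Functional depending on a left Riemann-Liouville fractional derivative:
%   J[q] = \int_a^b L\bigl(t,\, q(t),\, {}_a D_t^{\alpha} q(t)\bigr)\, dt ,
%   with 0 < \alpha < 1 .
% Stationarity of J[q] gives the fractional Euler-Lagrange equation:
\frac{\partial L}{\partial q}
  + {}_t D_b^{\alpha}\!\left( \frac{\partial L}{\partial\, {}_a D_t^{\alpha} q} \right) = 0 .
```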

  1. Complex Sequencing Problems and Local Search Heuristics

    NARCIS (Netherlands)

    Brucker, P.; Hurink, Johann L.; Osman, I.H.; Kelly, J.P.

    1996-01-01

    Many problems can be formulated as complex sequencing problems. We will present problems in flexible manufacturing that have such a formulation and apply local search methods like iterative improvement, simulated annealing and tabu search to solve these problems. Computational results are reported.

  2. Hydrogen atom in the phase-space formulation of quantum mechanics

    International Nuclear Information System (INIS)

    Gracia-Bondia, J.M.

    1984-01-01

    Using a coordinate transformation which regularizes the classical Kepler problem, we show that the hydrogen-atom case may be analytically solved via the phase-space formulation of nonrelativistic quantum mechanics. The problem is essentially reduced to that of a four-dimensional oscillator whose treatment in the phase-space formulation is developed. Furthermore, the method allows us to calculate the Green's function for the H atom in a surprisingly simple way

  3. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V

    2014-09-27

    Background Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.

  4. Mixed finite-element formulations in piezoelectricity and flexoelectricity.

    Science.gov (United States)

    Mao, Sheng; Purohit, Prashant K; Aravas, Nikolaos

    2016-06-01

    Flexoelectricity, the linear coupling of strain gradient and electric polarization, is inherently a size-dependent phenomenon. The energy storage function for a flexoelectric material depends not only on polarization and strain, but also strain-gradient. Thus, conventional finite-element methods formulated solely on displacement are inadequate to treat flexoelectric solids since gradients raise the order of the governing differential equations. Here, we introduce a computational framework based on a mixed formulation developed previously by one of the present authors and a colleague. This formulation uses displacement and displacement-gradient as separate variables which are constrained in a 'weighted integral sense' to enforce their known relation. We derive a variational formulation for boundary-value problems for piezo- and/or flexoelectric solids. We validate this computational framework against available exact solutions. Our new computational method is applied to more complex problems, including a plate with an elliptical hole, stationary cracks, as well as tension and shear of solids with a repeating unit cell. Our results address several issues of theoretical interest, generate predictions of experimental merit and reveal interesting flexoelectric phenomena with potential for application.

  5. Dynamic psychiatry and the psychodynamic formulation

    African Journals Online (AJOL)

    processes and psychiatric disorders are biological, the range ... The formulation furthermore helps with the initial orientation towards the patient: it anticipates and predicts how the patient ... contributed to problems with his sexual identity.

  6. A Generalized Formulation of Demand Response under Market Environments

    Science.gov (United States)

    Nguyen, Minh Y.; Nguyen, Duc M.

    2015-06-01

    This paper presents a generalized formulation of Demand Response (DR) under deregulated electricity markets. The problem is to schedule and control the consumption of electrical loads according to the market price so as to minimize the energy cost over a day. Taking into account the modeling of customers' comfort (i.e., preference), the formulation can be applied to various types of loads, including what were traditionally classified as critical loads (e.g., air conditioning, lights). The proposed DR scheme is based on a Dynamic Programming (DP) framework and solved by a backward DP algorithm in which stochastic optimization is used to treat any uncertainty occurring in the problem. The proposed formulation is examined with the DR problem of different loads, including Heating, Ventilation and Air Conditioning (HVAC), Electric Vehicles (EVs) and a new DR application for the water supply systems of commercial buildings. The simulation results show that significant savings can be achieved in comparison with the traditional (On/Off) scheme.
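
    As a minimal sketch of a backward dynamic program in the spirit of the record above (not the authors' scheme; the hourly prices, energy requirement and discrete power levels are hypothetical):

```python
import numpy as np

def backward_dp_schedule(prices, energy_needed, power_levels):
    """Backward DP: choose a power level each hour so that the total energy
    delivered over the horizon equals energy_needed at minimum cost.

    State  : energy still to be delivered (integer units)
    Action : power drawn in the current hour (one of power_levels)
    Cost   : prices[t] * power
    """
    T = len(prices)
    INF = float("inf")
    cost = np.full((T + 1, energy_needed + 1), INF)
    cost[T, 0] = 0.0                       # terminal condition: demand met
    best = np.zeros((T, energy_needed + 1), dtype=int)
    for t in range(T - 1, -1, -1):         # backward in time
        for e in range(energy_needed + 1):
            for p in power_levels:
                if p <= e and cost[t + 1, e - p] + prices[t] * p < cost[t, e]:
                    cost[t, e] = cost[t + 1, e - p] + prices[t] * p
                    best[t, e] = p
    # Recover the schedule by a forward pass through the stored decisions.
    schedule, e = [], energy_needed
    for t in range(T):
        p = best[t, e]
        schedule.append(p)
        e -= p
    return schedule, cost[0, energy_needed]

prices = [0.30, 0.25, 0.10, 0.12, 0.40, 0.15]   # hypothetical hourly prices
schedule, total = backward_dp_schedule(prices, energy_needed=6, power_levels=[0, 1, 2])
print(schedule, total)   # energy is shifted into the cheapest hours
```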

  7. The effectiveness of crisis resolution/home treatment teams for older people with mental health problems: a systematic review and scoping exercise.

    Science.gov (United States)

    Toot, Sandeep; Devine, Mike; Orrell, Martin

    2011-12-01

    To assess the effectiveness of crisis resolution/home treatment services for older people with mental health problems. A systematic review was conducted to report on the effectiveness of crisis resolution/home treatment teams (CRHTTs) for older people with mental health problems. As part of the review, we also carried out a scoping exercise to assess the typologies of older people's CRHTTs in practice, and to review these in the context of policy and research findings. The literature contains Grade C evidence, according to the Oxford Centre of Evidence Based Medicine (CEBM) guidelines, that CRHTTs are effective in reducing numbers of admissions to hospitals. Outcomes such as length of hospital stay and maintenance of community residence were reviewed but evidence was inadequate for drawing conclusions. The scoping exercise defined three types of home treatment service model: generic home treatment teams; specialist older adults home treatment teams; and intermediate care services. These home treatment teams seemed to be effectively managing crises and reducing admissions. This review has shown a lack of evidence for the efficacy of crisis resolution/home treatment teams in supporting older people with mental health problems to remain at home. There is clearly a need for a randomised controlled trial to establish the efficacy of crisis resolution/home treatment services for older people with mental health problems, as well as a more focussed assessment of the different home treatment service models which have developed in the UK. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
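
    As a concrete illustration of the Newton-Krylov approach surveyed above, the following sketch solves a small nonlinear system with SciPy's Jacobian-free Newton-Krylov solver; the 1-D diffusion-reaction test problem is an assumption chosen for brevity, not one of the applications discussed in the paper.

      # Jacobian-free Newton-Krylov sketch using SciPy; the test problem is assumed.
      import numpy as np
      from scipy.optimize import newton_krylov

      n = 50
      h = 1.0 / (n + 1)

      def residual(u):
          # residual of -u'' + u**3 = 1 with homogeneous Dirichlet boundary conditions
          up = np.concatenate(([0.0], u, [0.0]))      # pad with boundary values
          lap = (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2
          return -lap + u**3 - 1.0

      u0 = np.zeros(n)
      # the Jacobian is never formed: Jacobian-vector products are approximated
      # by finite differences inside the Krylov (GMRES) solver
      u = newton_krylov(residual, u0, method='gmres', f_tol=1e-10)
      print("max residual:", np.abs(residual(u)).max())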

  9. The Radiation Problem from a Vertical Hertzian Dipole Antenna above Flat and Lossy Ground: Novel Formulation in the Spectral Domain with Closed-Form Analytical Solution in the High Frequency Regime

    Directory of Open Access Journals (Sweden)

    K. Ioannidi

    2014-01-01

    Full Text Available We consider the problem of radiation from a vertical short (Hertzian dipole above flat lossy ground, which represents the well-known “Sommerfeld radiation problem” in the literature. The problem is formulated in a novel spectral domain approach, and by inverse three-dimensional Fourier transformation the expressions for the received electric and magnetic (EM field in the physical space are derived as one-dimensional integrals over the radial component of wavevector, in cylindrical coordinates. This formulation appears to have inherent advantages over the classical formulation by Sommerfeld, performed in the spatial domain, since it avoids the use of the so-called Hertz potential and its subsequent differentiation for the calculation of the received EM field. Subsequent use of the stationary phase method in the high frequency regime yields closed-form analytical solutions for the received EM field vectors, which coincide with the corresponding reflected EM field originating from the image point. In this way, we conclude that the so-called “space wave” in the literature represents the total solution of the Sommerfeld problem in the high frequency regime, in which case the surface wave can be ignored. Finally, numerical results are presented, in comparison with corresponding numerical results based on Norton’s solution of the problem.

  10. Proximal methods for the resolution of inverse problems: application to positron emission tomography

    International Nuclear Information System (INIS)

    Pustelnik, N.

    2010-12-01

    The objective of this work is to propose reliable, efficient and fast methods for minimizing convex criteria that are found in inverse problems for imagery. We focus on restoration/reconstruction problems when data is degraded with both a linear operator and noise, where the latter is not assumed to be necessarily additive. The reliability of the method is ensured through the use of proximal algorithms, the convergence of which is guaranteed when a convex criterion is considered. Efficiency is sought through the choice of criteria adapted to the noise characteristics, the linear operators and the image specificities. Of particular interest are regularization terms based on total variation and/or sparsity of signal frame coefficients. As a consequence of the use of frames, two approaches are investigated, depending on whether the analysis or the synthesis formulation is chosen. Fast processing requirements lead us to consider proximal algorithms with a parallel structure. Theoretical results are illustrated on several large size inverse problems arising in image restoration, stereoscopy, multi-spectral imagery and decomposition into texture and geometry components. We focus on a particular application, namely Positron Emission Tomography (PET), which is particularly difficult because of the presence of a projection operator combined with Poisson noise, leading to highly corrupted data. To optimize the quality of the reconstruction, we make use of the spatio-temporal characteristics of brain tissue activity. (author)
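
    A minimal forward-backward (proximal gradient) iteration for an l1-penalized least-squares problem is sketched below; this is only one simple instance of the convex criteria handled by proximal algorithms, and the random operator and data are stand-ins, not a PET model.

      # Forward-backward (proximal gradient) iteration for
      #     min_x 0.5*||A x - y||^2 + lam*||x||_1
      # The operator A and data y are random stand-ins.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 200))
      x_true = np.zeros(200)
      x_true[rng.choice(200, 8, replace=False)] = 1.0
      y = A @ x_true + 0.01 * rng.standard_normal(60)

      lam = 0.1
      L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
      soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1

      x = np.zeros(200)
      for _ in range(500):
          grad = A.T @ (A @ x - y)                  # gradient (forward) step ...
          x = soft(x - grad / L, lam / L)           # ... followed by the proximal (backward) step
      print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))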

  11. Planar multibody dynamics formulation, programming and applications

    CERN Document Server

    Nikravesh, Parviz E

    2007-01-01

    Introduction Multibody Mechanical Systems Types of Analyses Methods of Formulation Computer Programming Application Examples Unit System Remarks Preliminaries Reference Axes Scalars and Vectors Matrices Vector, Array, and Matrix Differentiation Equations and Expressions Remarks Problems Fundamentals of Kinematics A Particle Kinematics of a Rigid Body Definitions Remarks Problems Fundamentals of Dynamics Newton's Laws of Motion Dynamics of a Body Force Elements Applied Forces Reaction Force Remarks Problems Point-Coordinates: Kinematics Multipoint

  12. Petrov-Galerkin mixed formulations for bidimensional elasticity

    International Nuclear Information System (INIS)

    Toledo, E.M.; Loula, A.F.D.; Guerreiro, J.N.C.

    1989-10-01

    A new formulation for two-dimensional elasticity in stresses and displacements is presented. By consistently adding residual forms of the constitutive and equilibrium equations to the classical Galerkin formulation, the original saddle-point problem is transformed into a minimization problem without any restrictions. We also propose a stress post-processing technique using both the equilibrium and constitutive equations. Numerical analysis error estimates and numerical results are presented, confirming the predicted rates of convergence. (A.C.A.S.) [pt]

  13. Modelling low Reynolds number vortex-induced vibration problems with a fixed mesh fluid-solid interaction formulation

    Science.gov (United States)

    González Cornejo, Felipe A.; Cruchaga, Marcela A.; Celentano, Diego J.

    2017-11-01

    The present work reports a fluid-rigid solid interaction formulation described within the framework of a fixed-mesh technique. The numerical analysis is focussed on the study of a vortex-induced vibration (VIV) of a circular cylinder at low Reynolds number. The proposed numerical scheme encompasses the fluid dynamics computation in an Eulerian domain where the body is embedded using a collection of markers to describe its shape, and the rigid solid's motion is obtained with the well-known Newton's law. The body's velocity is imposed on the fluid domain through a penalty technique on the embedded fluid-solid interface. The fluid tractions acting on the solid are computed from the fluid dynamic solution of the flow around the body. The resulting forces are considered to solve the solid motion. The numerical code is validated by contrasting the obtained results with those reported in the literature using different approaches for simulating the flow past a fixed circular cylinder as a benchmark problem. Moreover, a mesh convergence analysis is also done providing a satisfactory response. In particular, a VIV problem is analyzed, emphasizing the description of the synchronization phenomenon.

  14. An efficient formulation for linear and geometric non-linear membrane elements

    Directory of Open Access Journals (Sweden)

    Mohammad Rezaiee-Pajand

    Full Text Available Utilizing the strain-gradient notation process and the free formulation, an efficient way of constructing membrane elements is proposed. This strategy can be utilized for linear and geometric non-linear problems. In the suggested formulation, the optimization constraints of insensitivity to distortion, rotational invariance and freedom from parasitic shear error are employed. In addition, the equilibrium equations are established based on some constraints among the strain states. The authors' technique can easily separate the rigid body motions from the deformational motions. In this article, a novel triangular element, named SST10, is formulated. This element is used in several plane problems having irregular meshes and complicated geometry with linear and geometrically nonlinear behavior. The numerical outcomes clearly demonstrate the efficiency of the new formulation.

  15. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    Science.gov (United States)

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
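
    The following sketch computes multi-scale spectral descriptors on a small graph via the graph Laplacian, in the spirit of non-Euclidean (spectral graph) wavelets; the toy graph, the band-pass kernel and the scales are illustrative assumptions rather than the authors' construction.

      # Multi-scale spectral descriptors on a tiny graph via its Laplacian spectrum.
      import numpy as np

      A = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0],
                    [1, 1, 0, 1, 1],
                    [0, 1, 1, 0, 1],
                    [0, 0, 1, 1, 0]], dtype=float)   # adjacency of a tiny mesh-like graph
      L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian
      lam, U = np.linalg.eigh(L)                     # graph spectrum

      g = lambda x: x * np.exp(-x)                   # assumed band-pass kernel
      scales = [0.5, 1.0, 2.0, 4.0]

      # wavelet operator at scale s:  W_s = U g(s*Lambda) U^T ; its n-th column is the
      # wavelet centred on vertex n, giving a multi-scale signature of local topology
      W = np.stack([U @ (g(s * lam)[:, None] * U.T) for s in scales])
      print("multi-scale descriptor of vertex 2:\n", W[:, :, 2])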

  16. A direct comparison of physical block occupancy versus timed block occupancy in train timetabling formulations

    DEFF Research Database (Denmark)

    Harrod, Steven; Schlechte, Thomas

    2013-01-01

    Two fundamental mathematical formulations for railway timetabling are compared on a common set of sample problems, representing both multiple track high density services in Europe and single track bidirectional operations in North America. One formulation, ACP, enforces against conflicts by const...

  17. Static and kinematic formulation of planar reciprocal assemblies

    DEFF Research Database (Denmark)

    Parigi, Dario; Sassone, Mario; Kirkegaard, Poul Henning

    2014-01-01

    Planar reciprocal frames are two dimensional structures formed by elements joined together according to the principle of structural reciprocity. In this paper a rigorous formulation of the static and kinematic problem is proposed and developed extending the theory of pin-jointed assemblies. This formulation is used to evaluate the static and kinematic determinacy of reciprocal assemblies from the properties of their equilibrium and kinematic matrices.

  18. An overview of the formulation, existence and uniqueness issues for the initial value problem raised by the dynamics of discrete systems with unilateral contact and dry friction

    Science.gov (United States)

    Ballard, Patrick; Charles, Alexandre

    2018-03-01

    At the end of the seventies, Schatzman and Moreau undertook to revisit the venerable dynamics of rigid bodies with contact and dry friction in the light of more recent mathematics. One claimed objective was to reach, for the first time, a mathematically consistent formulation of an initial value problem associated with the dynamics. The purpose of this article is to review the current state of the art concerning not only the formulation, but also the issues of existence and uniqueness of solutions.

  19. Effects of Interactive Voice Response Self-Monitoring on Natural Resolution of Drinking Problems: Utilization and Behavioral Economic Factors

    Science.gov (United States)

    Tucker, Jalie A.; Roth, David L.; Huang, Jin; Scott Crawford, M.; Simpson, Cathy A.

    2012-01-01

    Objective: Most problem drinkers do not seek help, and many recover on their own. A randomized controlled trial evaluated whether supportive interactive voice response (IVR) self-monitoring facilitated such “natural” resolutions. Based on behavioral economics, effects on drinking outcomes were hypothesized to vary with drinkers’ baseline “time horizons,” reflecting preferences among commodities of different value available over different delays and with their IVR utilization. Method: Recently resolved untreated problem drinkers were randomized to a 24-week IVR self-monitoring program (n = 87) or an assessment-only control condition (n = 98). Baseline interviews assessed outcome predictors including behavioral economic measures of reward preferences (delay discounting, pre-resolution monetary allocation to alcohol vs. savings). Six-month outcomes were categorized as resolved abstinent, resolved nonabstinent, unresolved, or missing. Complier average causal effect (CACE) models examined IVR self-monitoring effects. Results: IVR self-monitoring compliers (≥70% scheduled calls completed) were older and had greater pre-resolution drinking control and lower discounting than noncompliers (moderation than abstinent resolutions compared with predicted compliers in the control group with shorter time horizons and with all noncompliers. Intention-to-treat analytical models revealed no IVR-related effects. More balanced spending on savings versus alcohol predicted moderation in both approaches. Conclusions: IVR interventions should consider factors affecting IVR utilization and drinking outcomes, including person-specific behavioral economic variables. CACE models provide tools to evaluate interventions involving extended participation. PMID:22630807

  20. Boundary-integral equation formulation for time-dependent inelastic deformation in metals

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, V; Mukherjee, S

    1977-01-01

    The mathematical structure of various constitutive relations proposed in recent years for representing time-dependent inelastic deformation behavior of metals at elevated temperatures has certain features which permit a simple formulation of the three-dimensional inelasticity problem in terms of real time rates. A direct formulation of the boundary-integral equation method in terms of rates is discussed for the analysis of time-dependent inelastic deformation of arbitrarily shaped three-dimensional metallic bodies subjected to arbitrary mechanical and thermal loading histories and obeying constitutive relations of the kind mentioned above. The formulation is based on the assumption of infinitesimal deformations. Several illustrative examples involving creep of thick-walled spheres, long thick-walled cylinders, and rotating discs are discussed. The implementation of the method appears to be far easier than analogous BIE formulations that have been suggested for elastoplastic problems.

  1. LP formulation of asymmetric zero-sum stochastic games

    KAUST Repository

    Li, Lichun

    2014-12-15

    This paper provides an efficient linear programming (LP) formulation of asymmetric two player zero-sum stochastic games with finite horizon. In these stochastic games, only one player is informed of the state at each stage, and the transition law is only controlled by the informed player. Compared with the LP formulation of extensive stochastic games whose size grows polynomially with respect to the size of the state and the size of the uninformed player's actions, our proposed LP formulation has its size to be linear with respect to the size of the state and the size of the uninformed player, and hence greatly reduces the computational complexity. A travelling inspector problem is used to demonstrate the efficiency of the proposed LP formulation.

  2. LP formulation of asymmetric zero-sum stochastic games

    KAUST Repository

    Li, Lichun; Shamma, Jeff S.

    2014-01-01

    This paper provides an efficient linear programming (LP) formulation of asymmetric two player zero-sum stochastic games with finite horizon. In these stochastic games, only one player is informed of the state at each stage, and the transition law is only controlled by the informed player. Compared with the LP formulation of extensive stochastic games whose size grows polynomially with respect to the size of the state and the size of the uninformed player's actions, our proposed LP formulation has its size to be linear with respect to the size of the state and the size of the uninformed player, and hence greatly reduces the computational complexity. A travelling inspector problem is used to demonstrate the efficiency of the proposed LP formulation.

  3. Resolution enhancement of robust Bayesian pre-stack inversion in the frequency domain

    Science.gov (United States)

    Yin, Xingyao; Li, Kun; Zong, Zhaoyun

    2016-10-01

    AVO/AVA (amplitude variation with offset or angle) inversion is one of the most practical and useful approaches to estimating model parameters. So far, publications on AVO inversion in the Fourier domain have been quite limited because of its poor stability and its sensitivity to noise compared with time-domain inversion. To improve the resolution and stability of AVO inversion in the Fourier domain, a novel robust Bayesian pre-stack AVO inversion based on a mixed-domain formulation of stationary convolution is proposed, which resolves the instability and achieves superior resolution. The Fourier operator is integrated into the objective equation, which avoids the inverse Fourier transform in the inversion process. Furthermore, background constraints on the model parameters are taken into consideration to improve the stability and reliability of the inversion and to compensate for the missing low-frequency components of seismic signals. In addition, the different frequency components of the seismic signal decouple automatically, which allows the inverse problem to be solved by multi-component successive iterations and improves the convergence precision. As a result, superior resolution compared with conventional time-domain pre-stack inversion can be achieved. Synthetic tests illustrate that the proposed method achieves high-resolution results in close agreement with the theoretical model and verify its robustness to noise. Finally, application to a field data case demonstrates that the proposed method obtains stable inversion results for the elastic parameters from pre-stack seismic data, in conformity with the real logging data.
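
    A generic sketch of the background-constrained (Tikhonov/Bayesian MAP) linear inversion idea mentioned above, m_hat = argmin ||d - Gm||^2 + mu ||m - m0||^2, is given below; the operator, data and smooth background are random stand-ins, not an AVO forward model.

      # Background-constrained linear inversion sketch; G, d and m0 are stand-ins.
      import numpy as np

      rng = np.random.default_rng(1)
      G = rng.standard_normal((120, 80))
      m_true = 0.1 * np.cumsum(rng.standard_normal(80))           # smooth "elastic" profile
      d = G @ m_true + 0.5 * rng.standard_normal(120)              # noisy data
      m0 = np.convolve(m_true, np.ones(15) / 15.0, mode='same')    # low-frequency background

      mu = 5.0
      # normal equations of ||d - G m||^2 + mu*||m - m0||^2
      m_hat = np.linalg.solve(G.T @ G + mu * np.eye(80), G.T @ d + mu * m0)
      print("relative model error:", np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))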

  4. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve the two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested with a few two-phase flow problems with phase appearance/disappearance phenomena considered, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behaviors in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. High-resolution spatial discretization and second-order fully implicit methods also demonstrated their capabilities in significantly reducing numerical errors.

  5. "What constitutes a 'problem'?" Producing 'alcohol problems' through online counselling encounters.

    Science.gov (United States)

    Savic, Michael; Ferguson, Nyssa; Manning, Victoria; Bathish, Ramez; Lubman, Dan I

    2017-08-01

    Typically, health policy, practice and research views alcohol and other drug (AOD) 'problems' as objective things waiting to be detected, diagnosed and treated. However, this approach to policy development and treatment downplays the role of clinical practices, tools, discourses, and systems in shaping how AOD use is constituted as a 'problem'. For instance, people might present to AOD treatment with multiple psycho-social concerns, but usually only a singular AOD-associated 'problem' is considered serviceable. As the assumed nature of 'the serviceable problem' influences what treatment responses people receive, and how they may come to be enacted as 'addicted' or 'normal' subjects, it is important to subject clinical practices of problem formulation to critical analysis. Given that the reach of AOD treatment has expanded via the online medium, in this article we examine how 'problems' are produced in online alcohol counselling encounters involving people aged 55 and over. Drawing on poststructural approaches to problematisation, we not only trace how and what 'problems' are produced, but also what effects these give rise to. We discuss three approaches to problem formulation: (1) Addiction discourses at work; (2) Moving between concerns and alcohol 'problems'; (3) Making 'problems' complex and multiple. On the basis of this analysis, we argue that online AOD counselling does not just respond to pre-existing 'AOD problems'. Rather, through the social and clinical practices of formulation at work in clinical encounters, online counselling also produces them. Thus, given a different set of circumstances, practices and relations, 'problems' might be defined or emerge differently-perhaps not as 'problems' at all or perhaps as different kinds of concerns. We conclude by highlighting the need for a critical reflexivity in AOD treatment and policy in order to open up possibilities for different ways of engaging with, and responding to, people's needs in their complexity

  6. Finite-element formulations for the thermal stress analysis of two- and three-dimensional thin reactor structures

    International Nuclear Information System (INIS)

    Kulak, R.F.; Kennedy, J.M.; Belytschko, T.B.; Schoeberle, D.F.

    1977-01-01

    This paper describes finite-element formulations for the thermal stress analysis of LMFBR structures. The first formulation is applicable to large displacement-rotation problems in which the strains are small. For this formulation, a general temperature-dependent constitutive relationship is derived from a Gibbs potential and a temperature-dependent surface. A second formulation is presented for problems characterized by both large displacement-rotations and large strains. Here a set of large-strain hypoelastic-plastic relationships is developed to linearly relate the rate of stress to the rate of deformation. These developments were incorporated into two ANL-developed finite-element computer codes: the implicit version of STRAW and the 3D Implicit Structural Analysis code. A set of problems is presented to validate both the 3D and 2D programs and to illustrate their applicability to a variety of problems. (Auth.)

  7. Governing through problems: the formulation of policy on amphetamine-type stimulants (ATS) in Australia.

    Science.gov (United States)

    Fraser, Suzanne; Moore, David

    2011-11-01

    Producing and implementing credible and effective policies on illicit drug use is generally seen as an important aspect of health governance in the West. Yet the controversy surrounding illicit drug use means this is no easy task. With public opinion perceived by policy makers to be set against illicit drug use, and understandings of its effects tending towards generalisation and pathologisation, the need for timely and rational responses is considered self evident. These responses are, however, regularly criticised as driven as much by electoral politics and expedience as by research findings or expert opinion. Destined to receive close critical scrutiny from all sides, these policies, and the processes undertaken to develop them, are obliged to negotiate a complex political domain. Despite this scrutiny, and the pressure it brings to bear on the policy-making process, little scholarly attention has been paid to the area to date. In this article, we examine in detail one important area of illicit drug policy - the use of amphetamine-type stimulants (ATS) in Australia. We draw on the international critical literature on the ATS problem to situate our analysis. We note that ideas of 'panic', including Cohen's notion of moral panic, have been used here to good effect, but, aiming to acknowledge the complexities of policy, we turn to poststructuralist methods of policy analysis to pursue a different approach. Following Bacchi's observation that 'we are governed through problematisations rather than policies' (2009, p. xi), we ask how the problem of ATS use has been formulated in policy. We examine key state and national policy documents, and two central themes found in them - causation and evidence - to identify the specific strategies used to authorise the recommendations and measures presented as following from the problem of ATS use. In doing so, we clarify important ways in which policy may at times work to obscure the limits of its legitimacy. Copyright © 2011

  8. Variational formulation and projectional methods for the second order transport equation

    International Nuclear Information System (INIS)

    Borysiewicz, M.; Stankiewicz, R.

    1979-01-01

    Herein the variational problem for a second-order boundary value problem for the neutron transport equation is formulated. The projectional methods solving the problem are examined. The approach is compared with that based on the original untransformed form of the neutron transport equation

  9. Microsoft Excel Sensitivity Analysis for Linear and Stochastic Program Feed Formulation

    Science.gov (United States)

    Sensitivity analysis is a part of mathematical programming solutions and is used in making nutritional and economic decisions for a given feed formulation problem. The terms, shadow price and reduced cost, are familiar linear program (LP) terms to feed formulators. Because of the nonlinear nature of...

  10. Applications of functional analysis to optimal control problems

    International Nuclear Information System (INIS)

    Mizukami, K.

    1976-01-01

    Some basic concepts in functional analysis, a general norm, the Hoelder inequality, functionals and the Hahn-Banach theorem are described; a mathematical formulation of two optimal control problems is introduced by the method of functional analysis. The problem of time-optimal control systems with both norm constraints on control inputs and on state variables at discrete intermediate times is formulated as an L-problem in the theory of moments. The simplex method is used for solving a non-linear minimizing problem inherent in the functional analysis solution to this problem. Numerical results are presented for a train operation. The second problem is that of optimal control of discrete linear systems with quadratic cost functionals. The problem is concerned with the case of unconstrained control and fixed endpoints. This problem is formulated in terms of norms of functionals on suitable Banach spaces. (author)

  11. Evolution of magnetic field and atmospheric response. I - Three-dimensional formulation by the method of projected characteristics. II - Formulation of proper boundary equations. [stellar magnetohydrodynamics]

    Science.gov (United States)

    Nakagawa, Y.

    1981-01-01

    The method described as the method of near-characteristics by Nakagawa (1980) is renamed the method of projected characteristics. Making full use of properties of the projected characteristics, a new and simpler formulation is developed. As a result, the formulation for the examination of the general three-dimensional problems is presented. It is noted that since in practice numerical solutions must be obtained, the final formulation is given in the form of difference equations. The possibility of including effects of viscous and ohmic dissipations in the formulation is considered, and the physical interpretation is discussed. A systematic manner is then presented for deriving physically self-consistent, time-dependent boundary equations for MHD initial boundary problems. It is demonstrated that the full use of the compatibility equations (differential equations relating variations at two spatial locations and times) is required in determining the time-dependent boundary conditions. In order to provide a clear physical picture as an example, the evolution of an axisymmetric global magnetic field by photospheric differential rotation is considered.

  12. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    Garcia, C.R.; Quintero, R.; Milian, D.

    1992-01-01

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 from Loviisa NPP, is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, March 1989. The SPPS-1 coarse mesh code has been used for the calculations.

  13. On fictitious domain formulations for Maxwell's equations

    DEFF Research Database (Denmark)

    Dahmen, W.; Jensen, Torben Klint; Urban, K.

    2003-01-01

    We consider fictitious domain-Lagrange multiplier formulations for variational problems in the space H(curl: Omega) derived from Maxwell's equations. Boundary conditions and the divergence constraint are imposed weakly by using Lagrange multipliers. Both the time dependent and time harmonic formu...

  14. Global Energy-Optimal Redundancy Resolution of Hydraulic Manipulators: Experimental Results for a Forestry Manipulator

    Directory of Open Access Journals (Sweden)

    Jarmo Nurmi

    2017-05-01

    Full Text Available This paper addresses the energy-inefficiency problem of four-degrees-of-freedom (4-DOF) hydraulic manipulators through redundancy resolution in robotic closed-loop controlled applications. Because conventional methods typically are local and have poor performance for resolving redundancy with respect to minimum hydraulic energy consumption, global energy-optimal redundancy resolution is proposed at the valve-controlled actuator and hydraulic power system interaction level. The energy consumption of the widely popular valve-controlled load-sensing (LS) and constant-pressure (CP) systems is effectively minimised through cost functions formulated in a discrete-time dynamic programming (DP) approach with minimum state representation. A prescribed end-effector path and important actuator constraints at the position, velocity and acceleration levels are also satisfied in the solution. Extensive field experiments performed on a forestry hydraulic manipulator demonstrate the performance of the proposed solution. Approximately 15–30% greater hydraulic energy consumption was observed with the conventional methods in the LS and CP systems. These results encourage energy-optimal redundancy resolution in future robotic applications of hydraulic manipulators.

  15. About solution of multipoint boundary problem of static analysis of deep beam with the use of combined application of finite element method and discrete-continual finite element method. part 1: formulation of the problem and general principles of approximation

    Directory of Open Access Journals (Sweden)

    Lyakhovich Leonid

    2017-01-01

    Full Text Available This paper is devoted to the formulation and general principles of approximation of the multipoint boundary problem of static analysis of a deep beam with the use of the combined application of the finite element method (FEM) and the discrete-continual finite element method (DCFEM). The field of application of DCFEM comprises structures with regular physical and geometrical parameters in some dimension (the “basic” dimension). DCFEM presupposes finite element approximation for the non-basic dimension, while in the basic dimension the problem remains continual. DCFEM is based on analytical solutions of the resulting multipoint boundary problems for systems of ordinary differential equations with piecewise-constant coefficients.

  16. Clarification process: Resolution of decision-problem conditions

    Science.gov (United States)

    Dieterly, D. L.

    1980-01-01

    A model of a general process which occurs in both decisionmaking and problem-solving tasks is presented. It is called the clarification model and is highly dependent on information flow. The model addresses the possible constraints of individual differences and experience in achieving success in resolving decision-problem conditions. As indicated, the application of the clarification process model is only necessary for certain classes of the basic decision-problem condition. With less complex decision-problem conditions, certain phases of the model may be omitted. The model may be applied across a wide range of decision-problem conditions. The model consists of two major components: (1) the five-phase prescriptive sequence (based on previous approaches to both concepts) and (2) the information manipulation function (which draws upon current ideas in the areas of information processing, computer programming, memory, and thinking). The two components are linked together to provide a structure that assists in understanding the process of resolving problems and making decisions.

  17. States in the Hilbert space formulation and in the phase space formulation of quantum mechanics

    International Nuclear Information System (INIS)

    Tosiek, J.; Brzykcy, P.

    2013-01-01

    We consider the problem of testing whether a given matrix in the Hilbert space formulation of quantum mechanics or a function considered in the phase space formulation of quantum theory represents a quantum state. We propose several practical criteria for recognising states in these two versions of quantum physics. After minor modifications, they can be applied to check positivity of any operators acting in a Hilbert space or positivity of any functions from an algebra with a ∗-product of Weyl type. -- Highlights: ► Methods of testing whether a given matrix represents a quantum state. ► The Stratonovich–Weyl correspondence on an arbitrary symplectic manifold. ► Criteria for checking whether a function on a symplectic space is a Wigner function

  18. Automated Conflict Resolution For Air Traffic Control

    Science.gov (United States)

    Erzberger, Heinz

    2005-01-01

    The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have not yet been demonstrated under a sufficiently wide range of traffic conditions to show that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft, the route structure of the airway system, and be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.

  19. Statistical formulation of gravitational radiation reaction

    International Nuclear Information System (INIS)

    Schutz, B.F.

    1980-01-01

    A new formulation of the radiation-reaction problem is proposed, which is simpler than alternatives which have been used before. The new approach is based on the initial-value problem, uses approximations which need be uniformly valid only in compact regions of space-time, and makes no time-asymmetric assumptions (no a priori introduction of retarded potentials or outgoing-wave asymptotic conditions). It defines radiation reaction to be the expected evolution of a source obtained by averaging over a statistical ensemble of initial conditions. The ensemble is chosen to reflect one's complete lack of information (in real systems) about the initial data for the radiation field. The approach is applied to the simple case of a weak-field, slow-motion source in general relativity, where it yields the usual expressions for radiation reaction when the gauge is chosen properly. There is a discussion of gauge freedom, and another of the necessity of taking into account reaction corrections to the particle-conservation equation. The analogy with the second law of thermodynamics is very close, and suggests that the electromagnetic and thermodynamic arrows of time are the same. Because the formulation is based on the usual initial-value problem, it has no spurious ''runaway'' solutions

  20. Classifying IS Project Problems

    DEFF Research Database (Denmark)

    Munk-Madsen, Andreas

    2006-01-01

    The literature contains many lists of IS project problems, often in the form of risk factors. The problems sometimes appear unordered and overlapping, which reduces their usefulness to practitioners as well as theoreticians. This paper proposes a list of criteria for formulating project problems...

  1. Path inequalities for the vehicle routing problem with time windows

    DEFF Research Database (Denmark)

    Kallehauge, Brian; Boland, Natashia; Madsen, Oli B.G.

    2007-01-01

    In this paper we introduce a new formulation of the vehicle routing problem with time windows (VRPTW) involving only binary variables. The new formulation is based on the formulation of the asymmetric traveling salesman problem with time windows by Ascheuer et al. (Networks 36 (2000) 69-79) and has...

  2. Case Formulation in Psychotherapy: Revitalizing Its Usefulness as a Clinical Tool

    Science.gov (United States)

    Sim, Kang; Gwee, Kok Peng; Bateman, Anthony

    2005-01-01

    Objective: Case formulation has been recognized to be a useful conceptual and clinical tool in psychotherapy as diagnosis itself does not focus on the underlying causes of a patient's problems. Case formulation can fill the gap between diagnosis and treatment, with the potential to provide insights into the integrative, explanatory, prescriptive,…

  3. Variational principles are a powerful tool also for formulating field theories

    OpenAIRE

    Dell'Isola , Francesco; Placidi , Luca

    2012-01-01

    Variational principles and calculus of variations have always been an important tool for formulating mathematical models for physical phenomena. Variational methods give an efficient and elegant way to formulate and solve mathematical problems that are of interest for scientists and engineers and are the main tool for the axiomatization of physical theories

  4. Linearized inversion frameworks toward high-resolution seismic imaging

    KAUST Repository

    Aldawood, Ali

    2016-09-01

    Seismic exploration utilizes controlled sources, which emit seismic waves that propagate through the earth subsurface and get reflected off subsurface interfaces and scatterers. The reflected and scattered waves are recorded by recording stations installed along the earth surface or down boreholes. Seismic imaging is a powerful tool to map these reflected and scattered energy back to their subsurface scattering or reflection points. Seismic imaging is conventionally based on the single-scattering assumption, where only energy that bounces once off a subsurface scatterer and recorded by a receiver is projected back to its subsurface position. The internally multiply scattered seismic energy is considered as unwanted noise and is usually suppressed or removed from the recorded data. Conventional seismic imaging techniques yield subsurface images that suffer from low spatial resolution, migration artifacts, and acquisition fingerprint due to the limited acquisition aperture, number of sources and receivers, and bandwidth of the source wavelet. Hydrocarbon traps are becoming more challenging and considerable reserves are trapped in stratigraphic and pinch-out traps, which require highly resolved seismic images to delineate them. This thesis focuses on developing and implementing new advanced cost-effective seismic imaging techniques aiming at enhancing the resolution of the migrated images by exploiting the sparseness of the subsurface reflectivity distribution and utilizing the multiples that are usually neglected when imaging seismic data. I first formulate the seismic imaging problem as a Basis pursuit denoise problem, which I solve using an L1-minimization algorithm to obtain the sparsest migrated image corresponding to the recorded data. Imaging multiples may illuminate subsurface zones, which are not easily illuminated by conventional seismic imaging using primary reflections only. I then develop an L2-norm (i.e. least-squares) inversion technique to image
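
    A minimal least-squares inversion sketch solved with LSQR is given below, illustrating the linearized imaging idea in its simplest form; the linear operator and data are random stand-ins for a Born modelling operator and recorded seismic data, not the thesis implementation.

      # Least-squares inversion sketch, min_m ||L m - d||_2, solved with LSQR.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, lsqr

      rng = np.random.default_rng(2)
      ndata, nmodel = 250, 400
      B = rng.standard_normal((ndata, nmodel)) / np.sqrt(nmodel)

      # matrix-free "modelling" operator and its adjoint ("migration")
      L = LinearOperator((ndata, nmodel),
                         matvec=lambda m: B @ m,
                         rmatvec=lambda d: B.T @ d)

      m_true = np.zeros(nmodel)
      m_true[rng.choice(nmodel, 10, replace=False)] = 1.0          # sparse reflectivity
      d = L.matvec(m_true)

      m_ls = lsqr(L, d, iter_lim=200)[0]                           # iterative least-squares image
      print("data residual norm:", np.linalg.norm(L.matvec(m_ls) - d))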

  5. Formulation and stability testing of photolabile drugs.

    Science.gov (United States)

    Tønnesen, H H

    2001-08-28

    Exposure of a drug to irradiation can influence the stability of the formulation, leading to changes in the physicochemical properties of the product. The influence of excipients or frequently used stabilizers is often difficult to predict and, therefore, stability testing of the final preparation is important. The selection of a protective packaging must be based on knowledge about the wavelength causing the instability. Details on drug photoreactivity will also be helpful in order to minimize side-effects and/or optimize drug targeting by developing photoresponsive drug delivery systems. This review focuses on practical problems related to formulation and stability testing of photolabile drugs.

  6. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James

    2009-11-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  7. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James; Ho, Pin-Han; Shihada, Basem

    2009-01-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  8. Numerical simulation using vorticity-vector potential formulation

    Science.gov (United States)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has so far been used as the computational method. However, finite difference or finite element methods are widely applied for computing flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. There exist, however, several problems in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important problems. This point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multi-grid Poisson solver is combined with a higher-order accurate finite difference method. The formulation method is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive-variables formulation. One of the major difficulties of this method is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and the 4th-order accurate difference method as the

  9. Canonical resolution of the multiplicity problem for U(3): an explicit and complete constructive solution

    International Nuclear Information System (INIS)

    Biedenharn, L.C.; Lohe, M.A.; Louck, J.D.

    1975-01-01

    The multiplicity problem for tensor operators in U(3) has a unique (canonical) resolution which is utilized to effect the explicit construction of all U(3) Wigner and Racah coefficients. Methods are employed which elucidate the structure of the results; in particular, the significance of the denominator functions entering the structure of these coefficients, and the relation of these denominator functions to the null space of the canonical tensor operators. An interesting feature of the denominator functions is the appearance of new, group theoretical, polynomials exhibiting several remarkable and quite unexpected properties. (U.S.)

  10. Anticancer Potential of Nutraceutical Formulations in MNU-induced Mammary Cancer in Sprague Dawley Rats

    OpenAIRE

    Pitchaiah, Gummalla; Akula, Annapurna; Chandi, Vishala

    2017-01-01

    Background: Nutraceuticals help in combating some of the major health problems of the century including cancer, and 'nutraceutical formulations' have led to the new era of medicine and health. Objective: To develop different nutraceutical formulations and to assess the anticancer potential of nutraceutical formulations in N-methyl-N-nitrosourea (MNU)-induced mammary cancer in Sprague Dawley rats. Materials and Methods: Different nutraceutical formulations were prepared using fine powders of a...

  11. Lagrangian and Eulerian finite element techniques for transient fluid-structure interaction problems

    International Nuclear Information System (INIS)

    Donea, J.; Fasoli-Stella, P.; Giuliani, S.

    1977-01-01

    The basic finite element equations for transient compressible fluid flow are presented in a form that allows the elements to be moved with the fluid in normal Lagrangian fashion, to be held fixed in a Eulerian manner, or to be moved in some arbitrarily specified way. The co-existence of Lagrangian and Eulerian regions within the finite element mesh will permit to handle greater distortions in the fluid motion than would be allowed by a purely Lagrangian method, with more resolution than is afforded by a purely Eulerian method. To achieve a mixed formulation, the conservation statements of mass, momentum and energy are expressed in integral form over a reference volume whose surface may be moving with an arbitrarily prescribed velocity. Direct use can be made of the integral forms of the mass and energy equations to adjust the element density and specific internal energy. The Galerkin process is employed to formulate a variational statement associated with the momentum equation. The difficulties associated with the presence of convective terms in the conservation equations are handled by expressing transports of mass, momentum and energy terms of intermediate velocities derived at each cycle from the previous cycle velocities and accelerations. The hydrodynamic elements presented are triangles, quadrilaterals with constant pressure and density. The finite element equations associated with these elements are described in the necessary detail. Numerical results are presented based on purely Lagrangian, purely Eulerian and mixed formulations. Simple problems with analytic solution are solved first to show the validity and accuracy of the proposed mixed finite element formulation. Then, practical problems are illustrated in the field of fast reactor safety analysis

  12. Introducing radiality constraints in capacitated location-routing problems

    Directory of Open Access Journals (Sweden)

    Eliana Mirledy Toro Ocampo

    2017-03-01

    Full Text Available In this paper, we introduce a unified mathematical formulation for the Capacitated Vehicle Routing Problem (CVRP) and for the Capacitated Location Routing Problem (CLRP), adopting radiality constraints in order to guarantee valid routes and eliminate subtours. This idea is inspired by formulations already employed in electric power distribution networks, which require a radial topology in their operation. The results show that the proposed formulation greatly improves the convergence of the solver.

  13. A Computational Analysis of the Traveling Salesman and Cutting Stock Problems

    Directory of Open Access Journals (Sweden)

    Gracia María D.

    2015-01-01

    Full Text Available The aim of this article is to perform a computational study to analyze the impact of formulations and the solution strategy on the algorithmic performance of two classical optimization problems: the traveling salesman problem and the cutting stock problem. In order to assess the algorithmic performance on both problems, three dependent variables were used: solution quality, computing time and number of iterations. The results are useful for choosing the solution approach to each specific problem. In the STSP, the results demonstrate that the multistage decision formulation is better than the conventional formulations, by solving 90.47% of the instances compared with MTZ (76.19%) and DFJ (14.28%). The results of the CSP demonstrate that the cutting patterns formulation is better than the standard formulation with symmetry breaking inequalities, when the objective function is to minimize the loss of trim when cutting the rolls.
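
    For reference, the Miller-Tucker-Zemlin (MTZ) formulation mentioned above can be written in the standard textbook form below (stated here generically for a TSP on n cities; this is the classical formulation, not reproduced from the article):

      \begin{aligned}
      \min\ & \sum_{i \neq j} c_{ij}\, x_{ij} \\
      \text{s.t.}\ & \sum_{j \neq i} x_{ij} = 1, \qquad \sum_{j \neq i} x_{ji} = 1 && \forall i, \\
      & u_i - u_j + n\, x_{ij} \le n - 1 && \forall i \neq j,\ \ i, j \in \{2, \dots, n\}, \\
      & x_{ij} \in \{0, 1\}, \qquad 2 \le u_i \le n \quad (i = 2, \dots, n).
      \end{aligned}

    The DFJ alternative replaces the ordering variables u_i with the exponentially many subtour-elimination cuts \sum_{i, j \in S} x_{ij} \le |S| - 1 over proper subsets S, which are usually added to the model only as needed.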

  14. Time-evolution problem in Regge calculus

    International Nuclear Information System (INIS)

    Sorkin, R.

    1975-01-01

    The simplicial approximation to Einstein's equations ('Regge calculus') is derived by considering the net to be actually a (singular) Riemannian manifold. Specific nets for open and closed spaces are introduced, in terms of which one can formulate the general time-evolution problem, which thereby reduces to the repeated solution of finite sets of coupled nonlinear (algebraic) equations. The initial-value problem is also formulated in simplicial terms.

  15. Linear triangle finite element formulation for multigroup neutron transport analysis with anisotropic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lillie, R.A.; Robinson, J.C.

    1976-05-01

    The discrete ordinates method is the most powerful and generally used deterministic method to obtain approximate solutions of the Boltzmann transport equation. A finite element formulation, utilizing a canonical form of the transport equation, is here developed to obtain both integral and pointwise solutions to neutron transport problems. The formulation is based on the use of linear triangles. A general treatment of anisotropic scattering is included by employing discrete ordinates-like approximations. In addition, multigroup source outer iteration techniques are employed to perform group-dependent calculations. The ability of the formulation to reduce substantially ray effects and its ability to perform streaming calculations are demonstrated by analyzing a series of test problems. The anisotropic scattering and multigroup treatments used in the development of the formulation are verified by a number of one-dimensional comparisons. These comparisons also demonstrate the relative accuracy of the formulation in predicting integral parameters. The applicability of the formulation to nonorthogonal planar geometries is demonstrated by analyzing a hexagonal-type lattice. A small, high-leakage reactor model is analyzed to investigate the effects of varying both the spatial mesh and order of angular quadrature. This analysis reveals that these effects are more pronounced in the present formulation than in other conventional formulations. However, the insignificance of these effects is demonstrated by analyzing a realistic reactor configuration. In addition, this final analysis illustrates the importance of incorporating anisotropic scattering into the finite element formulation. 8 tables, 29 figures.

  16. Linear triangle finite element formulation for multigroup neutron transport analysis with anisotropic scattering

    International Nuclear Information System (INIS)

    Lillie, R.A.; Robinson, J.C.

    1976-05-01

    The discrete ordinates method is the most powerful and generally used deterministic method to obtain approximate solutions of the Boltzmann transport equation. A finite element formulation, utilizing a canonical form of the transport equation, is here developed to obtain both integral and pointwise solutions to neutron transport problems. The formulation is based on the use of linear triangles. A general treatment of anisotropic scattering is included by employing discrete ordinates-like approximations. In addition, multigroup source outer iteration techniques are employed to perform group-dependent calculations. The ability of the formulation to reduce substantially ray effects and its ability to perform streaming calculations are demonstrated by analyzing a series of test problems. The anisotropic scattering and multigroup treatments used in the development of the formulation are verified by a number of one-dimensional comparisons. These comparisons also demonstrate the relative accuracy of the formulation in predicting integral parameters. The applicability of the formulation to nonorthogonal planar geometries is demonstrated by analyzing a hexagonal-type lattice. A small, high-leakage reactor model is analyzed to investigate the effects of varying both the spatial mesh and order of angular quadrature. This analysis reveals that these effects are more pronounced in the present formulation than in other conventional formulations. However, the insignificance of these effects is demonstrated by analyzing a realistic reactor configuration. In addition, this final analysis illustrates the importance of incorporating anisotropic scattering into the finite element formulation. 8 tables, 29 figures

  17. Conflict Prevention and Resolution Center (CPRC)

    Science.gov (United States)

    The Conflict Prevention and Resolution Center is EPA's primary resource for services and expertise in the areas of consensus-building, collaborative problem solving, alternative dispute resolution, and environmental collaboration and conflict resolution.

  18. The covariant formulation of f(T) gravity

    International Nuclear Information System (INIS)

    Krššák, Martin; Saridakis, Emmanuel N

    2016-01-01

    We show that the well-known problem of frame dependence and violation of local Lorentz invariance in the usual formulation of f(T) gravity is a consequence of neglecting the role of the spin connection. We re-formulate f(T) gravity starting from, instead of the 'pure tetrad' teleparallel gravity, the covariant teleparallel gravity, using both the tetrad and the spin connection as dynamical variables, resulting in a fully covariant, consistent, and frame-independent version of f(T) gravity, which does not suffer from the notorious problems of the usual, pure tetrad, f(T) theory. We present the method to extract solutions for the most physically important cases, such as the Minkowski, the Friedmann-Robertson-Walker (FRW) and the spherically symmetric ones. We show that in covariant f(T) gravity we are allowed to use an arbitrary tetrad in an arbitrary coordinate system along with the corresponding spin connection, resulting always in the same physically relevant field equations. (paper)
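    For orientation, and up to normalization conventions (a standard form, not quoted from the abstract), the action both the pure-tetrad and covariant formulations start from can be written as

\[
  S = \frac{1}{16\pi G} \int d^{4}x \; e \, f(T) + S_{\text{matter}}, \qquad e = \det\!\big(e^{a}{}_{\mu}\big),
\]

    where T is the torsion scalar; in the covariant formulation T is built from both the tetrad e^{a}{}_{\mu} and the spin connection \omega^{a}{}_{b\mu}, which is what restores local Lorentz invariance.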

  19. High-resolution method for evolving complex interface networks

    Science.gov (United States)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to that of the Semi-Lagrangian regional level-set method, while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.

  20. Lagrangian formulation of classical BMT-theory

    International Nuclear Information System (INIS)

    Pupasov-Maksimov, Andrey; Deriglazov, Alexei; Guzman, Walberto

    2013-01-01

    Full text: The most popular classical theory of the electron was formulated by Bargmann, Michel and Telegdi (BMT) in 1959. The BMT equations give a classical relativistic description of a charged particle with spin and anomalous magnetic moment moving in a homogeneous electromagnetic field. This allows the study of the spin dynamics of polarized beams in uniform fields. In particular, the first experimental measurements of the muon anomalous magnetic moment were made using the change of helicity predicted by the BMT equations. Surprisingly enough, a systematic formulation and analysis of the BMT theory are absent from the literature. In the present work we partially fill this gap by deducing a Lagrangian formulation (variational problem) for the BMT equations. Various equivalent forms of the Lagrangian will be discussed in detail. An advantage of the obtained classical model is that the Lagrangian action describes a relativistic spinning particle without Grassmann variables, for both the free and the interacting case. This also implies the possibility of canonical quantization. In the interacting case, an arbitrary electromagnetic background may be considered, which generalizes the BMT theory, originally formulated for homogeneous fields. The classical model has two local symmetries, which gives an interesting example of constrained classical dynamics. It is surprising that the case of a vanishing anomalous part of the magnetic moment is naturally highlighted in our construction. (author)
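    For context, the Thomas-BMT precession equation for the spin four-vector S^{\mu} of a particle with charge e, mass m and gyromagnetic factor g in a homogeneous field F^{\mu\nu} is usually quoted (in units with c = 1, and with sign conventions varying between texts) as

\[
  \frac{dS^{\mu}}{d\tau} = \frac{e}{m}\left[\frac{g}{2}\,F^{\mu\nu}S_{\nu}
  + \left(\frac{g}{2}-1\right) u^{\mu}\,\big(S_{\alpha}F^{\alpha\beta}u_{\beta}\big)\right],
\]

    where u^{\mu} is the four-velocity; this is the textbook form, quoted here for orientation rather than from the abstract above.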

  1. The geometry of the SL(2,C) gauge formulation of general relativity

    International Nuclear Information System (INIS)

    Kaye, M.

    1978-01-01

    The formulation of Einstein's general theory of relativity as an SL(2,C) gauge theory is considered. Use is made of the language of fibre bundles and general arguments are put forward in favour of the SL(2,C) approach to problems connected with the study of the space-time structure. The possibility of deriving the dynamics of the theory from a Yang-Mills-type Lagrangian density is discussed. Finally, the spinor approach is compared with other approaches to the problem of formulating Einstein's theory as a gauge theory.

  2. FEM Formulation for Heat and Mass Transfer in Porous Medium

    Science.gov (United States)

    Azeem; Soudagar, Manzoor Elahi M.; Salman Ahmed, N. J.; Anjum Badruddin, Irfan

    2017-08-01

    Heat and mass transfer in a porous medium can be modelled using three partial differential equations, namely the momentum equation, the energy equation and the mass diffusion equation. These three equations are coupled to each other by common terms that turn the whole phenomenon into a complex problem with interdependent variables. The current article describes the finite element formulation of heat and mass transfer in a porous medium with respect to Cartesian coordinates. The problem under study is formulated in algebraic form by using Galerkin's method with linear triangular elements having three nodes. The domain is meshed with smaller elements near the wall region and larger elements away from the walls.
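    As a hedged illustration of the Galerkin building block mentioned above (a generic sketch, not the authors' code), the element matrix of a 3-node linear triangle for a diffusion-type term can be assembled as follows; the conductivity value and node coordinates are placeholders.

```python
import numpy as np

def linear_triangle_stiffness(xy, k=1.0):
    """Local matrix  K_e = k * A * B^T B  for a linear triangle with nodes xy (3x2)."""
    x, y = xy[:, 0], xy[:, 1]
    # Twice the signed area of the triangle.
    area2 = (x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0])
    area = 0.5 * abs(area2)
    # Gradients of the linear shape functions are constant over the element.
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]]) / area2
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]]) / area2
    B = np.vstack([b, c])                      # 2 x 3 matrix of shape-function gradients
    return k * area * (B.T @ B)                # 3 x 3 element matrix

# Unit right triangle with unit conductivity, for checking.
K_e = linear_triangle_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
```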

  3. Combined Helmholtz Integral Equation - Fourier series formulation of acoustical radiation and scattering problems

    CSIR Research Space (South Africa)

    Fedotov, I

    2006-07-01

    The Combined Helmholtz Integral Equation – Fourier series Formulation (CHIEFF) is based on representation of a velocity potential in terms of Fourier series and finding the Fourier coefficients of this expansion. The solution could be substantially...

  4. Formulation of poorly water-soluble Gemfibrozil applying power ultrasound.

    Science.gov (United States)

    Ambrus, R; Naghipour Amirzadi, N; Aigner, Z; Szabó-Révész, P

    2012-03-01

    The dissolution properties of a drug and its release from the dosage form have a basic impact on its bioavailability. Solubility problems are a major challenge for the pharmaceutical industry in the development of new pharmaceutical products. Formulation problems may be overcome by modification of particle size and morphology. The application of power ultrasound is a novel possibility in drug formulation. This article reports on solvent diffusion and melt emulsification, as new methods supplemented with drying, in the field of sonocrystallization of poorly water-soluble Gemfibrozil. During thermoanalytical characterization, a modified structure was detected. The specific surface area of the drug was increased following particle size reduction, and the poor wettability properties could also be improved. The dissolution rate was therefore significantly increased. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Software tool for resolution of inverse problems using artificial intelligence techniques: an application in neutron spectrometry

    International Nuclear Information System (INIS)

    Castaneda M, V. H.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Leon P, A. A.; Hernandez P, C. F.; Espinoza G, J. G.; Ortiz R, J. M.; Vega C, H. R.; Mendez, R.; Gallego, E.; Sousa L, M. A.

    2016-10-01

    The Taguchi methodology has proved to be highly efficient in solving inverse problems, in which the values of some parameters of the model must be obtained from the observed data. There are intrinsic mathematical characteristics that make a problem known as inverse. Inverse problems appear in many branches of science, engineering and mathematics. To solve this type of problem, researchers have used different techniques. Recently, the use of techniques based on Artificial Intelligence technology has been explored by researchers. This paper presents the use of a software tool based on artificial neural networks of generalized regression in the solution of inverse problems with application in high energy physics, specifically in the solution of the neutron spectrometry problem. To solve this problem we use a software tool developed in the MATLAB programming environment, which provides a friendly, intuitive and easy-to-use interface. This computational tool solves the inverse problem involved in the reconstruction of the neutron spectrum based on measurements made with a Bonner spheres spectrometric system. Given this information, the neural network is able to reconstruct the neutron spectrum with high performance and generalization capability. The tool does not require the end user to have extensive training or technical knowledge in the development and/or use of software, so it facilitates the use of the program for the resolution of inverse problems arising in several areas of knowledge. Artificial Intelligence techniques are particularly well suited to solving inverse problems, given the characteristics of artificial neural networks and their network topology; the tool developed has therefore been very useful, since the results generated by the Artificial Neural Network require little time in comparison with other techniques and agree with the actual data of the experiment. (Author)
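    A generalized regression neural network of the kind mentioned above reduces, at prediction time, to a Gaussian-kernel weighted average of the training spectra. The sketch below is a minimal, hedged illustration (not the described MATLAB tool); `rates`, `spectra` and `measured_rates` are hypothetical arrays of Bonner-sphere count rates and reference spectra.

```python
import numpy as np

def grnn_predict(x, train_x, train_y, sigma=0.1):
    """Nadaraya-Watson / GRNN estimate: kernel-weighted average of training targets."""
    d2 = np.sum((train_x - x) ** 2, axis=1)          # squared distances to stored patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian pattern-layer weights
    return (w @ train_y) / np.sum(w)                 # weighted average spectrum

# spectrum = grnn_predict(measured_rates, rates, spectra, sigma=0.1)
```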

  6. The virtual product-process design laboratory to manage the complexity in the verification of formulated products

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul; Malik, Tahir I.

    2011-01-01

    The properties of complex chemical mixtures need to be predicted. This complexity has to be managed through decomposition of the problem into sub-problems. Each sub-problem is solved and analyzed and, from the knowledge gained, an overall evaluation of the complex chemical system representing the product is made. The virtual Product-Process Design laboratory (virtual PPD-lab) software is based on this decomposition strategy for the design of formulated liquid products. When the needed models are available in the software, the solution of formulation design/verification problems is straightforward, while when models are not available in the software library, they need to be developed and/or implemented. The potential of the virtual PPD-lab in managing the complexity in the verification of formulated products, after the needed models have been developed and implemented, is highlighted in this paper through a case study from industry dealing...

  7. Formulating a problem for functions of the effect in models of atmospheric pollution with parametric consideration of diffusion in the near earth layer

    Energy Technology Data Exchange (ETDEWEB)

    Ganev, K; Yordanov, D

    1983-01-01

    The formulation of a diffusion problem for numerical models, in which the near earth layer of the atmosphere (PSA) is considered parametrically relative to the zones of pollution (the protected zones), which are also located in the near earth layer of the atmosphere, is examined. In modeling atmospheric pollution, the semiempirical equation of turbulent diffusion is the starting point. The results of numerical calculations of the functions of the effect for the city of Sofia are cited (for the case of an even relief and a protected zone lying totally within the region for which the problem is solved). The isolines of the integral function of the effect relative to the near earth layer of the atmosphere above the city of Sofia are cited for different meteorological conditions.

  8. Significance of Strain in Formulation in Theory of Solid Mechanics

    Science.gov (United States)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    2003-01-01

    The basic theory of solid mechanics was deemed complete circa 1860 when St. Venant provided the strain formulation, or field compatibility condition. The strain formulation was, however, incomplete. The missing portion has been formulated and identified as the boundary compatibility condition (BCC). The BCC, derived through a variational formulation, has been verified through an integral theorem and the solution of problems. The BCC, unlike its field counterpart, does not trivialize when expressed in displacements. Navier's method and the stiffness formulation have to account for the extra conditions, especially at the inter-element boundaries in a finite element model. Completion of the strain formulation has led to the revival of the direct force calculation methods: the Integrated Force Method (IFM) and its dual (IFMD) for finite element analysis, and the completed Beltrami-Michell formulation (CBMF) in elasticity. The benefits from the new methods in elasticity, in finite element analysis, and in design optimization are discussed. Existing solutions and computer codes may have to be adjusted for compliance with the new conditions. Complacency, because the discipline is over a century old and computer codes have been developed for half a century, can lead to stagnation of the discipline.

  9. General problems arising from the analogical resolution of the kinetic equations of nuclear reactors (1961); Problemes generaux poses par la resolution analogique des equations cinetiques des reacteurs nucleaires (1961)

    Energy Technology Data Exchange (ETDEWEB)

    Caillet, C [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1961-07-01

    The author reviews precisely the analogue techniques used for the resolution of the kinetic equations of nuclear reactors. Prior to this, he recalls the reasons which oblige physicists and engineers, even today, to use electronic machines in this domain. The author then considers the technological problems posed by the range of values which the various nuclear parameters adopt. In each case, he shows that a compromise is possible allowing an optimum precision. He compares the results to those obtained by arithmetic calculation and uses the chosen examples in a critical analysis of the present possibilities of the two methods of calculation. (author)

  10. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    International Nuclear Information System (INIS)

    Yao Dezhong; He Bin

    2003-01-01

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping

  11. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yao Dezhong [School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu City, 610054, Sichuan Province (China); He Bin [The University of Illinois at Chicago, IL (United States)

    2003-11-07

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping.

  12. High-Resolution Spore Coat Architecture and Assembly of Bacillus Spores

    Energy Technology Data Exchange (ETDEWEB)

    Malkin, A J; Elhadj, S; Plomp, M

    2011-03-14

    Elucidating the molecular architecture of bacterial and cellular surfaces and its structural dynamics is essential to understanding mechanisms of pathogenesis, immune response, physicochemical interactions and environmental resistance, and provides the means for identifying spore formulation and processing attributes. I will discuss the application of in vitro atomic force microscopy (AFM) for studies of the high-resolution coat architecture and assembly of several Bacillus spore species. We have demonstrated that bacterial spore coat structures are phylogenetically and growth-medium determined. We have proposed that the strikingly different species-dependent coat structures of bacterial spore species are a consequence of sporulation-media-dependent nucleation and crystallization mechanisms that regulate the assembly of the outer spore coat. Spore coat layers were found to exhibit screw dislocations and two-dimensional nuclei typically observed on inorganic and macromolecular crystals. This presents the first case of non-mineral crystal growth patterns being revealed for a biological organism, which provides an unexpected example of nature exploiting fundamental materials science mechanisms for the morphogenetic control of biological ultrastructures. We have discovered and validated distinctive formulation-specific high-resolution structural spore coat and dimensional signatures of B. anthracis spores (Sterne strain) grown in different formulation conditions. We further demonstrated that measurement of the dimensional characteristics of B. anthracis spores provides formulation classification and sample matching with high sensitivity and specificity. I will present data on the development of an AFM-based immunolabeling technique for the proteomic mapping of macromolecular structures on B. anthracis surfaces. These studies demonstrate that AFM can probe microbial surface architecture, environmental dynamics and the life cycle of bacterial and cellular systems at near

  13. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    Science.gov (United States)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing to reduce the number of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints on the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, numbers of shots and super-resolution factors.

  14. A Location-Inventory-Routing Problem in Forward and Reverse Logistics Network Design

    Directory of Open Access Journals (Sweden)

    Qunli Yuchi

    2016-01-01

    We study a new problem of location-inventory-routing in forward and reverse logistics (LIRP-FRL) network design, which simultaneously integrates the location decisions of distribution centers (DCs), the inventory policies of opened DCs, and the vehicle routing decisions in serving customers, in which new goods are produced and damaged goods are repaired by a manufacturer and then returned to the market to satisfy customers' demands as new ones. Our objective is to minimize the total costs of manufacturing and remanufacturing goods, building DCs, shipping goods (new or recovered) between the manufacturer and opened DCs, and distributing new or recovered goods to customers, together with the ordering and storage costs of goods. A nonlinear integer programming model is proposed to formulate the LIRP-FRL. A new tabu search (NTS) algorithm is developed to achieve near-optimal solutions of the problem. Numerical experiments on benchmark instances of a simplified version of the LIRP-FRL, the capacitated location routing problem, and on randomly generated LIRP-FRL instances demonstrate the effectiveness and efficiency of the proposed NTS algorithm in problem resolution.

  15. Static And Kinematic Formulation Of Planar Reciprocal Assemblies

    DEFF Research Database (Denmark)

    Parigi, Dario; Kirkegaard, Poul Henning

    2013-01-01

    Planar reciprocal frames are two dimensional structures formed by elements joined together according to the principle of structural reciprocity. In this paper a rigorous formulation of the static and kinematic problem is proposed and developed by extending the work on pin-jointed assemblies by Pe...

  16. Retaining nurses through conflict resolution. Training staff to confront problems and communicate openly can improve the work climate.

    Science.gov (United States)

    Fowler, A R; Bushardt, S C; Jones, M A

    1993-06-01

    The way nurses resolve conflict may be leading them to quit their jobs or leave the profession altogether. Conflict is inevitable in a dynamic organization. What is important is not to avoid conflict but to seek its resolution in a constructive manner. Organizational conflict is typically resolved through one of five strategies: withdrawal, force, conciliation, compromise, or confrontation. A recent study of nurses in three different hospitals showed that the approach they use most is withdrawal. This might manifest itself in a request to change shifts or assignments and may lead to a job change and, eventually, abandonment of the field altogether. Given this scenario, changing nurses' conflict resolution style may help administrators combat the nursing shortage. Healthcare organizations must examine themselves to determine why nurses so frequently use withdrawal; then they must restructure work relationships as needed. Next, organizations need to increase nurses' awareness of the problem and train them to use a resolution style more conducive to building stable relationships: confrontation. Staff should also be trained in effective communications skills to develop trust and openness in their relationships.

  17. Langevin formulation of quantum dynamics

    International Nuclear Information System (INIS)

    Roncadelli, M.

    1989-03-01

    We first show that nonrelativistic quantum mechanics formulated at imaginary ħ can formally be viewed as the Fokker-Planck description of a frictionless Brownian motion, which occurs (in general) in an absorbing medium. We next offer a new formulation of quantum mechanics, which is basically the Langevin treatment of this Brownian motion. Explicitly, we derive a noise-average representation for the transition probability W(X'',t''|X',t') in terms of the solutions to a Langevin equation with a Gaussian white noise. Upon analytic continuation back to real ħ, W(X'',t''|X',t') becomes the propagator of the original Schroedinger equation. Our approach allows for a straightforward application to quantum dynamical problems of the mathematical techniques of classical stochastic processes. Moreover, computer simulations of quantum mechanical systems can be carried out by using numerical programs based on the Langevin dynamics. (author). 19 refs, 1 tab
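    Schematically (a hedged sketch of the structure described above, with the precise drift and noise normalization left to the paper), the construction pairs a Langevin equation driven by Gaussian white noise with a noise average over its trajectories:

\[
  \dot{x}(t) = b\big(x(t),t\big) + \eta(t), \qquad
  \langle \eta(t)\rangle = 0, \qquad
  \langle \eta(t)\,\eta(t')\rangle \propto \hbar\,\delta(t-t'),
\]

\[
  W(X'',t''\,|\,X',t') = \Big\langle\, \delta\big(x(t'') - X''\big) \Big\rangle_{\eta}, \qquad x(t') = X'.
\]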

  18. Separation of pigment formulations by high-performance thin-layer chromatography with automated multiple development.

    Science.gov (United States)

    Stiefel, Constanze; Dietzel, Sylvia; Endress, Marc; Morlock, Gertrud E

    2016-09-02

    Food packaging is designed to provide sufficient protection for the respective filling, legally binding information for the consumers like nutritional facts or filling information, and an attractive appearance to promote the sale. For quality and safety of the package, a regular quality control of the used printing materials is necessary to get consistently good print results, to avoid migration of undesired ink components into the food and to identify potentially faulty ink batches. Analytical approaches, however, have hardly been considered for quality assurance so far due to the lack of robust, suitable methods for the analysis of rarely soluble pigment formulations. Thus, a simple and generic high-performance thin-layer chromatography (HPTLC) method for the separation of different colored pigment formulations was developed on HPTLC plates silica gel 60 by automated multiple development. The gradient system provided a sharp resolution for differently soluble pigment constituents like additives and coating materials. The results of multi-detection allowed a first assignment of the differently detectable bands to particular chemical substance classes (e.g., lipophilic components), enabled the comparison of different commercially available pigment batches and revealed substantial variations in the composition of the batches. Hyphenation of HPTLC with high resolution mass spectrometry and infrared spectroscopy allowed the characterization of single unknown pigment constituents, which may partly be responsible for known quality problems during printing. The newly developed, precise and selective HPTLC method can be used as part of routine quality control for both, incoming pigment batches and monitoring of internal pigment production processes, to secure a consistent pigment composition resulting in consistent ink quality, a faultless print image and safe products. Hyphenation of HPTLC with the A. fischeri bioassay gave first information on the bioactivity or rather

  19. An analytical approach for a nodal formulation of a two-dimensional fixed-source neutron transport problem in heterogeneous medium

    Energy Technology Data Exchange (ETDEWEB)

    Basso Barichello, Liliane; Dias da Cunha, Rudnei [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst. de Matematica; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada

    2015-05-15

    A nodal formulation of a fixed-source two-dimensional neutron transport problem, in Cartesian geometry, defined in a heterogeneous medium, is solved by an analytical approach. Explicit expressions, in terms of the spatial variables, are derived for averaged fluxes in each region in which the domain is subdivided. The procedure is an extension of an analytical discrete ordinates method, the ADO method, for the solution of the two-dimensional homogeneous medium case. The scheme is developed from the discrete ordinates version of the two-dimensional transport equation along with the level symmetric quadrature scheme. As usual for nodal schemes, relations between the averaged fluxes and the unknown angular fluxes at the contours are introduced as auxiliary equations. Numerical results are in agreement with results available in the literature.

  20. STATEMENT OF THE OPTIMIZATION PROBLEM OF CARBON PRODUCTS PRODUCTION

    Directory of Open Access Journals (Sweden)

    O. A. Zhuchenko

    2016-08-01

    The paper formulates the optimization problem for the production of carbon products. The author analyses the technical and economic parameters that can be used to optimize the production of carbonaceous products. Several technical and economic indicators are used to evaluate the efficiency of this energy-intensive production, in particular the specific cost, productivity, income and profitability of production. Based on a detailed analysis, an optimality criterion is formulated that takes into account the technological components of profitability. The components of the criterion are described in detail, and a non-trivial method is proposed for calculating one of them - the production cost of each product. When solving the optimization problem for the technological modes of production, constraints on the optimized variables are taken into account; for example, restrictions may be imposed on the quantity of each product produced. A method for calculating the cost per unit of product is formulated. Attention is also paid to the quality indices of the finished products as an additional constraint in the optimization problem. As a result, the general problem of optimizing the production of carbon products, which includes the optimality criterion and the restrictions, is formulated.
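    A toy sketch of the kind of criterion and constraints described above (all numbers and product data are invented for illustration; the paper's actual cost model is more detailed): maximize total profit over the product quantities subject to per-product bounds and a shared capacity limit.

```python
from scipy.optimize import linprog

price = [120.0, 95.0, 60.0]        # revenue per unit of each product (assumed)
unit_cost = [80.0, 70.0, 35.0]     # production cost per unit (assumed)
capacity_use = [1.0, 0.8, 0.5]     # furnace hours needed per unit (assumed)
total_capacity = 1000.0

c = [-(p - uc) for p, uc in zip(price, unit_cost)]   # linprog minimizes, so negate profit
res = linprog(c, A_ub=[capacity_use], b_ub=[total_capacity],
              bounds=[(0, 600), (0, 800), (0, 1500)])
print(res.x, -res.fun)             # optimal quantities and the resulting profit
```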

  1. A formulation to analyze system-of-systems problems: A case study of airport metroplex operations

    Science.gov (United States)

    Ayyalasomayajula, Sricharan Kishore

    A system-of-systems (SoS) can be described as a collection of multiple, heterogeneous, distributed, independent components interacting to achieve a range of objectives. A generic formulation was developed to model component interactions in an SoS to understand their influence on overall SoS performance. The formulation employs a lexicon to aggregate components into hierarchical interaction networks and understand how their topological properties affect the performance of the aggregations. Overall SoS performance is evaluated by monitoring the changes in stakeholder profitability due to changes in component interactions. The formulation was applied to a case study in air transportation focusing on operations at airport metroplexes. Metroplexes are geographical regions with two or more airports in close proximity to one another. The case study explored how metroplex airports interact with one another, what dependencies drive these interactions, and how these dependencies affect metroplex throughput and capacity. Metrics were developed to quantify runway dependencies at a metroplex and were correlated with its throughput and capacity. Operations at the New York/New Jersey metroplex (NYNJ) airports were simulated to explore the feasibility of operating very large aircraft (VLA), such as the Airbus A380, as a delay-mitigation strategy at these airports. The proposed formulation was employed to analyze the impact of this strategy on different stakeholders in the national air transportation system (ATS), such as airlines and airports. The analysis results and their implications were used to compare the pros and cons of operating VLAs at NYNJ from the perspectives of airline profitability, and flight delays at NYNJ and across the ATS.

  2. A generalized transport-velocity formulation for smoothed particle hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chi; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A.

    2017-05-15

    The standard smoothed particle hydrodynamics (SPH) method suffers from tensile instability. In fluid-dynamics simulations this instability leads to particle clumping and void regions when negative pressure occurs. In solid-dynamics simulations, it results in unphysical structure fragmentation. In this work the transport-velocity formulation of Adami et al. (2013) is generalized to provide a solution to this long-standing problem. Rather than imposing a global background pressure, a variable background pressure is used to modify the particle transport velocity and eliminate the tensile instability completely. Furthermore, this modification is localized by defining a shortened smoothing length. The generalized formulation is suitable for fluid and solid materials with and without free surfaces. The results of extensive numerical tests on both fluid and solid dynamics problems indicate that the new method provides a unified approach for multi-physics SPH simulations.
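    Schematically, in the original constant-background-pressure version (quoted here from memory as a hedged sketch, not from the abstract), the transport velocity of particle i is advanced with an extra acceleration of the same form as the pressure force but driven by the background pressure p_b:

\[
  \frac{d\tilde{\mathbf{v}}_{i}}{dt} \;=\; \frac{d\mathbf{v}_{i}}{dt}
  \;-\; \frac{p_{b}}{m_{i}} \sum_{j}\big(V_{i}^{2} + V_{j}^{2}\big)\,\nabla_{i}W_{ij},
\]

    with V = m/\rho the particle volume and W the smoothing kernel; the generalization described above replaces the global constant p_b by a variable background pressure and evaluates the correction with a shortened smoothing length.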

  3. Segmentation of High Angular Resolution Diffusion MRI using Sparse Riemannian Manifold Clustering

    Science.gov (United States)

    Wright, Margaret J.; Thompson, Paul M.; Vidal, René

    2015-01-01

    We address the problem of segmenting high angular resolution diffusion imaging (HARDI) data into multiple regions (or fiber tracts) with distinct diffusion properties. We use the orientation distribution function (ODF) to represent HARDI data and cast the problem as a clustering problem in the space of ODFs. Our approach integrates tools from sparse representation theory and Riemannian geometry into a graph theoretic segmentation framework. By exploiting the Riemannian properties of the space of ODFs, we learn a sparse representation for each ODF and infer the segmentation by applying spectral clustering to a similarity matrix built from these representations. In cases where regions with similar (resp. distinct) diffusion properties belong to different (resp. same) fiber tracts, we obtain the segmentation by incorporating spatial and user-specified pairwise relationships into the formulation. Experiments on synthetic data evaluate the sensitivity of our method to image noise and the presence of complex fiber configurations, and show its superior performance compared to alternative segmentation methods. Experiments on phantom and real data demonstrate the accuracy of the proposed method in segmenting simulated fibers, as well as white matter fiber tracts of clinical importance in the human brain. PMID:24108748
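    As a hedged sketch of the final step described above (the sparse-coding construction of the similarity matrix is not shown), spectral clustering of a precomputed similarity matrix can be carried out as follows; W is a hypothetical symmetric, nonnegative (n_voxels x n_voxels) array.

```python
from sklearn.cluster import SpectralClustering

def segment_from_similarity(W, n_regions):
    """Cluster voxels using a precomputed (symmetric, nonnegative) similarity matrix W."""
    model = SpectralClustering(n_clusters=n_regions,
                               affinity="precomputed",
                               assign_labels="kmeans")
    return model.fit_predict(W)

# labels = segment_from_similarity(W, n_regions=3)   # W built from the sparse ODF codes
```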

  4. Formulation, in vitro and in vivo evaluation of transdermal patches containing risperidone.

    Science.gov (United States)

    Aggarwal, Geeta; Dhawan, Sanju; Hari Kumar, S L

    2013-01-01

    The efficacy of oral risperidone treatment in the prevention of schizophrenia is well known. However, oral side effects and patient compliance are always a problem for schizophrenics. In this study, risperidone was formulated into matrix transdermal patches to overcome these problems. The formulation factors for such patches, including eudragit RL 100 and eudragit RS 100 as matrix-forming polymers, olive oil, groundnut oil and jojoba oil in different concentrations as enhancers, and the amount of drug loaded, were investigated. The transdermal patches containing risperidone were prepared by the solvent casting method and characterized by physicochemical and in vitro permeation studies through excised rat skin. Among the tested preparations, the formulation with 20% risperidone, 3:2 ERL 100 and ERS 100 as polymers, and a mixture of olive oil and jojoba oil as enhancer exhibited the greatest cumulative amount of drug permeated (1.87 ± 0.09 mg/cm(2)) in 72 h, so batch ROJ was selected as the optimized formulation and assessed for pharmacokinetic, pharmacodynamic and skin irritation potential. The pharmacokinetic characteristics of the optimized risperidone patch were determined using rabbits, while orally administered risperidone in solution was used for comparison. The calculated relative bioavailability of the risperidone transdermal patch was 115.20%, with prolonged release of the drug. The neuroleptic efficacy of the transdermal formulation was assessed by rota-rod and grip tests in comparison with control and marketed oral formulations, with no skin irritation observed. This suggests that transdermal application of risperidone holds promise for improved bioavailability and better management of schizophrenia on a long-term basis.

  5. From a Nonlinear, Nonconvex Variational Problem to a Linear, Convex Formulation

    International Nuclear Information System (INIS)

    Egozcue, J.; Meziat, R.; Pedregal, P.

    2002-01-01

    We propose a general approach to deal with nonlinear, nonconvex variational problems based on a reformulation of the problem resulting in an optimization problem with linear cost functional and convex constraints. As a first step we explicitly explore these ideas to some one-dimensional variational problems and obtain specific conclusions of an analytical and numerical nature

  6. Single-particle Schroedinger fluid. I. Formulation

    International Nuclear Information System (INIS)

    Kan, K.K.; Griffin, J.J.

    1976-01-01

    The problem of a single quantal particle moving in a time-dependent external potential well is formulated specifically to emphasize and develop the fluid dynamical aspects of the matter flow. This idealized problem, the single-particle Schroedinger fluid, is shown to exhibit already a remarkably rich variety of fluid dynamical features, including compressible flow and line vortices. It provides also a sufficient framework to encompass simultaneously various simplified fluidic models for nuclei which have earlier been postulated on an ad hoc basis, and to illuminate their underlying restrictions. Explicit solutions of the single-particle Schroedinger fluid problem are studied in the adiabatic limit for their mathematical and physical implications (especially regarding the collective kinetic energy). The basic generalizations for extension of the treatment to the many-body Schroedinger fluid are set forth
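    For orientation, the fluid-dynamical reading of the single-particle wavefunction rests on the standard Madelung decomposition (quoted here in its textbook form, not from the abstract): writing

\[
  \psi(\mathbf{r},t) = \sqrt{\rho(\mathbf{r},t)}\; e^{\,iS(\mathbf{r},t)/\hbar}, \qquad \mathbf{v} = \frac{\nabla S}{m},
\]

    the Schroedinger equation splits into a continuity equation and an Euler-like equation with a quantum potential,

\[
  \partial_{t}\rho + \nabla\!\cdot\!(\rho\mathbf{v}) = 0, \qquad
  m\left(\partial_{t} + \mathbf{v}\cdot\nabla\right)\mathbf{v}
  = -\nabla\!\left( V - \frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}} \right),
\]

    which is the compressible, potential-flow picture referred to above; line vortices arise where \rho vanishes and S is multivalued.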

  7. Resolution analyses for selecting an appropriate airborne electromagnetic (AEM) system

    DEFF Research Database (Denmark)

    Christensen, N.B.; Lawrie, Ken

    2012-01-01

    is necessary and has to be approached in a pragmatic way involving a range of different aspects. In this paper, we concentrate on the resolution analysis perspective. One approach assesses ... resolution for a series of models relevant to the survey area by comparing the sum over the data of squares of noise-normalised derivatives. We compare this analysis method with a resolution analysis based on the posterior covariance matrix of an inversion formulation. Both of the above analyses depend ... We demonstrate that the inversion analysis must be preferred over the derivative analysis because it takes parameter coupling into account and, furthermore, that the derivative analysis generally overestimates the resolution capability. Finally, we show that impulse response data are to be preferred over step response data for near-surface resolution.

  8. Limitations on energy resolution of segmented silicon detectors

    Science.gov (United States)

    Wiącek, P.; Chudyba, M.; Fiutowski, T.; Dąbrowski, W.

    2018-04-01

    In this paper, an experimental study of charge division effects and of the energy resolution of X-ray silicon pad detectors is presented. Measurements of the electrical parameters, capacitances and leakage currents, for six different layouts of pad arrays are reported. The X-ray spectra have been measured using custom-developed, dedicated low-noise front-end electronics. The spectra measured for the six different detector layouts have been analysed in detail, with particular emphasis on quantitative evaluation of charge division effects. The main components of the energy resolution, due to Fano fluctuations, electronic noise and charge division, have been estimated for the six different sensor layouts. General recommendations regarding optimisation of the pad sensor layout for achieving the best possible energy resolution have been formulated.
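    For orientation (standard detector-physics estimates, hedged as not quoted from the paper), the first two contributions are usually combined in quadrature as

\[
  \Delta E_{\mathrm{FWHM}}^{2} \;\simeq\; 2.355^{2}\left( \varepsilon\,F\,E \;+\; \varepsilon^{2}\,\mathrm{ENC}^{2} \right) \;+\; \Delta E_{\mathrm{cd}}^{2},
\]

    with \varepsilon \approx 3.6 eV per electron-hole pair in silicon, F \approx 0.1 the Fano factor, ENC the equivalent noise charge of the front-end in electrons, and \Delta E_{\mathrm{cd}} denoting the additional, layout-dependent charge-division term studied in the paper.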

  9. Application of Force and Energy Approaches to the Problem of a One-Dimensional, Fully Connected, Nonlinear-Spring Lattice Structure

    Science.gov (United States)

    2015-09-01

    ... section of the body. In general, this force balancing requires vectorial addition; however, because the problem under consideration is a 1-D lattice ... than 1, the formulations would be still more intricate, as vectorial calculations would be required for component resolution. In the force approach ...

  10. Formulating natural based cosmetic product - irradiated herbal lip balm

    International Nuclear Information System (INIS)

    Seri Chempaka Mohd Yusof; Ros Anita Ahmad Ramli; Foziah Ali; Zainab Harun

    2007-01-01

    Herbal lip balm was formulated in an effort to produce a safe, attractive product with multifunctional uses, i.e. preventing chapped lips, reducing mouth odour and helping to improve health quality. The problems faced in constructing formulations of herbal lip balm centred on the extraction of the anthocyanins, the stability of the pigments in the formulations and changes of colour during irradiation for the sterilization of the herbal lip balm. The natural pigment anthocyanin, used as a colouring agent in the herbal lip balm, was obtained from various herbs and vegetables, i.e. Hibiscus sabdariffa L. (roselle), Brassica oleracea var. capitata f. rubra (red cabbage) and Daucus carota (carrot). A water-based extraction method was used to extract the anthocyanins. The incorporation of honey in the formulations improved the colour of the lip balm. The use of a plant-based ingredient, cocoa butter, substituting the usual base ingredient, petroleum jelly, also affected the colour of the herbal lip balm. Irradiation at 2.5, 5.0 and 10 kGy was carried out to preserve the product and reduce the microbial load of the herbal lip balm, and changes in colour were observed in formulations irradiated at 10 kGy. (Author)

  11. Proposition of a method to formulate idea and innovation problems

    Directory of Open Access Journals (Sweden)

    Stéphane GORIA

    2010-01-01

    Nowadays, innovation is a major stake for the development and durability of many companies' and institutions' activities. In our work, we are interested in interpersonal communication problems in an innovation context. We focus on the upstream phase of the innovation process and consider the means that can be used, first, to leverage an organisation's innovation perspective and, second, to solve an innovation problem proposed by a strategic decision maker. We believe that the people charged with solving an innovation problem must manage several difficulties: identifying what can count as an innovation for the decision maker, defining the objects with high innovation potential, communicating to possible partners the innovation field to investigate, finding ideas to innovate, etc. We then try to identify and present these ideas in order to contribute to the decision-making process. At a time when the Web is truly a universal source giving access to formidable quantities of information, we use it to establish sources of creativity and ideas for innovation. This paper presents a method for solving these communication problems and a toolbox whose potential is illustrated by an example concerning the management of a "new chair" development problem.

  12. Reputation mechanism: From resolution for truthful online auctions to the model of optimal one-gambler problem

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory

    2009-01-01

    In this paper we study reputation mechanisms, and show how the notion of reputation can help us in building truthful online auction mechanisms. From the mechanism design perspective, we derive the conditions for, and design, a truthful online auction mechanism. Moreover, in the case when some agents may lie or cannot have real knowledge of the other agents' reputations, we derive the resolution of the auction such that the mechanism is truthful. Consequently, we move forward to the optimal one-gambler/one-seller problem, and explain how that problem is a refinement of the previously discussed online auction design in the presence of a reputation mechanism. In the setting of the optimal one-gambler problem, we naturally raise and solve the specific question: what is an agent's optimal strategy in order to maximize his revenue? We would like to stress that our analysis goes beyond the scope which game theory usually discusses under the notion of reputation. We model one-player games by introducing a new parameter (reputation), which helps us in predicting the agent's behavior in real-world situations, such as the behavior of a gambler, a real-estate dealer, etc.

  13. Mathematical Model and Algorithm for the Reefer Mechanic Scheduling Problem at Seaports

    Directory of Open Access Journals (Sweden)

    Jiantong Zhang

    2017-01-01

    With the development of seaborne logistics, the international trade of goods transported in refrigerated containers is growing fast. Refrigerated containers, also known as reefers, are used in the transportation of temperature-sensitive cargo, such as perishable fruits. This trend brings new challenges to terminal managers, that is, how to efficiently arrange mechanics to plug and unplug power for the reefers (i.e., tasks at yards). This work investigates the reefer mechanic scheduling problem at container ports. To minimize the sum of the total tardiness of all tasks and the total working distance of all mechanics, we formulate a mathematical model. For the resolution of this problem, we propose a differential evolution (DE) algorithm combined with efficient heuristics, local search strategies, and a parameter adaptation scheme. The proposed algorithm is tested and validated through numerical experiments. Computational results demonstrate the effectiveness and efficiency of the proposed algorithm.
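    The canonical DE/rand/1/bin operators behind such an algorithm are sketched below (a generic, hedged illustration: the paper encodes schedules and adds heuristics, local search and parameter adaptation on top, none of which is shown here).

```python
import numpy as np

def de_minimize(cost, bounds, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                    # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])            # binomial crossover
            f_trial = cost(trial)
            if f_trial <= fitness[i]:                          # greedy selection
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmin(fitness)], fitness.min()

# Example on a continuous toy objective (illustrative only, not a schedule encoding).
best_x, best_f = de_minimize(lambda x: np.sum((x - 0.3) ** 2), bounds=[(0, 1)] * 5)
```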

  14. The nonlinear dynamics of family problem solving in adolescence: the predictive validity of a peaceful resolution attractor.

    Science.gov (United States)

    Dishion, Thomas J; Forgatch, Marion; Van Ryzin, Mark; Winter, Charlotte

    2012-07-01

    In this study we examined the videotaped family interactions of a community sample of adolescents and their parents. Youths were assessed in early to late adolescence on their levels of antisocial behavior. At age 16-17, youths and their parents were videotaped interacting while completing a variety of tasks, including family problem solving. The interactions were coded and compared for three developmental patterns of antisocial behavior: early onset, persistent; adolescence onset; and typically developing. The mean duration of conflict bouts was the only interaction pattern that discriminated the 3 groups. In the prediction of future antisocial behavior, parent and youth reports of transition entropy and conflict resolution interacted to account for antisocial behavior at age 18-19. Families with low entropy and peaceful resolutions predicted low levels of youth antisocial behavior at age 18-19. These findings suggest the need to study both attractors and repellers to understand family dynamics associated with health and social and emotional development.

  15. Volumetric formulation of lattice Boltzmann models with energy conservation

    OpenAIRE

    Sbragaglia, M.; Sugiyama, K.

    2010-01-01

    We analyze a volumetric formulation of lattice Boltzmann for compressible thermal fluid flows. The velocity set is chosen with the desired accuracy, based on the Gauss-Hermite quadrature procedure, and tested against controlled problems in bounded and unbounded fluids. The method allows the simulation of thermohydrodynamical problems without the need to preserve the exact space-filling nature of the velocity set, while still ensuring the exact conservation laws for density, momentum and energy. ...

  16. Solving the problem of imaging resolution: stochastic multi-scale image fusion

    Science.gov (United States)

    Karsanina, Marina; Mallants, Dirk; Gilyazetdinova, Dina; Gerke, Kiril

    2016-04-01

    Structural features of porous materials define the majority of its physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, gas exchange between biologically active soil root zone and atmosphere, etc.) and solute transport. To characterize soil and rock microstructure X-ray microtomography is extremely useful. However, as any other imaging technique, this one also has a significant drawback - a trade-off between sample size and resolution. The latter is a significant problem for multi-scale complex structures, especially such as soils and carbonates. Other imaging techniques, for example, SEM/FIB-SEM or X-ray macrotomography can be helpful in obtaining higher resolution or wider field of view. The ultimate goal is to create a single dataset containing information from all scales or to characterize such multi-scale structure. In this contribution we demonstrate a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images representing macro, micro and nanoscale spatial information on porous media structure. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Potential practical applications of this method are abundant in soil science, hydrology and petroleum engineering, as well as other geosciences. This work was partially supported by RSF grant 14-17-00658 (X-ray microtomography study of shale
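    One common ingredient of the correlation-function-based reconstructions mentioned above is the two-point probability function S2 of a segmented (binary) image; under a periodic-boundary assumption it can be computed by FFT autocorrelation, as in this hedged sketch (synthetic data, not the authors' code).

```python
import numpy as np

def two_point_probability(img):
    """S2 as a 2-D map: probability that two points separated by r both lie in phase 1."""
    phase = (img > 0).astype(float)
    f = np.fft.fftn(phase)
    s2 = np.real(np.fft.ifftn(f * np.conj(f))) / phase.size
    return s2

img = (np.random.rand(128, 128) > 0.7).astype(int)   # synthetic binary (pore/solid) image
s2 = two_point_probability(img)
print(s2[0, 0])   # equals the phase-1 volume fraction, as it should
```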

  17. Initial value formulation of higher derivative gravity

    International Nuclear Information System (INIS)

    Noakes, D.R.

    1983-01-01

    The initial value problem is considered for the conformally coupled scalar field and higher derivative gravity, by expressing the equations of each theory in harmonic coordinates. For each theory it is shown that the (vacuum) equations can take the form of a diagonal hyperbolic system with constraints on the initial data. Consequently these theories possess well-posed initial value formulations

  18. Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming

    OpenAIRE

    S. Fanati Rashidi; A. A. Noora

    2010-01-01

    Using the concept of possibility proposed by Zadeh, Luhandjula ([4,8]) and Buckley ([1]) have proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee ([7]) used the Negi approach to solve a minimum cost flow problem with fuzzy costs and upper and lower bounds. ...

  19. Topology optimization for acoustic-structure interaction problems

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Jensen, Jakob Søndergaard; Sigmund, Ole

    2006-01-01

    We propose a gradient based topology optimization algorithm for acoustic-structure (vibro-acoustic) interaction problems without an explicit interfacing boundary representation. In acoustic-structure interaction problems, the pressure field and the displacement field are governed by the Helmholtz...... to subdomain interfaces evolving during the optimization process. In this paper, we propose to use a mixed finite element formulation with displacements and pressure as primary variables (u/p formulation) which eliminates the need for explicit boundary representation. In order to describe the Helmholtz......-dimensional acoustic-structure interaction problems are optimized to show the validity of the proposed method....

  20. Nuclear waste disposal: Can there be a resolution? Past problems and future solutions

    Energy Technology Data Exchange (ETDEWEB)

    Ahearne, J [Scientific Research Society, Sigma Xi, Research Triangle Park, NC (United States)

    1990-07-01

    Why does the high level waste problem have to be solved now? There are perhaps three answers to that question. First, to have a recovery of nuclear power. But a lack of resolution of the high level waste problem is not the principal reason that nuclear power has foundered and, consequently, solving it will not automatically revive nuclear power. However, if the nuclear industry is adamantly convinced that this is the key to reviving nuclear power, then the nuclear industry should demonstrate its conviction by putting much greater effort into resolving the high level waste problem technically, not through public relations. For example, a substantial effort on the actinide burning approach might demonstrate, in the old American phrase, 'putting your money where your mouth is'. Second, the high level waste problem must be solved now because it is a devil's brew. However, chemical wastes last longer, as we all know, than do the radioactive wastes. As one expert has noted: 'There is real risk in nuclear power, just as there is real risk in coal power.... For some of [these risks], like the greenhouse effect, the potential damage is devastating. While for others, like nuclear accidents, the risk is limited, but imaginations are not. For still others, like the risk posed by a high-level waste repository, there is essentially nothing outside the imagination of the gullible.' Furthermore, any technical solution or any solution to a risky problem requires one to think carefully. It is often better to do it right than quickly. A third reason for requiring it to be solved right now is that HLW disposal is a major technical problem blocking a potentially valuable energy source. But we need a new solution. The current solutions are not working. I believe that we ought to recognize the failure of the geologic repository approach. I believe the federal government should identify, with industry's assistance, the best techniques for surface storage. Some federal locations should be

  1. Nuclear waste disposal: Can there be a resolution? Past problems and future solutions

    International Nuclear Information System (INIS)

    Ahearne, J.

    1990-01-01

    Why does the high level waste problem have to be solved now? There are perhaps three answers to that question. First, to have a recovery of nuclear power. But a lack of resolution of the high level waste problem is not the principal reason that nuclear power has foundered and, consequently, solving it will not automatically revive nuclear power. However, if the nuclear industry is adamantly convinced that this is the key to reviving nuclear power, then the nuclear industry should demonstrate its conviction by putting much greater effort into resolving the high level waste problem technically, not through public relations. For example, a substantial effort on the actinide burning approach might demonstrate, in the old American phrase, 'putting your money where your mouth is'. Second, the high level waste problem must be solved now because it is a devil's brew. However, chemical wastes last longer, as we all know, than do the radioactive wastes. As one expert has noted: 'There is real risk in nuclear power, just as there is real risk in coal power.... For some of [these risks], like the greenhouse effect, the potential damage is devastating. While for others, like nuclear accidents, the risk is limited, but imaginations are not. For still others, like the risk posed by a high-level waste repository, there is essentially nothing outside the imagination of the gullible.' Furthermore, any technical solution or any solution to a risky problem requires one to think carefully. It is often better to do it right than quickly. A third reason for requiring it to be solved right now is that HLW disposal is a major technical problem blocking a potentially valuable energy source. But we need a new solution. The current solutions are not working. I believe that we ought to recognize the failure of the geologic repository approach. I believe the federal government should identify, with industry's assistance, the best techniques for surface storage. Some federal locations should be

  2. Dual and primal mixed Petrov-Galerkin finite element methods in heat transfer problems

    International Nuclear Information System (INIS)

    Loula, A.F.D.; Toledo, E.M.

    1988-12-01

    New mixed finite element formulations for the steady state heat transfer problem are presented with no limitation in the choice of conforming finite element spaces. By adding least-squares residual forms of the governing equations to the classical Galerkin formulation, the original saddle point problem is transformed into a minimization problem. Stability analysis, error estimates and numerical results are presented, confirming the error estimates and the good performance of this new formulation. (author) [pt

  3. A nodally condensed SUPG formulation for free-surface computation of steady-state flows constrained by unilateral contact - Application to rolling

    Science.gov (United States)

    Arora, Shitij; Fourment, Lionel

    2018-05-01

    In the context of the simulation of industrial hot forming processes, the resultant time-dependent thermo-mechanical multi-field problem (v, p, σ, ε) can be sped up by 10-50 times using steady-state methods when compared to the conventional incremental methods. Though steady-state techniques have been used in the past, it was only on simple configurations and with structured meshes, whereas modern problems involve complex configurations, unstructured meshes and parallel computing. These methods remove time dependency from the equations, but introduce an additional unknown into the problem: the steady-state shape. This steady-state shape x can be computed as a geometric correction t on the domain X by solving the weak form of the steady-state equation v·n(t) = 0 using a Streamline Upwind Petrov Galerkin (SUPG) formulation. There exists a strong coupling between the domain shape and the material flow, hence, a two-step fixed point iterative resolution algorithm was proposed that involves (1) the computation of the flow field from the resolution of thermo-mechanical equations on a prescribed domain shape and (2) the computation of the steady-state shape for an assumed velocity field. The contact equations are introduced in the penalty form both during the flow computation as well as during the free-surface correction. The fact that the contact description is inhomogeneous, i.e., it is defined in the nodal form in the former, and in the weighted residual form in the latter, is assumed to be critical to the convergence of certain problems. Thus, the notion of nodal collocation is invoked in the weak form of the surface correction equation to homogenize the contact coupling. The surface correction algorithm is tested on certain analytical test cases and the contact coupling is tested with some hot rolling problems.

  4. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    This thesis deals with topology optimisation for coupled convection problems. The aim is to extend and apply topology optimisation to steady-state conjugate heat transfer problems, where the heat conduction equation governs the heat transfer in a solid and is coupled to thermal transport...... in a surrounding fluid, governed by a convection-diffusion equation, where the convective velocity field is found from solving the isothermal incompressible steady-state Navier-Stokes equations. Topology optimisation is also applied to steady-state natural convection problems. The modelling is done using stabilised...... finite elements, the formulation and implementation of which was done partly during a special course as preparatory work for this thesis. The formulation is extended with a Brinkman friction term in order to facilitate the topology optimisation of fluid flow and convective cooling problems. The derived...

  5. Energy policy formulation and energy administration in South Africa

    International Nuclear Information System (INIS)

    Du Plessis, S.J.P.

    1983-01-01

    The evolution of governmental energy administrative mechanisms is discussed. Energy policy formulation and the role of the Department of Mineral and Energy Affairs in this regard are outlined. The energy administrative process, with reference to various energy carriers and specific spheres of the South African energy economy, is discussed. It is indicated that close co-operation between the public and private energy sectors should result in mutual understanding of each other's practical problems and objectives, and should contribute towards the process of judicious energy policy formulation and administration in the interests of the national well-being

  6. High resolution solar observations

    International Nuclear Information System (INIS)

    Title, A.

    1985-01-01

    Currently there is a world-wide effort to develop optical technology required for large diffraction limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high resolution solar telescopes both on the ground and in space. When looking at the problem of high resolution observations it is essential to keep in mind that a diffraction limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high resolution observations, is a big interferometer. Meter class and above diffraction limited telescopes can be expected to be very unforgiving of inattention to details. Unfortunately, even when an earth based telescope has perfect optics there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself
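
    As a back-of-the-envelope illustration of the resolution scales involved, the snippet below evaluates the Rayleigh diffraction limit θ ≈ 1.22 λ/D for a 30 cm and a 1 m aperture at 550 nm; the specific numbers are illustrative and not taken from the record.

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Angular resolution theta ~ 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

for d in (0.3, 1.0):                       # 30 cm and 1 m apertures
    print(f"D = {d:.1f} m -> {rayleigh_limit_arcsec(550e-9, d):.2f} arcsec")
# At roughly 725 km per arcsecond on the Sun, a 1 m aperture resolves features
# of about 0.14 arcsec, i.e. on the order of 100 km (atmosphere permitting).
```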

  7. Development of grout formulations for 106-AN waste: Mixture-experiment results and analysis

    International Nuclear Information System (INIS)

    Spence, R.D.; McDaniel, E.W.; Anderson, C.M.; Lokken, R.O.; Piepel, G.F.

    1993-09-01

    Twenty potential ingredients were identified for use in developing a 106-AN grout formulation, and 18 were subsequently obtained and tested. Four ingredients - Type II-LA (moderate heat of hydration) Portland cement, Class F fly ash, attapulgite 150 drilling clay, and ground air-cooled blast-furnace slag (GABFS) - were selected for developing the 106-AN grout formulations. A mixture experiment was designed and conducted around the following formulation: 2.5 lb of cement per gallon, 1.2 lb of fly ash per gallon, 0.8 lb of attapulgite per gallon, and 3.5 lb of GABFS per gallon. Reduced empirical models were generated from the results of the mixture experiment. These models were used to recommend several grout formulations for 106-AN. Westinghouse Hanford Company selected one of these formulations to be verified for use with 106-AN and a backup formulation in case problems arise with the first choice

  8. Nonlinear consider covariance analysis using a sigma-point filter formulation

    Science.gov (United States)

    Lisano, Michael E.

    2006-01-01

    The research reported here extends the mathematical formulation of nonlinear, sigma-point estimators to enable consider covariance analysis for dynamical systems. This paper presents a novel sigma-point consider filter algorithm, for consider-parameterized nonlinear estimation, following the unscented Kalman filter (UKF) variation on the sigma-point filter formulation, which requires no partial derivatives of dynamics models or measurement models with respect to the parameter list. It is shown that, consistent with the attributes of sigma-point estimators, a consider-parameterized sigma-point estimator can be developed entirely without requiring the derivation of any partial-derivative matrices related to the dynamical system, the measurements, or the considered parameters, which appears to be an advantage over the formulation of a linear-theory sequential consider estimator. It is also demonstrated that a consider covariance analysis performed with this 'partial-derivative-free' formulation yields equivalent results to the linear-theory consider filter, for purely linear problems.
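
    The "partial-derivative-free" property comes from propagating a small set of deterministically chosen sigma points instead of linearising the models. A minimal sketch of the standard 2n+1 sigma-point construction used by UKF-style estimators is shown below; it is generic (not the consider-filter of the paper) and the scaling parameters are the commonly quoted defaults, used here purely for illustration.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 scaled sigma points and their mean/covariance weights."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, w_m, w_c

# Propagating the points through a nonlinear function approximates the mean of
# the transformed state without any partial derivatives of the model.
mean = np.array([1.0, 0.5])
cov = np.diag([0.1, 0.2])
pts, w_m, w_c = sigma_points(mean, cov)
f = lambda x: np.array([np.sin(x[0]), x[0] * x[1]])
y = np.array([f(p) for p in pts])
print(w_m @ y)   # weighted mean of the propagated sigma points
```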

  9. Conflict Resolution for Contrasting Cultures.

    Science.gov (United States)

    Clarke, Clifford C.; Lipp, G. Douglas

    1998-01-01

    A seven-step process can help people from different cultures understand each other's intentions and perceptions so they can work together harmoniously: problem identification, problem clarification, cultural exploration, organizational exploration, conflict resolution, impact assessment, and organizational integration. (JOW)

  10. A general formulation of discrete-time quantum mechanics: Restrictions on the action and the relation of unitarity to the existence theorem for initial-value problems

    International Nuclear Information System (INIS)

    Khorrami, M.

    1995-01-01

    A general formulation for discrete-time quantum mechanics, based on Feynman's method in ordinary quantum mechanics, is presented. It is shown that the ambiguities present in ordinary quantum mechanics (due to noncommutativity of the operators), are no longer present here. Then the criteria for the unitarity of the evolution operator are examined. It is shown that the unitarity of the evolution operator puts restrictions on the form of the action, and also implies the existence of a solution for the classical initial-value problem. 13 refs

  11. DEVELOPMENT OF THE UNIVERSAL ALGORITHM FOR THE AIRSPACE CONFLICTS RESOLUTION WITH AN AIRPLANE EN-ROUTE

    Directory of Open Access Journals (Sweden)

    N. A. Petrov

    2014-01-01

    Full Text Available The paper outlines the formulation and solution of the problem of airplane trajectory control within dynamically changing flight conditions. The physical and mathematical formulation of the problem was justified and algorithms were proposed to solve it using modern automated technologies.

  12. Soft, chewable gelatin-based pharmaceutical oral formulations: a technical approach.

    Science.gov (United States)

    Dille, Morten J; Hattrem, Magnus N; Draget, Kurt I

    2018-06-01

    Hard tablets and capsules for oral drug delivery cause problems for people experiencing dysphagia. This work describes the formulation and properties of a gelatin based, self-preserved, and soft chewable tablet as an alternative and novel drug delivery format. Gelatin (8.8-10% in 24.7-29% water) constituted the matrix of the soft, semi-solid tablets. Three different pharmaceuticals (Ibuprofen 10%, Acetaminophen 15%, and Meloxicam 1.5%) were tested in this formulation. Microbial stability was controlled by lowering the water activity with a mixture of sorbitol and xylitol (45.6-55%). Rheological properties were tested applying small strain oscillation measurements. Taste masking of ibuprofen soft-chew tablets was achieved by keeping the ibuprofen insoluble at pH 4.5 and keeping the processing temperature below the crystalline-to-amorphous transition temperature. Soft-chew formulations showed good stability for all three pharmaceuticals (up to 24 months), and the ibuprofen containing formulation exhibited comparable dissolution to a standard oral tablet as well as good microbial stability. The rheological properties of the ibuprofen/gelatin formulation had the fingerprint of a true gelatin gel, albeit higher moduli, and melting temperature. The results suggest that easy-to-swallow and well taste-masked soft chewable tablet formulations with extended shelf life are within reach for several active pharmaceutical ingredients (APIs).

  13. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    Science.gov (United States)

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate the properties of our originally formulated statistical model-based iterative approach, applied to the image reconstruction from projections problem, which are related to its conditioning, and, in this manner, to prove the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses a maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Dealing with Insecurity in Problem Oriented Learning Approaches--The Importance of Problem Formulation

    Science.gov (United States)

    Jensen, Annie Aarup; Lund, Birthe

    2016-01-01

    Introduction of a pedagogical concept, Kubus, in a problem oriented learning context--analysed within the framework of an activity system--indicates what might happen when offering tools tempting to influence and regulate students' learning approach and hereby neglecting the importance of existing habits and values. Introduction of this new…

  15. Compromise decision support problems for hierarchical design involving uncertainty

    Science.gov (United States)

    Vadde, S.; Allen, J. K.; Mistree, F.

    1994-08-01

    In this paper an extension to the traditional compromise Decision Support Problem (DSP) formulation is presented. Bayesian statistics is used in the formulation to model uncertainties associated with the information being used. In an earlier paper a compromise DSP that accounts for uncertainty using fuzzy set theory was introduced. The Bayesian Decision Support Problem is described in this paper. The method for hierarchical design is demonstrated by using this formulation to design a portal frame. The results are discussed and comparisons are made with those obtained using the fuzzy DSP. Finally, the efficacy of incorporating Bayesian statistics into the traditional compromise DSP formulation is discussed and some pending research issues are described. Our emphasis in this paper is on the method rather than the results per se.

  16. Adaptive solution of some steady-state fluid-structure interaction problems

    International Nuclear Information System (INIS)

    Etienne, S.; Pelletier, D.

    2003-01-01

    This paper presents a general integrated and coupled formulation for modeling the steady-state interaction of a viscous incompressible flow with an elastic structure undergoing large displacements (geometric non-linearities). This constitutes an initial step towards developing a sensitivity analysis formulation for this class of problems. The formulation uses velocity and pressures as unknowns in a flow domain and displacements in the structural components. An interface formulation is presented that leads to clear and simple finite element implementation of the equilibrium conditions at the fluid-solid interface. Issues of error estimation and mesh adaptation are discussed. The adaptive formulation is verified on a problem with a closed form solution. It is then applied to a sample case for which the structure undergoes large displacements induced by the flow. (author)

  17. The balanced academic curriculum problem revisited

    DEFF Research Database (Denmark)

    Chiarandini, Marco; Di Gaspero, Luca; Gualandi, Stefano

    2012-01-01

    The Balanced Academic Curriculum Problem (BACP) consists in assigning courses to teaching terms satisfying prerequisites and balancing the credit course load within each term. The BACP is part of the CSPLib with three benchmark instances, but its formulation is simpler than the problem solved...... in practice by universities. In this article, we introduce a generalized version of the problem that takes different curricula and professor preferences into account, and we provide a set of real-life problem instances arisen at University of Udine. Since the existing formulation based on a min-max objective...... function does not balance effectively the credit load for the new instances, we also propose alternative objective functions. Whereas all the CSPLib instances are efficiently solved with Integer Linear Programming (ILP) state-of-the-art solvers, our new set of real-life instances turns out to be much more...
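
    To make the min-max load-balancing idea concrete, here is a toy integer-linear-programming sketch written with the PuLP modelling library; the course data, prerequisites and three-term horizon are invented and are not the CSPLib or Udine instances.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

courses = {"A": 5, "B": 10, "C": 5, "D": 10, "E": 5, "F": 5}   # course credits
prereqs = [("A", "C"), ("B", "D")]                              # A before C, B before D
terms = range(3)

prob = LpProblem("mini_bacp", LpMinimize)
x = {(c, t): LpVariable(f"x_{c}_{t}", cat=LpBinary) for c in courses for t in terms}
max_load = LpVariable("max_load", lowBound=0)

prob += max_load                                    # min-max objective
for c in courses:                                   # each course in exactly one term
    prob += lpSum(x[c, t] for t in terms) == 1
for t in terms:                                     # credit load of each term <= max_load
    prob += lpSum(courses[c] * x[c, t] for c in courses) <= max_load
for a, b in prereqs:                                # prerequisite a strictly before b
    prob += lpSum(t * x[a, t] for t in terms) + 1 <= lpSum(t * x[b, t] for t in terms)

prob.solve(PULP_CBC_CMD(msg=False))
print({c: next(t for t in terms if x[c, t].value() > 0.5) for c in courses})
```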

  18. Continuum mechanics and thermodynamics in the Hamilton and the Godunov-type formulations

    Science.gov (United States)

    Peshkov, Ilya; Pavelka, Michal; Romenski, Evgeniy; Grmela, Miroslav

    2018-01-01

    Continuum mechanics with dislocations, with the Cattaneo-type heat conduction, with mass transfer, and with electromagnetic fields is put into the Hamiltonian form and into the form of the Godunov-type system of the first-order, symmetric hyperbolic partial differential equations (SHTC equations). The compatibility with thermodynamics of the time reversible part of the governing equations is mathematically expressed in the former formulation as degeneracy of the Hamiltonian structure and in the latter formulation as the existence of a companion conservation law. In both formulations the time irreversible part represents gradient dynamics. The Godunov-type formulation brings the mathematical rigor (the local well posedness of the Cauchy initial value problem) and the possibility to discretize while keeping the physical content of the governing equations (the Godunov finite volume discretization).

  19. Development of grout formulations for 106-AN waste: Mixture-experiment results and analysis

    International Nuclear Information System (INIS)

    Spence, R.D.; McDaniel, E.W.; Anderson, C.M.; Lokken, R.O.; Piepel, G.F.

    1993-09-01

    Twenty potential ingredients were identified for use in developing a 106-AN grout formulation, and 18 were subsequently obtained and tested. Four ingredients: Type II-LA (moderate heat of hydration) Portland cement, Class F fly ash, attapulgite 150 drilling clay, and ground air-cooled blast-furnace slag (GABFS) -- were selected for developing the 106-AN grout formulations. A mixture experiment was designed and conducted around the following formulation: 2.5 lb of cement per gallon, 1.2 lb of fly ash per gallon, 0.8 lb of attapulgite per gallon, and 3.5 lb of GABFS per gallon. Reduced empirical models were generated from the results of the mixture experiment. These models were used to recommend several grout formulations for 106-AN. Westinghouse Hanford Company selected one of these formulations to be verified for use with 106-AN and a backup formulation in case problems arise with the first choice. This report presents the mixture-experimental results and leach data

  20. SU-E-T-675: Remote Dosimetry with a Novel PRESAGE Formulation

    International Nuclear Information System (INIS)

    Mein, S; Juang, T; Malcolm, J; Adamovics, J; Oldham, M

    2015-01-01

    Purpose: 3D-gel dosimetry provides high-resolution treatment validation; however, scanners aren’t widely available. In remote dosimetry, dosimeters are shipped out from a central base institution to a remote site for irradiation, then shipped back for scanning and analysis, affording a convenient service for treatment validation to institutions lacking the necessary equipment and resources. Previous works demonstrated the high-resolution performance and temporal stability of PRESAGE. Here the newest formulation is investigated for remote dosimetry use. Methods: A new formulation of PRESAGE was created with the aim of improved color stability post irradiation. Dose sensitivity was determined by irradiating cuvettes on a Varian Linac (6MV) from 0–15Gy and measuring change in optical density at 633nm. Sensitivity readings were tracked over time in a temperature control study to determine long-term stability. A large volume study was performed to evaluate the accuracy for remote dosimetry. A 1kg dosimeter was pre-scanned, irradiated on-site with an 8 Gy 4-field box treatment, post-scanned and shipped to Princess Margaret Hospital for remote reading on an identical scanner. Results: Dose sensitivities ranged from 0.0194–0.0295 ΔOD/(Gy*cm)—similar to previous formulations. Post-irradiated cuvettes stored at 10°C retained 100% initial sensitivity over 5 days and 98.6% over 10 weeks while cuvettes stored at room temperature fell to 95.8% after 5 days and 37.4% after 10 weeks. The immediate and 5-day scans of the 4-field box dosimeter were reconstructed, registered to the corresponding Eclipse dose distribution, and compared with analytical tools in CERR. Immediate and 5-day scans looked visually similar. Line profiles revealed close agreement aside from a slight elevation in dose at the edge in the 5-day readout. Conclusion: The remote dosimetry formulation exhibits excellent temporal stability in small volumes. While immediate and 5-day readout scans of large
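
    Dose sensitivity here is the slope of the optical-density change per unit dose and optical path length. A small numpy sketch of that calibration fit is given below; the cuvette readings are invented for illustration and are not the measured data of the study.

```python
import numpy as np

# Hypothetical cuvette readings: dose (Gy) vs. change in optical density at 633 nm.
dose = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0])
delta_od = np.array([0.00, 0.065, 0.13, 0.20, 0.26, 0.33])
path_cm = 1.0                                     # optical path length of the cuvette

slope, intercept = np.polyfit(dose, delta_od, 1)  # linear calibration fit
sensitivity = slope / path_cm                     # units of delta-OD / (Gy * cm)
print(f"dose sensitivity ~ {sensitivity:.4f} per Gy per cm")
```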

  1. Hub location problems in transportation networks

    DEFF Research Database (Denmark)

    Gelareh, Shahin; Nickel, Stefan

    2011-01-01

    In this paper we propose a 4-index formulation for the uncapacitated multiple allocation hub location problem tailored for urban transport and liner shipping network design. This formulation is very tight and most of the tractable instances for MIP solvers are optimally solvable at the root node....... also introduce fixed cost values for Australian Post (AP) dataset....

  2. A Resolution Prover for Coalition Logic

    OpenAIRE

    Nalon, Cláudia; Zhang, Lan; Dixon, Clare; Hustadt, Ullrich

    2014-01-01

    We present a prototype tool for automated reasoning for Coalition Logic, a non-normal modal logic that can be used for reasoning about cooperative agency. The theorem prover CLProver is based on recent work on a resolution-based calculus for Coalition Logic that operates on coalition problems, a normal form for Coalition Logic. We provide an overview of coalition problems and of the resolution-based calculus for Coalition Logic. We then give details of the implementation of CLProver and prese...

  3. On the Hardest Problem Formulations for the 0/1 Lasserre Hierarchy

    OpenAIRE

    Kurpisz, Adam; Leppänen, Samuli; Mastrolilli, Monaldo

    2015-01-01

    The Lasserre/Sum-of-Squares (SoS) hierarchy is a systematic procedure for constructing a sequence of increasingly tight semidefinite relaxations. It is known that the hierarchy converges to the 0/1 polytope in n levels and captures the convex relaxations used in the best available approximation algorithms for a wide variety of optimization problems. In this paper we characterize the set of 0/1 integer linear problems and unconstrained 0/1 polynomial optimization problems that can still have ...

  4. January: IBM 7094 programme for the resolution of cell problems in planar, spherical and cylindrical geometry using the double Pn approximation

    International Nuclear Information System (INIS)

    Amouyal, A.; Tariel, H.

    1966-01-01

    Code name: January 1st SCEA 011S. 2) Computer: IBM 7094; Programme system: Fortran II, 2nd version. 3) Nature of the problem: resolution of cell problems with one space variable (planar, spherical and cylindrical geometries) and with one energy group, with isotropic sources in the double Pn approximation (DP1 and DP3 approximations in planar and spherical geometries, DP1 and DP2 in cylindrical geometry). 4) Method used: the differential equations with limiting conditions are transformed into a differential system with initial conditions, which is integrated by a separate-step method. 5) Restrictions: number of physical media [fr

  5. The classical Stefan problem basic concepts, modelling and analysis

    CERN Document Server

    Gupta, SC

    2003-01-01

    This volume emphasises studies related to classical Stefan problems. The term "Stefan problem" is generally used for heat transfer problems with phase-changes such as from the liquid to the solid. Stefan problems have some characteristics that are typical of them, but certain problems arising in fields such as mathematical physics and engineering also exhibit characteristics similar to them. The term "classical" distinguishes the formulation of these problems from their weak formulation, in which the solution need not possess classical derivatives. Under suitable assumptions, a weak solution could be as good as a classical solution. In hyperbolic Stefan problems, the characteristic features of Stefan problems are present but unlike in Stefan problems, discontinuous solutions are allowed because of the hyperbolic nature of the heat equation. The numerical solutions of inverse Stefan problems, and the analysis of direct Stefan problems are so integrated that it is difficult to discuss one without referring to the other. So no...

  6. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density...... impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...

  7. A finite element method for flow problems in blast loading

    International Nuclear Information System (INIS)

    Forestier, A.; Lepareux, M.

    1984-06-01

    This paper presents a numerical method which describes fast dynamic problems in flow transient situations as in nuclear plants. A finite element formulation has been chosen; it is described by a preprocessor in the CASTEM system: the GIBI code. For these typical flow problems, an A.L.E. formulation for the physical equations is used. Some applications are presented: the well-known shock tube problem, the same problem in the 2D case, and a final application to hydrogen detonation

  8. A Methodology for Validation of High Resolution Combat Models

    Science.gov (United States)

    1988-06-01

    [Contents fragments: Teleological Problem; Epistemological Problem; Uncertainty Principle.] Theoretical issues: "The Teleological Problem" - how a model by its nature formulates an explicit cause-and-effect relationship that excludes other... "experts" in establishing the standard for reality. Generalization from personal experience is often hampered by the parochial aspects of the

  9. Application of Fuzzy Optimization to the Orienteering Problem

    Directory of Open Access Journals (Sweden)

    Madhushi Verma

    2015-01-01

    Full Text Available This paper deals with the orienteering problem (OP), which is a combination of two well-known problems (i.e., the travelling salesman problem and the knapsack problem). OP is an NP-hard problem and is useful in appropriately modeling several challenging applications. As the parameters involved in these applications cannot be measured precisely, depicting them using crisp numbers is unrealistic. Further, the decision maker may be satisfied with graded satisfaction levels of solutions, which cannot be formulated using a crisp program. To deal with the above-stated two issues, we formulate the fuzzy orienteering problem (FOP) and provide a method to solve it. Here we state the two necessary conditions of OP of maximizing the total collected score and minimizing the time taken to traverse a path (within the specified time bound) as fuzzy goals and the remaining necessary conditions as crisp constraints. Using the max-min formulation of the fuzzy sets obtained from the fuzzy goals, we calculate the fuzzy decision sets (Z and Z∗) that contain the feasible paths and the desirable paths, respectively, along with the degrees to which they are acceptable. To efficiently solve large instances of FOP, we also present a parallel algorithm on the CREW PRAM model.
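
    The max-min step scores a candidate path by the least-satisfied of the two fuzzy goals. A minimal sketch with invented, piecewise-linear membership functions (not the paper's instances or membership shapes) is shown below.

```python
def score_membership(score, s_min=50.0, s_max=100.0):
    """Degree to which the collected score satisfies the 'high score' goal."""
    return max(0.0, min(1.0, (score - s_min) / (s_max - s_min)))

def time_membership(time, t_good=4.0, t_max=8.0):
    """Degree to which the travel time satisfies the 'low time' goal."""
    return max(0.0, min(1.0, (t_max - time) / (t_max - t_good)))

def max_min_value(path_score, path_time):
    """Bellman-Zadeh max-min: a path is only as good as its worst-satisfied goal."""
    return min(score_membership(path_score), time_membership(path_time))

candidates = [(80.0, 5.5), (95.0, 7.5), (60.0, 4.0)]   # (score, time) of feasible paths
best = max(candidates, key=lambda st: max_min_value(*st))
print(best, max_min_value(*best))
```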

  10. "Planar" Tautologies Hard for Resolution

    DEFF Research Database (Denmark)

    Dantchev, Stefan; Riis, Søren

    2001-01-01

    We prove exponential lower bounds on the resolution proofs of some tautologies, based on rectangular grid graphs. More specifically, we show a 2^Ω(n) lower bound for any resolution proof of the mutilated chessboard problem on a 2n×2n chessboard as well as for the Tseitin tautology (G. Tseitin, 196...

  11. Unraveling the Thousand Word Picture: An Introduction to Super-Resolution Data Analysis.

    Science.gov (United States)

    Lee, Antony; Tsekouras, Konstantinos; Calderon, Christopher; Bustamante, Carlos; Pressé, Steve

    2017-06-14

    Super-resolution microscopy provides direct insight into fundamental biological processes occurring at length scales smaller than light's diffraction limit. The analysis of data at such scales has brought statistical and machine learning methods into the mainstream. Here we provide a survey of data analysis methods starting from an overview of basic statistical techniques underlying the analysis of super-resolution and, more broadly, imaging data. We subsequently break down the analysis of super-resolution data into four problems: the localization problem, the counting problem, the linking problem, and what we've termed the interpretation problem.
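
    The localization problem, for instance, usually reduces to fitting a point-spread-function model to a small pixel patch. A minimal sketch using a 2D Gaussian model and scipy.optimize.curve_fit on synthetic data (emitter position, counts and patch size all invented) follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, x0, y0, sigma, amplitude, offset):
    """Isotropic 2D Gaussian point-spread-function model."""
    x, y = coords
    return offset + amplitude * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

# Synthetic 11x11 camera patch: one emitter at (5.3, 4.7) plus Poisson noise.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(11), np.arange(11), indexing="ij")
truth = gaussian2d((x, y), 5.3, 4.7, 1.3, 200.0, 10.0)
patch = rng.poisson(truth).astype(float)

popt, _ = curve_fit(gaussian2d,
                    (x.ravel(), y.ravel()), patch.ravel(),
                    p0=(5.0, 5.0, 1.5, patch.max(), patch.min()))
print("estimated emitter position:", popt[0], popt[1])   # sub-pixel localization
```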

  12. Optimization Formulations for the Maximum Nonlinear Buckling Load of Composite Structures

    DEFF Research Database (Denmark)

    Lindgaard, Esben; Lund, Erik

    2011-01-01

    This paper focuses on criterion functions for gradient based optimization of the buckling load of laminated composite structures considering different types of buckling behaviour. A local criterion is developed, and is, together with a range of local and global criterion functions from literature......, benchmarked on a number of numerical examples of laminated composite structures for the maximization of the buckling load considering fiber angle design variables. The optimization formulations are based on either linear or geometrically nonlinear analysis and formulated as mathematical programming problems...... solved using gradient based techniques. The developed local criterion is formulated such it captures nonlinear effects upon loading and proves useful for both analysis purposes and as a criterion for use in nonlinear buckling optimization. © 2010 Springer-Verlag....

  13. Islamic Education Research Problem

    Directory of Open Access Journals (Sweden)

    Abdul Muthalib

    2012-04-01

    Full Text Available This paper discusses Islamic educational studies, reviewing how to find, limit and define problems and problem-solving concepts. The central question of this paper is how to solve problems in Islamic educational research. A researcher or educator who has knowledge, expertise, or a special interest in education, for example, usually has a sensitivity to issues relating to educational research. In the research dimension of religious education, there are three types of problems, namely: foundational problems, structural problems and operational issues. In doing research in Islamic education one should understand the research problem, how to limit and formulate the problem, how to solve the problem, other problems relating to the point of the research, and the research approach.

  14. Congener-specific analysis of polychlorinated naphthalenes (PCNs) in the major Chinese technical PCB formulation from a stored Chinese electrical capacitor.

    Science.gov (United States)

    Huang, Jun; Yu, Gang; Yamauchi, Makoto; Matsumura, Toru; Yamazaki, Norimasa; Weber, Roland

    2015-10-01

    Impurities of polychlorinated naphthalenes (PCNs) in commercial polychlorinated biphenyl (PCB) formulations have been recognized as a relevant source of PCNs in the environment. Congener-specific analysis of most main PCB formulations has been accomplished previously, excluding the Chinese product. The insulating oil in a stored Chinese electric capacitor containing the major Chinese technical formulation "PCB3" was sampled and tested by isotope dilution technology using high-resolution gas chromatography coupled to high-resolution mass spectrometry (HRGC/HRMS). The detected concentration of PCNs in the Chinese PCB oil sample was 1,307.5 μg/g and therefore significantly higher than that reported in PCB formulations from other countries, as well as that in the transformer oil (ASKAREL Nr 1740) additionally tested in the present study for comparison. Based on the measurement, the total amount of PCNs in Chinese PCB3 oil is estimated to be 7.8 t, which corresponds to only 0.005% of the estimated global PCN production of 150,000 t. The homolog profile is similar to those of PCNs in Aroclor 1262 and Clophen A40, where the contributions from hexa-CNs and hepta-CNs are predominant and accounted for similar proportions. The Toxic Equivalent Quantity (TEQ) concentration of dioxin-like PCN congeners is 0.47 μg TEQ/g, with the dominant contributors being CN-73 and CN-66/67. This TEQ content from PCNs is higher than that in most other PCB formulations, with the exception of the Russian Sovol formulation. The total TEQ in the historic 6,000 t of the Chinese PCB3 formulation is estimated to be 2.8 kg TEQ.
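
    A TEQ concentration is simply the sum over congeners of concentration multiplied by its toxic equivalency factor (TEF). The snippet below illustrates the arithmetic with invented concentrations and TEF values; it does not reproduce the study's measurements or the actual relative potencies.

```python
# Hypothetical congener concentrations (ug/g) and TEF values; both are illustrative.
concentrations = {"CN-66/67": 120.0, "CN-73": 8.0, "CN-69": 15.0}
tef = {"CN-66/67": 0.002, "CN-73": 0.003, "CN-69": 0.0001}

# TEQ = sum over congeners of (concentration x TEF), here in ug TEQ/g.
teq = sum(concentrations[c] * tef[c] for c in concentrations)
print(f"TEQ = {teq:.3f} ug TEQ/g")
```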

  15. A Continuous Formulation for Logical Decisions in Differential Algebraic Systems using Mathematical Programs with Complementarity Constraints

    Directory of Open Access Journals (Sweden)

    Kody M. Powell

    2016-03-01

    Full Text Available This work presents a methodology to represent logical decisions in differential algebraic equation simulation and constrained optimization problems using a set of continuous algebraic equations. The formulation may be used when state variables trigger a change in process dynamics, and introduces a pseudo-binary decision variable, which is continuous, but should only have valid solutions at values of either zero or one within a finite time horizon. This formulation enables dynamic optimization problems with logical disjunctions to be solved by simultaneous solution methods without using methods such as mixed integer programming. Several case studies are given to illustrate the value of this methodology including nonlinear model predictive control of a chemical reactor using a surge tank with overflow to buffer disturbances in feed flow rate. Although this work contains novel methodologies for solving differential algebraic equation (DAE) constrained problems where the system may experience an abrupt change in dynamics that may otherwise require a conditional statement, there remain substantial limitations to this methodology, including a limited domain where problems may converge and the possibility for ill-conditioning. Although the problems presented use only continuous algebraic equations, the formulation has inherent non-smoothness. Hence, these problems must be solved with care and only in select circumstances, such as in simulation or situations when the solution is expected to be near the solver’s initial point.
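
    The central device is a continuous "pseudo-binary" variable whose admissible values are forced to 0 or 1 through complementarity-type algebraic constraints, so the switch needs no if-statement. The residual functions below sketch that idea for a single overflow switch; the threshold, flow bound and exact constraint form are illustrative assumptions, not the paper's formulation.

```python
# Continuous encoding of "overflow q_out is active only when the level h
# reaches h_max", for a single time step, using a pseudo-binary variable y.
H_MAX, Q_MAX = 2.0, 5.0      # overflow threshold and maximum overflow rate (illustrative)

def switch_constraints(h, q_out, y):
    """Return (equality residuals, inequality values <= 0) that an NLP solver
    would enforce; y is continuous but only y = 0 or y = 1 can satisfy them."""
    eq = [
        q_out - y * Q_MAX,          # overflow flows only when the switch is on
        y * (1.0 - y),              # pseudo-binary: zero only at y = 0 or y = 1
    ]
    ineq = [
        (1.0 - y) * (h - H_MAX),    # a level above the threshold forces y -> 1
        y * (H_MAX - h),            # a level below the threshold forces y -> 0
    ]
    return eq, ineq

# Two consistent states: below the threshold with the switch off, above with it on.
print(switch_constraints(h=1.5, q_out=0.0, y=0.0))   # eq = [0, 0], ineq <= 0
print(switch_constraints(h=2.3, q_out=5.0, y=1.0))   # eq = [0, 0], ineq <= 0
```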

  16. The optimal graph partitioning problem

    DEFF Research Database (Denmark)

    Sørensen, Michael Malmros; Holm, Søren

    1993-01-01

    . This problem can be formulated as a MILP, which turns out to be completely symmetrical with respect to the p classes, and the gap between the relaxed LP solution and the optimal solution is the largest one possible. These two properties make it very difficult to solve even smaller problems. In this paper...

  17. Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming

    Directory of Open Access Journals (Sweden)

    S. Fanati Rashidi

    2010-06-01

    Full Text Available Using the concept of possibility proposed by Zadeh, Luhandjula ([4,8]) and Buckley ([1]) have proposed the possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem into a fuzzy linear programming problem. Shih and Lee ([7]) used the Negi approach to solve a minimum cost flow problem, with fuzzy costs and upper and lower bounds. In this paper we shall consider the general form of this problem where all of the parameters and variables are fuzzy and also a model for solving is proposed

  18. Bivium as a Mixed Integer Programming Problem

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Stolpe, Mathias

    2009-01-01

    over $GF(2)$ into a combinatorial optimization problem. We convert the Boolean equation system into an equation system over $\mathbb{R}$ and formulate the problem of finding a $0$-$1$-valued solution for the system as a mixed-integer programming problem. This enables us to make use of several...
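
    The usual trick for moving an XOR equation from GF(2) to the reals is to introduce an integer carry variable: x1 ⊕ x2 ⊕ x3 = b becomes x1 + x2 + x3 - 2k = b with k a non-negative integer. The check below is a generic illustration of that reformulation (not Bivium-specific) and verifies that the linear form accepts exactly the assignments the XOR accepts.

```python
from itertools import product

def xor_as_linear(bits, b):
    """True iff x1 + ... + xn - 2k = b has a non-negative integer solution k,
    i.e. iff the 0/1 assignment satisfies x1 xor ... xor xn = b."""
    s = sum(bits)
    return (s - b) % 2 == 0 and (s - b) >= 0

# The linear reformulation accepts exactly the assignments the XOR accepts.
for x in product([0, 1], repeat=3):
    assert xor_as_linear(x, b=sum(x) % 2)          # satisfying right-hand side
    assert not xor_as_linear(x, b=1 - sum(x) % 2)  # violated right-hand side
print("x1 + x2 + x3 - 2k = b reproduces the GF(2) equation on all 0/1 inputs")
```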

  19. A numerical approach to Stefan problem

    International Nuclear Information System (INIS)

    Kotalik, P.

    1993-07-01

    The one-dimensional Stefan problem for a thin plate heated by laser pulses is solved by approximating the enthalpy formulation of the problem by C^0 piecewise linear finite elements in space combined with a semi-implicit scheme in time. (author) 6 figs., 5 refs
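
    In the enthalpy formulation the phase change is absorbed into the enthalpy-temperature relation, so no explicit front tracking is needed. The sketch below uses a simple finite-difference grid and an explicit time step for brevity (the paper uses piecewise-linear finite elements and a semi-implicit scheme); the material constants and boundary values are made up.

```python
import numpy as np

# 1D enthalpy method: advance H, then recover T from H through the phase diagram.
nx, L = 101, 1e-2              # grid points, slab thickness [m]
dx = L / (nx - 1)
rho, c, k, Lf, Tm = 2700.0, 900.0, 200.0, 4e5, 933.0   # illustrative aluminium-like values
dt = 0.2 * rho * c * dx**2 / k                          # explicit stability margin

T = np.full(nx, 300.0)
H = rho * c * T                                         # enthalpy per unit volume
H[0] = rho * c * (Tm + 200.0) + rho * Lf                # hot, fully molten boundary node

def temperature(H):
    """Invert the enthalpy-temperature relation of an isothermal phase change."""
    T = H / (rho * c)
    solidus, liquidus = rho * c * Tm, rho * c * Tm + rho * Lf
    T = np.where((H >= solidus) & (H <= liquidus), Tm, T)
    T = np.where(H > liquidus, (H - rho * Lf) / (rho * c), T)
    return T

for _ in range(2000):
    T = temperature(H)
    flux = -k * np.diff(T) / dx            # heat flux at the cell faces
    H[1:-1] -= dt * np.diff(flux) / dx     # energy balance for the interior nodes
front = np.argmax(H[1:] < rho * c * Tm) + 1   # first node still below the melting point
print(f"phase-change front near x = {front * dx * 1e3:.2f} mm")
```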

  20. Resolution effects in reconstructing ancestral genomes.

    Science.gov (United States)

    Zheng, Chunfang; Jeong, Yuji; Turcotte, Madisyn Gabrielle; Sankoff, David

    2018-05-09

    The reconstruction of ancestral genomes must deal with the problem of resolution, necessarily involving a trade-off between trying to identify genomic details and being overwhelmed by noise at higher resolutions. We use the median reconstruction at the synteny block level, of the ancestral genome of the order Gentianales, based on coffee, Rhazya stricta and grape, to exemplify the effects of resolution (granularity) on comparative genomic analyses. We show how decreased resolution blurs the differences between evolving genomes, with respect to rate, mutational process and other characteristics.

  1. An Algorithm for Modelling the Impact of the Judicial Conflict-Resolution Process on Construction Investment

    Directory of Open Access Journals (Sweden)

    Andrej Bugajev

    2018-01-01

    Full Text Available In this article, the modelling of the judicial conflict-resolution process is considered from a construction investor’s point of view. Such modelling is important for improving risk management for construction investors and supporting sustainable city development by supporting the development of rules regulating the construction process. Thus, this raises the problem of evaluating different decisions and selecting the optimal one, followed by distribution extraction. First, an example of such a process is analysed and schematically represented. Then, it is formalised as a graph, which is described in the form of a decision graph with cycles. We use some natural problem properties and provide the algorithm to convert this graph into a tree. Then, we propose the algorithm to evaluate profits for different scenarios with estimation of time, which is done by integration of an average daily cost function. Afterwards, the optimisation problem is solved and the optimal investor strategy is obtained—this allows one to extract the construction project profit distribution, which can be used for further analysis by standard risk (and other important information) evaluation techniques. The overall algorithm complexity is analysed, the computational experiment is performed and conclusions are formulated.

  2. Conforming discretizations of boundary element solutions to the electroencephalography forward problem

    Science.gov (United States)

    Rahmouni, Lyes; Adrian, Simon B.; Cools, Kristof; Andriulli, Francesco P.

    2018-01-01

    In this paper, we present a new discretization strategy for the boundary element formulation of the Electroencephalography (EEG) forward problem. Boundary integral formulations, classically solved with the Boundary Element Method (BEM), are widely used in high resolution EEG imaging because of their recognized advantages, in several real case scenarios, in terms of numerical stability and effectiveness when compared with other differential equation based techniques. Unfortunately, however, it is widely reported in the literature that the accuracy of standard BEM schemes for the forward EEG problem is often limited, especially when the current source density is dipolar and its location approaches one of the brain boundary surfaces. This is a particularly limiting problem given that during a high-resolution EEG imaging procedure, several EEG forward problem solutions are required, for which the source currents are near or on top of a boundary surface. This work will first present an analysis of standardly and classically discretized EEG forward problem operators, reporting on a theoretical issue of some of the formulations that have been used so far in the community. We report on the fact that several standardly used discretizations of these formulations are consistent only with an L2-framework, requiring the expansion term to be a square integrable function (i.e., in a Petrov-Galerkin scheme with expansion and testing functions). Instead, those techniques are not consistent when a more appropriate mapping in terms of fractional-order Sobolev spaces is considered. Such a mapping allows the expansion function term to be a less regular function, thus sensibly reducing the need for mesh refinements and low-precision handling strategies that are currently required. These more favorable mappings, however, require a different and conforming discretization, which must be suitably adapted to them. In order to appropriately fulfill this requirement, we adopt a mixed

  3. Entity resolution for uncertain data

    NARCIS (Netherlands)

    Ayat, N.; Akbarinia, R.; Afsarmanesh, H.; Valduriez, P.

    2012-01-01

    Entity resolution (ER), also known as duplicate detection or record matching, is the problem of identifying the tuples that represent the same real world entity. In this paper, we address the problem of ER for uncertain data, which we call ERUD. We propose two different approaches for the ERUD

  4. A Resolution Prover for Coalition Logic

    Directory of Open Access Journals (Sweden)

    Cláudia Nalon

    2014-04-01

    Full Text Available We present a prototype tool for automated reasoning for Coalition Logic, a non-normal modal logic that can be used for reasoning about cooperative agency. The theorem prover CLProver is based on recent work on a resolution-based calculus for Coalition Logic that operates on coalition problems, a normal form for Coalition Logic. We provide an overview of coalition problems and of the resolution-based calculus for Coalition Logic. We then give details of the implementation of CLProver and present the results for a comparison with an existing tableau-based solver.

  5. NP-hardness of the cluster minimization problem revisited

    Science.gov (United States)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  6. NP-hardness of the cluster minimization problem revisited

    International Nuclear Information System (INIS)

    Adib, Artur B

    2005-01-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested

  7. NP-hardness of the cluster minimization problem revisited

    Energy Technology Data Exchange (ETDEWEB)

    Adib, Artur B [Physics Department, Brown University, Providence, RI 02912 (United States)

    2005-10-07

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  8. On the fairlie's Moyal formulation of M(atrix)-theory

    International Nuclear Information System (INIS)

    Hssaini, M.; Sedra, M.B.; Bennai, M.; Maroufi, B.

    2000-07-01

    Starting from the Moyal formulation of M-theory in the large N-limit, we propose to reexamine the associated membrane equations of motion in 10 dimensions formulated in terms of Poisson bracket. Among the results obtained, we rewrite the coupled first order Nahm's equations into a simple form leading in turn to their systematic relation with SU(∞) Yang Mills equations of motion. The former are interpreted as the vanishing condition of some conserved currents which we propose. We also develop an algebraic analysis in which an ansatz is considered and find an explicit form for the membrane solution of our problem. Typical solutions known in literature can also emerge as special cases of the proposed solution. (author)

  9. Refining stability and dissolution rate of amorphous drug formulations

    DEFF Research Database (Denmark)

    Grohganz, Holger; Priemel, Petra A; Löbmann, Korbinian

    2014-01-01

    Introduction: Poor aqueous solubility of active pharmaceutical ingredients (APIs) is one of the main challenges in the development of new small molecular drugs. Additionally, the proportion of poorly soluble drugs among new chemical entities is increasing. The transfer of a crystalline drug to its...... and on the interaction of APIs with small molecular compounds rather than polymers. Finally, in situ formation of an amorphous form might be an option to avoid storage problems altogether. Expert opinion: The diversity of poorly soluble APIs formulated in an amorphous drug delivery system will require different...... approaches for their stabilisation. Thus, increased focus on emerging techniques can be expected and a rational approach to decide the correct formulation is needed....

  10. Stability Analysis of Discontinuous Galerkin Approximations to the Elastodynamics Problem

    KAUST Repository

    Antonietti, Paola F.

    2015-11-21

    We consider semi-discrete discontinuous Galerkin approximations of both displacement and displacement-stress formulations of the elastodynamics problem. We prove the stability analysis in the natural energy norm and derive optimal a-priori error estimates. For the displacement-stress formulation, schemes preserving the total energy of the system are introduced and discussed. We verify our theoretical estimates on two- and three-dimensional test problems.

  11. Stability Analysis of Discontinuous Galerkin Approximations to the Elastodynamics Problem

    KAUST Repository

    Antonietti, Paola F.; Ayuso de Dios, Blanca; Mazzieri, Ilario; Quarteroni, Alfio

    2015-01-01

    We consider semi-discrete discontinuous Galerkin approximations of both displacement and displacement-stress formulations of the elastodynamics problem. We prove the stability analysis in the natural energy norm and derive optimal a-priori error estimates. For the displacement-stress formulation, schemes preserving the total energy of the system are introduced and discussed. We verify our theoretical estimates on two- and three-dimensional test problems.

  12. The Inhibiting Bisection Problem

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali; Fogel, Yonatan; Lesieutre, Bernard

    2006-12-18

    Given a graph where each vertex is assigned a generation or consumption volume, we try to bisect the graph so that each part has a significant generation/consumption mismatch, and the cutsize of the bisection is small. Our motivation comes from the vulnerability analysis of distribution systems such as the electric power system. We show that the constrained version of the problem, where we place either the cutsize or the mismatch significance as a constraint and optimize the other, is NP-complete, and provide an integer programming formulation. We also propose an alternative relaxed formulation, which can trade-off between the two objectives, and show that the alternative formulation of the problem can be solved in polynomial time by a maximum flow solver. Our experiments with benchmark electric power systems validate the effectiveness of our methods.

  13. A Study of the Constraint to Formulation and Implementation of ...

    African Journals Online (AJOL)

    In recent years, Benin metropolis has been faced with solid waste management problems. Solid wastes generated from household and commercial activities are dumped indiscriminately in the metropolis. The Edo state government has made efforts in policy formulation and funding in line with the national policy on the ...

  14. Types of conflict, types of relationships and preferred conflict resolution strategies: Implications for constructive conflict resolution programmes

    Directory of Open Access Journals (Sweden)

    Petrović Danijela S.

    2012-01-01

    Full Text Available Constructive conflict resolution programmes are based on the idea that children and youth do not have sufficient knowledge of the procedures and skills for conflict resolution, which is why the conflicts they take part in soon become destructive. Notwithstanding the indubitable practical significance of the constructive conflict resolution programmes, it can be objected that they are not sufficiently based on empirical findings about the characteristics of conflicts in childhood and adolescence. Hence, this paper explores different types of conflict with peers and friends with the aim of determining the preferred conflict resolution strategies and using the obtained results to consider the implications for the improvement of constructive conflict resolution programmes. The research was conducted on a sample of 286 adolescents. The method of hypothetical conflict situations was used for studying the preferred conflict resolution strategies. The key results, which should be taken into account when developing constructive conflict resolution programmes, indicate that the preference for a conflict resolution strategy varies depending on conflict type (problem solving is mostly used in conflicts occurring due to opinion differences and disrespect of agreement, unlike the conflicts arising due to provocations, stubbornness and dishonesty) and relationship type (in conflicts with friends, adolescents prefer problem solving, while in peer conflicts they more frequently opt for competition). [Project of the Ministry of Science of the Republic of Serbia, no. 179018: Identification, measurement and development of cognitive and emotional competences important for a society oriented towards European integration

  15. Quantum first passage problem

    International Nuclear Information System (INIS)

    Kumar, N.

    1984-07-01

    Quantum first passage problem (QUIPP) is formulated and solved in terms of a constrained Feynman path integral. The related paradox of blocking of unitary evolution by continuous observation on the system implicit in QUIPP is briefly discussed. (author)

  16. Popular Problems

    DEFF Research Database (Denmark)

    Skovhus, Randi Boelskifte; Thomsen, Rie

    2017-01-01

    This article introduces a method for critical reviews and explores the ways in which problems have been formulated in knowledge production on career guidance in Denmark over a 10-year period from 2004 to 2014. The method draws upon the work of Bacchi, focussing on the ‘What's the problem represented to be’ (WPR) approach. Forty-nine empirical studies on Danish youth career guidance were included in the study. An analysis of the issues in focus resulted in nine problem categories. One of these, ‘targeting’, is analysed using the WPR approach. Finally, the article concludes that the WPR approach...... provides a constructive basis for a critical analysis and discussion of the collective empirical knowledge production on career guidance, stimulating awareness of problems and potential solutions among the career guidance community....

  17. Volumetric formulation for a class of kinetic models with energy conservation.

    Science.gov (United States)

    Sbragaglia, M; Sugiyama, K

    2010-10-01

    We analyze a volumetric formulation of lattice Boltzmann for compressible thermal fluid flows. The velocity set is chosen with the desired accuracy, based on the Gauss-Hermite quadrature procedure, and tested against controlled problems in bounded and unbounded fluids. The method allows the simulation of thermohydrodynamical problems without the need to preserve the exact space-filling nature of the velocity set, but still ensuring the exact conservation laws for density, momentum, and energy. Issues related to boundary condition problems and improvements based on grid refinement are also investigated.
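    For orientation, a minimal LaTeX sketch of the standard lattice Boltzmann BGK update and the conserved moments that a volumetric, Gauss-Hermite-based formulation must still reproduce; this is generic background, not the authors' specific volumetric scheme:

      \[
      f_i(\mathbf{x}+\mathbf{c}_i\Delta t,\, t+\Delta t) - f_i(\mathbf{x},t)
      = -\frac{\Delta t}{\tau}\left[f_i(\mathbf{x},t) - f_i^{\mathrm{eq}}(\rho,\mathbf{u},T)\right],
      \qquad
      \rho=\sum_i f_i,\quad \rho\mathbf{u}=\sum_i \mathbf{c}_i f_i,\quad \rho E=\tfrac{1}{2}\sum_i |\mathbf{c}_i|^2 f_i .
      \]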

  18. What is the Nondominated Formulation? A Demonstration of de Novo Water Supply Portfolio Planning Under Deep Uncertainty

    Science.gov (United States)

    Kasprzyk, J. R.; Reed, P. M.; Characklis, G. W.; Kirsch, B. R.

    2010-12-01

    This paper proposes and demonstrates a new interactive framework for sensitivity-informed de Novo programming, in which a learning approach to formulating decision problems can confront the deep uncertainty within water management problems. The framework couples global sensitivity analysis using Sobol’ variance decomposition with multiobjective evolutionary algorithms (MOEAs) to generate planning alternatives and test their robustness to new modeling assumptions and scenarios. We explore these issues within the context of a risk-based water supply management problem, where a city seeks the most efficient use of a water market. The case study examines a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas, using both a 10-year planning horizon and an extreme single-year drought scenario. The city’s water supply portfolio comprises a volume of permanent rights to reservoir inflows and use of a water market through anticipatory thresholds for acquiring transfers of water through optioning and spot leases. Diagnostic information from the Sobol’ variance decomposition is used to create a sensitivity-informed problem formulation testing different decision variable configurations, with tradeoffs for the formulation solved using a MOEA. Subsequent analysis uses the drought scenario to expose tradeoffs between long-term and short-term planning and illustrate the impact of deeply uncertain assumptions on water availability in droughts. The results demonstrate water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market and show how to adaptively improve the value and robustness of our problem formulations by evolving our definition of optimality to discover key tradeoffs.
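    A minimal sketch of the kind of Sobol' variance decomposition used to inform such problem formulations, written with the SALib package; the three decision-variable names, their bounds, and the toy portfolio-cost model are illustrative assumptions rather than the paper's actual LRGV simulation:

      # Sketch: Sobol' sensitivity indices for a toy water-portfolio cost model (assumed, not the LRGV model).
      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["permanent_rights", "option_threshold", "lease_threshold"],  # hypothetical decision variables
          "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
      }

      X = saltelli.sample(problem, 1024)  # Saltelli cross-sampling design

      def portfolio_cost(x):
          # Toy stand-in for the simulation model: cost rises with rights, falls with market use.
          return 2.0 * x[0] + 0.5 * x[1] * x[2] + 0.1 * np.sin(6.0 * x[2])

      Y = np.apply_along_axis(portfolio_cost, 1, X)
      Si = sobol.analyze(problem, Y)      # first-order (S1) and total-order (ST) indices
      print(Si["S1"], Si["ST"])

    The first-order and total-order indices then indicate which decision variables are worth keeping in a sensitivity-informed problem formulation.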

  19. A new approach to the Container Positioning Problem

    DEFF Research Database (Denmark)

    Ahmt, Jonas; Sigtenbjerggaard, Jonas Skott; Lusby, Richard Martin

    2016-01-01

    for solving larger instances of the problem. We show that this new formulation drastically outperforms previous attempts at the problem through a direct comparison on instances available in the literature. Furthermore, we also show that the rolling horizon based heuristic can further reduce the solution time...... departure the time is. Other important improvements include a reduction in the model size, and the ability of the model to consider containers initially at the terminal. In addition, we describe several classes of valid inequalities for this new formulation and present a rolling horizon based heuristic...

  20. The Construction of Mathematical Literacy Problems for Geometry

    Science.gov (United States)

    Malasari, P. N.; Herman, T.; Jupri, A.

    2017-09-01

    The students of junior high school should have mathematical literacy ability to formulate, apply, and interpret mathematics in problem solving of daily life. Teaching these students by giving them only ordinary mathematics problems is not enough; it brings the consequence that teachers must construct mathematical literacy problems. Therefore, the aim of this study is to construct mathematical literacy problems to assess mathematical literacy ability. The steps of this study consist of analysing, designing, theoretical validation, revising, limited testing with students, and evaluating. The data were collected with a written test given to 38 students of grade IX at a state junior high school. The mathematical literacy problems consist of three essays with three indicators and three levels on the polyhedron subject. The indicators are formulating and employing mathematics. The results show that: (1) the mathematical literacy problems which were constructed are valid and practical, (2) the mathematical literacy problems have good and adequate distinguishing characteristics, (3) the difficulty levels of the problems are easy and moderate. The final conclusion is that the constructed mathematical literacy problems can be used to assess mathematical literacy ability.

  1. Reconcile: A Coreference Resolution Research Platform

    Energy Technology Data Exchange (ETDEWEB)

    Stoyanov, V; Cardie, C; Gilbert, N; Riloff, E; Buttler, D; Hysom, D

    2009-10-29

    Despite the availability of standard data sets and metrics, approaches to the problem of noun phrase coreference resolution are hard to compare empirically due to differences in evaluation settings stemming, in part, from the lack of comprehensive coreference resolution research platforms. In this tech report we present Reconcile, a coreference resolution research platform that aims to facilitate the implementation of new approaches to coreference resolution as well as the comparison of existing approaches. We discuss Reconcile's architecture and give results of running Reconcile on six data sets using four evaluation metrics, showing that Reconcile's performance is comparable to state-of-the-art systems in coreference resolution.

  2. THE NONISOTHERMAL STAGE OF MAGNETIC STAR FORMATION. I. FORMULATION OF THE PROBLEM AND METHOD OF SOLUTION

    International Nuclear Information System (INIS)

    Kunz, Matthew W.; Mouschovias, Telemachos Ch.

    2009-01-01

    We formulate the problem of the formation and subsequent evolution of fragments (or cores) in magnetically supported, self-gravitating molecular clouds in two spatial dimensions. The six-fluid (neutrals, electrons, molecular and atomic ions, positively charged, negatively charged, and neutral grains) physical system is governed by the radiation, nonideal magnetohydrodynamic equations. The magnetic flux is not assumed to be frozen in any of the charged species. Its evolution is determined by a newly derived generalized Ohm's law, which accounts for the contributions of both elastic and inelastic collisions to ambipolar diffusion and Ohmic dissipation. The species abundances are calculated using an extensive chemical-equilibrium network. Both MRN and uniform grain size distributions are considered. The thermal evolution of the protostellar core and its effect on the dynamics are followed by employing the gray flux-limited diffusion approximation. Realistic temperature-dependent grain opacities are used that account for a variety of grain compositions. We have augmented the publicly available Zeus-MP code to take into consideration all these effects and have modified several of its algorithms to improve convergence, accuracy, and efficiency. Results of magnetic star formation simulations that accurately track the evolution of a protostellar fragment from a density ≅10³ cm⁻³ to a density ≅10¹⁵ cm⁻³, while rigorously accounting for both nonideal MHD processes and radiative transfer, are presented in a separate paper.

  3. Solving inverse problems through a smooth formulation of multiple-point geostatistics

    DEFF Research Database (Denmark)

    Melnikova, Yulia

    be inferred, for instance, from a conceptual geological model termed a training image. The main motivation for this study was the challenge posed by history matching, an inverse problem aimed at estimating rock properties from production data. We addressed two main difficulties of the history matching problem...... corresponding inverse problems. However, noise in data, non-linear relationships and sparse observations impede creation of realistic reservoir models. Including complex a priori information on reservoir parameters facilitates the process of obtaining acceptable solutions. Such a priori knowledge may...... strategies including both theoretical motivation and practical aspects of implementation. Finally, it is complemented by six research papers submitted, reviewed and/or published in the period 2010 - 2013.

  4. Policy formulation of public acceptance

    International Nuclear Information System (INIS)

    Kasai, Akihiro

    1978-01-01

    Since 1970, a new policy for public acceptance, based on new considerations on the location of electric power generation, has been formulated and applied. The planning and enforcement conducted by local public organizations for local economic build-up associated with plant location, and the adjustment of the requirements for fishery, are the two main specific characteristics of this new policy. The background of this new public acceptance policy, its history and the actual problems concerning compensation for the location of power generation plants are reviewed. One new proposal, recommended by the Policy and Science Laboratory to MITI in 1977, is explained. This is based on the method of promoting the location of power generation plants by public participation, placing the redevelopment of regional societies at its basis. The problems concerning the industrial structures in farm villages, fishing villages and the areas of commerce and industry should be systematized and explained from the viewpoint of outside impact, the characteristics of local areas and the location problems in this new proposal. Finally, the location process and its effectiveness should be put in order. (Nakai, Y.)

  5. RESOLUTION OF THE PROBLEM OF TREATMENT OF WASTE WATER GENERATED BY CAR WASHES AND TRANSPORT ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Gogina Elena Sergeevna

    2012-12-01

    big cities of Russia. At the same time, the quality of the waste water treated by local water treatment stations fails to meet the present-day standard requirements. Moreover, potable water shall not be used for the purpose of washing transport vehicles. Within the recent 10 years, MGSU has developed a number of research projects aimed at the resolution of this problem. The concept developed by the MGSU specialists is to attain the highest quality of treated waste water generated by car washes and transport enterprises using the most advanced technologies of water treatment rather than to design new water treatment plants. Various methods may be applied for this purpose: restructuring of water treatment facilities, advanced feed, updated regulations governing the operation of water treatment plants.

  6. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    Science.gov (United States)

    2015-04-29

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a

  7. Acute liver injury associated with a newer formulation of the herbal weight loss supplement Hydroxycut.

    Science.gov (United States)

    Araujo, James L; Worman, Howard J

    2015-05-06

    Despite the widespread use of herbal and dietary supplements (HDS), serious cases of hepatotoxicity have been reported. The popular herbal weight loss supplement, Hydroxycut, has previously been implicated in acute liver injury. Since its introduction, Hydroxycut has undergone successive transformations in its formulation; yet, cases of liver injury have remained an ongoing problem. We report a case of a 41-year-old Hispanic man who developed acute hepatocellular liver injury with associated nausea, vomiting, jaundice, fatigue and asterixis attributed to the use of a newer formulation of Hydroxycut, SX-7 Clean Sensory. The patient required hospitalisation and improved with supportive therapy. Despite successive transformations in its formulation, potential liver injury appears to remain an ongoing problem with Hydroxycut. Our case illustrates the importance of obtaining a thorough medication history, including HDS, regardless of new or reformulated product marketing efforts. 2015 BMJ Publishing Group Ltd.

  8. Variational formulation based analysis on growth of yield front in ...

    African Journals Online (AJOL)

    The present study investigates the growth of the elastic-plastic front in rotating solid disks of non-uniform thickness having exponential and parabolic geometry variation. The problem is solved through an extension of a variational method in the elastoplastic regime. The formulation is based on the von Mises yield criterion and linear ...

  9. Cardiovascular studies Hiroshima 1958-1960: Report Number 1. Electrocardiographic findings in relation to the aging process formulation of the problem

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, Shoichi; Yano, Katsuhiko

    1961-03-01

    As a part of ABCC's research program to review the general hypothesis that radiation exposure may accelerate aging processes, a comparative study is now being conducted on the pattern of age changes in electrocardiographic findings for the exposed and controls. In this report, emphasis is placed on the method of formulation of this study for statistical analysis. Two aspects of aging are considered: (1) the frequency of abnormal electrocardiographic findings; (2) age changes found in normal electrocardiographic tracings. The first problem is an analysis of a three-dimensional cross table of prevalence of electrocardiographic abnormalities by age, exposure group and a third factor (for instance, a socioeconomic or physiological factor). In the analysis of the second problem, an aging index was established in order to effectively analyse age changes in the electrocardiographic tracings. After a study of 22 measurements, the QRS axis, Q-T interval, and R_II, T_I, SV_1, RV_1, and RV_5 amplitudes were selected and their combination was derived to attain the highest correlation with age. Preliminary analysis was conducted for males for whom data collection had been completed. In the results so far obtained, no statistically significant differences were noted between the exposure groups. However, further detailed analysis should be conducted on these data together with data for females which are now being compiled. 18 references, 4 figures, 5 tables.

  10. Problem Solving with General Semantics.

    Science.gov (United States)

    Hewson, David

    1996-01-01

    Discusses how to use general semantics formulations to improve problem solving at home or at work--methods come from the areas of artificial intelligence/computer science, engineering, operations research, and psychology. (PA)

  11. Two Fundamental Problems Connected with AI

    OpenAIRE

    Dobrev, Dimiter

    2008-01-01

    This paper is about two fundamental problems in the field of computer science. Solving these two problems is important because it has to do with the creation of Artificial Intelligence. In fact, these two problems are not very famous because they do not have many applications outside the field of Artificial Intelligence. In this paper we will give a solution to neither the first nor the second problem. Our goal will be to formulate these two problems and to give some ideas for...

  12. Handelman's hierarchy for the maximum stable set problem

    NARCIS (Netherlands)

    Laurent, M.; Sun, Z.

    2014-01-01

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a

  13. A new Ellipsoidal Gravimetric-Satellite Altimetry Boundary Value Problem; Case study: High Resolution Geoid of Iran

    Science.gov (United States)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

    A new ellipsoidal gravimetric-satellite altimetry boundary value problem has been developed and successfully tested. This boundary value problem has been constructed for gravity observables of the type (i) gravity potential, (ii) gravity intensity, (iii) deflection of vertical and (iv) satellite altimetry data. The developed boundary value problem enjoys an ellipsoidal nature and as such can take advantage of high precision GPS observations in the set-up of the problem. The highlights of the solution are as follows: application of ellipsoidal harmonic expansion up to degree/order and ellipsoidal centrifugal field for the reduction of global gravity and isostasy effects from the gravity observable at the surface of the Earth; application of the ellipsoidal Newton integral on the equal area map projection surface for the reduction of residual mass effects within a radius of 55 km around the computational point; ellipsoidal harmonic downward continuation of the residual observables from the surface of the Earth down to the surface of the reference ellipsoid using the ellipsoidal height of the observation points derived from GPS; restoration of the removed effects at the application points on the surface of the reference ellipsoid; conversion of the satellite altimetry derived heights of the water bodies into potential; combination of the downward continued gravity information with the potential equivalent of the satellite altimetry derived heights of the water bodies; application of the ellipsoidal Bruns formula for converting the potential values on the surface of the reference ellipsoid into geoidal heights (i.e. ellipsoidal heights of the geoid) with respect to the reference ellipsoid. The computation of the high-resolution geoid of Iran has successfully tested this new methodology!
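    For reference, the Bruns formula invoked in the final step can be stated compactly in LaTeX; the symbols below follow standard physical-geodesy usage and are an added labelling, not quoted from the record:

      \[
      N = \frac{T}{\gamma},
      \]

    where N is the geoidal height above the reference ellipsoid, T the residual (disturbing) potential on the ellipsoid, and γ the normal gravity.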

  14. A Mathematical Optimization Problem in Bioinformatics

    Science.gov (United States)

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
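    A minimal sketch of global sequence alignment by dynamic programming in Python, in the spirit of the optimization problem described; the scoring values (match, mismatch, gap) are illustrative assumptions rather than the article's exact exercise:

      # Global alignment score via dynamic programming (Needleman-Wunsch-style sketch).
      def alignment_score(a, b, match=1, mismatch=-1, gap=-2):
          n, m = len(a), len(b)
          # dp[i][j] = best score aligning a[:i] with b[:j]
          dp = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              dp[i][0] = i * gap
          for j in range(1, m + 1):
              dp[0][j] = j * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
          return dp[n][m]

      print(alignment_score("GATTACA", "GCATGCU"))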

  15. Crystallization Formulation Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Crystallization Formulation Lab fills a critical need in the process development and optimization of current and new explosives and energetic formulations. The...

  16. Multiphase flows of N immiscible incompressible fluids: A reduction-consistent and thermodynamically-consistent formulation and associated algorithm

    Science.gov (United States)

    Dong, S.

    2018-05-01

    We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.

  17. High Resolution Simulations of Future Climate in West Africa Using a Variable-Resolution Atmospheric Model

    Science.gov (United States)

    Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.

    2013-12-01

    In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, the versatility of the model dynamics, numerical formulation and physical parameterizations to function across a range of length scales over the region of interest was also explored. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolutions (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of atmospheric circulation from 2.5 deg resolution NCEP reanalysis at 6-hourly intervals, with SSTs from NCEP reanalysis data used as lower boundary forcing. The CCAM 60 km resolution run was downscaled to 8 km (Schmidt factor 24.75), and the 8 km resolution simulation was then downscaled to 1 km (Schmidt factor 200) over an area approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high resolution model runs was to obtain a deeper understanding of linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.

  18. Thermal processing of EVA encapsulants and effects of formulation additives

    Energy Technology Data Exchange (ETDEWEB)

    Pern, F.J.; Glick, S.H. [National Renewable Energy Lab., Golden, CO (United States)

    1996-05-01

    The authors investigated the in-situ processing temperatures and effects of various formulation additives on the formation of ultraviolet (UV) excitable chromophores, in the thermal lamination and curing of ethylene-vinyl acetate (EVA) encapsulants. A programmable, microprocessor-controlled, double-bag vacuum laminator was used to study two commercial as-formulated EVA films, A9918P and 15295P, and solution-cast films of Elvax™ (EVX) impregnated with various curing agents and antioxidants. The results show that the actual measured temperatures of EVA lagged significantly behind the programmed profiles for the heating elements and were affected by the total thermal mass loaded inside the laminator chamber. The antioxidant Naugard P™, used in the two commercial EVA formulations, greatly enhances the formation of UV-excitable, short chromophores upon curing, whereas other tested antioxidants show little effect. A new curing agent chosen specifically for the EVA formulation modification produces little or no effect on chromophore formation, no bubbling problems in the glass/EVX/glass laminates, and a gel content of ~80% when cured at a programmed 155°C for 4 min. Also demonstrated is the greater discoloring effect with higher concentrations of curing-generated chromophores.

  19. An ayurvedic formulation Sankat Mochan: A potent anthelmintic medicine

    Directory of Open Access Journals (Sweden)

    Khomendra Kumar Sarwa

    2017-01-01

    Full Text Available Aim and Objective: Sankat Mochan is an ayurvedic formulation used in urban and rural areas of India. This polyherbal formulation is used for general stomach problems including abdominal cramping and diarrhea. The present investigation evaluated the anthelmintic activity of an aqueous solution of the ayurvedic medicine Sankat Mochan. Materials and Method: Various concentrations (1%, 5%, and 10%) of the medicine were used for anthelmintic activity on Pheretima posthuma. Piperazine citrate (10 mg/ml) was used as a reference standard and distilled water as a control. Result and Conclusion: The result showed that Sankat Mochan possesses anthelmintic activity more potent than that of piperazine citrate. Thus, Sankat Mochan may be used as a potent anthelmintic agent against helminthiasis.

  20. Applications of an alternative formulation for one-layer real time optimization

    Directory of Open Access Journals (Sweden)

    Schiavon Júnior A.L.

    2000-01-01

    Full Text Available This paper presents two applications of an alternative formulation for a one-layer real-time structure for control and optimization. This new formulation has arisen from the predictive controller QDMC (Quadratic Dynamic Matrix Control), a type of predictive control (Model Predictive Control - MPC). At each sampling time, the values of the process outputs are fed into the optimization-control structure, which supplies the new values of the manipulated variables already considering the best conditions of the process. The optimization variables are both set-point changes and control actions. The future stationary outputs and the future stationary control actions both have a formulation different from that of the conventional one-layer structure, and they are calculated from the inverse gain matrix of the process. This alternative formulation generates a convex problem, which can be solved by less sophisticated optimization algorithms. Linear and nonlinear economic objective functions were considered. The proposed approach was applied to two linear models, one SISO (single-input/single-output) and the other MIMO (multiple-input/multiple-output). The results showed an excellent performance.
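    For orientation, a minimal LaTeX sketch of the generic QDMC-type quadratic objective that such one-layer formulations extend; the prediction and control horizons P and M and the weights Q and R are the usual tuning parameters, and this is generic background rather than the paper's one-layer economic formulation itself:

      \[
      \min_{\Delta u}\; \sum_{j=1}^{P} \big\| \hat{y}(k+j) - y^{sp}(k+j) \big\|_{Q}^{2}
      \;+\; \sum_{j=0}^{M-1} \big\| \Delta u(k+j) \big\|_{R}^{2} .
      \]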

  1. AIRCRAFT CONFLICTS RESOLUTION BY COURSE MANEUVERING

    Directory of Open Access Journals (Sweden)

    В. Харченко

    2011-02-01

    Full Text Available Enhancement of requirements for air traffic efficiency with increasing flight intensity determines the necessity of developing new optimization methods for aircraft conflict resolution. The problem of optimal conflict resolution under Cooperative Air Traffic Management is stated. A method for optimal aircraft conflict resolution by course maneuvering has been developed. Using dynamic programming, the method provides planning of a conflict-free aircraft trajectory of minimum length. The decomposition of the conflict resolution process into phases and stages, and the definition of states, controls and recursive equations for generation of the optimal course control program, are presented. Computer modeling of aircraft conflict resolution by the developed method was carried out.

  2. Institutional statism: an overview of the formulation of the taxi recapitalisation policy

    Directory of Open Access Journals (Sweden)

    D. van Schalkwyk

    2008-07-01

    Full Text Available This article provides an overview of the government’s formulation of the taxi recapitalisation policy which is aimed at regulating the minibus taxi industry. Coupled with a brief social and politico-historical context of the policy, the aim is to highlight the government’s statist conduct in the formulation of the recapitalisation policy. The minibus taxi industry started to fulfil a prominent role in the 1970s as a result of a loophole in the legislation of the former apartheid government. It is currently the most accessible mode of public transport and conveys 65 per cent of the country’s commuters daily. Consequently, the Industry is an imperative force to be considered by the government in its formulation of transport policies. However, the industry is characterised by numerous problems, including a high rate of minibus taxis involved in accidents, unroadworthy vehicles and violence. It is in this context that the government formulated both the original and revised versions of the recapitalisation policy. However, the formulation of the policy has been problematic. The government followed a statist approach during the formulation process when it directed the course of the process according to its interests and without adequate consultation with relevant role players.

  3. Lagrangian formulation of the general relativistic Poynting-Robertson effect

    Science.gov (United States)

    De Falco, Vittorio; Battista, Emmanuele; Falanga, Maurizio

    2018-04-01

    We propose the Lagrangian formulation for describing the motion of a test particle in a general relativistic, stationary, and axially symmetric spacetime. The test particle is also affected by a radiation field, modeled as a coherent flux of photons traveling along the null geodesics of the background spacetime, including the general relativistic Poynting-Robertson effect. The innovative part of this work is to prove the existence of the potential linked to the dissipative action caused by the Poynting-Robertson effect in general relativity through the help of an integrating factor, depending on the energy of the system. Generally, such kinds of inverse problems involving dissipative effects might not admit a Lagrangian formulation; especially, in general relativity, there are no examples of such attempts in the literature so far. We reduce this general relativistic Lagrangian formulation to the classic case in the weak-field limit. This approach facilitates further studies in improving the treatment of the radiation field, and it contains, for example, some implications for a deeper comprehension of the gravitational waves.

  4. Interactions of collimation, sampling and filtering on spect spatial resolution

    International Nuclear Information System (INIS)

    Tsui, B.M.W.; Jaszczak, R.J.

    1984-01-01

    The major factors which affect the spatial resolution of single-photon emission computed tomography (SPECT) include collimation, sampling and filtering. A theoretical formulation is presented to describe the relationship between these factors and their effects on the projection data. Numerical calculations were made using commercially available SPECT systems and imaging parameters. The results provide an important guide for proper selection of the collimator-detector design, the imaging and the reconstruction parameters to avoid unnecessary spatial resolution degradation and aliasing artifacts in the reconstructed image. In addition, the understanding will help in the fair evaluation of different SPECT systems under specific imaging conditions.

  5. Fermion bag solutions to some sign problems in four-fermion field theories

    International Nuclear Information System (INIS)

    Li, Anyi

    2013-01-01

    Lattice four-fermion models containing N flavors of staggered fermions, that are invariant under Z2 and U(1) chiral symmetries, are known to suffer from sign problems when formulated using the auxiliary field approach. Although these problems have been ignored in previous studies, they can be severe. In this talk, we show that the sign problems disappear when the models are formulated in the fermion bag approach, allowing us to solve them rigorously for the first time.

  6. Fermion bag solutions to some sign problems in four-fermion field theories

    Science.gov (United States)

    Li, Anyi

    2013-04-01

    Lattice four-fermion models containing N flavors of staggered fermions, that are invariant under Z2 and U(1) chiral symmetries, are known to suffer from sign problems when formulated using the auxiliary field approach. Although these problems have been ignored in previous studies, they can be severe. In this talk, we show that the sign problems disappear when the models are formulated in the fermion bag approach, allowing us to solve them rigorously for the first time.

  7. A bicriterion Steiner tree problem on graph

    Directory of Open Access Journals (Sweden)

    Vujošević Mirko B.

    2003-01-01

    Full Text Available This paper presents a formulation of the bicriterion Steiner tree problem, which is stated as the task of finding a Steiner tree with maximal capacity and minimal length. It is considered as a lexicographic multicriteria problem. This means that the bottleneck Steiner tree problem is solved first. After that, the next optimization problem is stated as a classical minimum Steiner tree problem under a constraint on the capacity of the tree. The paper also presents some computational experiments with the multicriteria problem.
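    A minimal sketch of the first, bottleneck stage under one common interpretation: threshold the edge capacities and keep only edges at or above the threshold, checking whether all terminals stay connected (networkx is assumed; the graph and terminal set below are illustrative, not the paper's instances):

      # Sketch: largest capacity threshold at which all terminals remain connected (assumed reading of the bottleneck stage).
      import networkx as nx

      def bottleneck_capacity(G, terminals):
          caps = sorted({d["capacity"] for _, _, d in G.edges(data=True)})
          best = None
          for c in caps:  # small graphs: linear scan; binary search also works since feasibility is monotone
              H = nx.Graph((u, v) for u, v, d in G.edges(data=True) if d["capacity"] >= c)
              H.add_nodes_from(terminals)
              root = terminals[0]
              if all(nx.has_path(H, root, t) for t in terminals):
                  best = c
              else:
                  break
          return best

      G = nx.Graph()
      G.add_edge("a", "b", capacity=3)
      G.add_edge("b", "c", capacity=5)
      G.add_edge("a", "c", capacity=1)
      print(bottleneck_capacity(G, ["a", "c"]))  # -> 3 (path a-b-c survives thresholds up to 3)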

  8. Mathematics and computational methods development in U.S. department of energy-sponsored research (nuclear energy research initiative and nuclear engineering education research). 5. Analysis of Angular V-Cycle Multigrid Formulation for Three-Dimensional Discrete Ordinates Shielding Problems

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa; Haghighat, Alireza

    2001-01-01

    We have developed new angular multigrid formulations, including the Simplified Angular Multigrid (SAM), Nested Iteration (NI), and V-Cycle schemes, that are compatible with the parallel environment and the adaptive differencing strategy of the PENTRAN three-dimensional parallel S N code. Using the Fourier analysis method for an infinite, homogeneous medium, we have investigated the effectiveness of the V-Cycle scheme for different problem parameters including scattering ratio, spatial differencing weights, quadrature order, and mesh size. We have further investigated the effectiveness of the new schemes for practical shielding applications such as the Kobayashi benchmark problem and the boiling water reactor core shroud problem. In this paper, we summarize the angular V-Cycle scheme implemented in the PENTRAN code, the Fourier analysis of the V-Cycle scheme, and results of convergence analysis of the V-Cycle scheme using different problem parameters. The theoretical analysis reveals that the V-Cycle scheme is effective for a large range of scattering ratios and is insensitive to mesh size. Besides the theoretical analysis, we have applied the new angular multigrid schemes to shielding problems. In comparison to the standard PCR formulation, combinations of the new angular multigrid schemes and PCR (e.g., SAM+V-Cycle+PCR) have proved to be very effective for scattering ratios in a range of 0.6 to 0.9. (authors)

  9. An eddy current vector potential formulation for estimating hysteresis losses of superconductors with FEM

    International Nuclear Information System (INIS)

    Stenvall, A; Tarhasaari, T

    2010-01-01

    Many people these days employ only commercial finite element method (FEM) software when solving for the hysteresis losses of superconductors. Thus, the knowledge of a modeller is in the capability of using the black boxes of software efficiently. This has led to a relatively superficial examination of different formulations while the discussion stays mainly on the usage of the user interfaces of these programs. Also, if we stay only at the mercy of commercial software producers, we end up having less and less knowledge on the details of solvers. Then, it becomes more and more difficult to conceptually solve new kinds of problem. This may prevent us finding new kinds of method to solve old problems more efficiently, or finding a solution for a problem that was considered almost impossible earlier. In our earlier research, we presented the background of a co-tree gauged T-ψ FEM solver for computing the hysteresis losses of superconductors. In this paper, we examine the feasibility of FEM and eddy current vector potential formulation in the same problem.

  10. mathematical model of thermal explosion, the dual variational formulation of nonlinear problem, alternative functional

    Directory of Open Access Journals (Sweden)

    V. S. Zarubin

    2016-01-01

    in its plane, and in the circular cylinder unlimited in length. An approximate numerical solution of the differential equation that is included in a nonlinear mathematical model of the thermal explosion enables us to obtain quantitative estimates of the combination of determining parameters at which the limit state occurs in areas of not only canonical form. The capability to study the thermal explosion state can be extended in the context of the development of mathematical modeling methods, including methods of model analysis to describe the thermal state of solids. To analyse a mathematical model of the thermal explosion in a homogeneous solid, the paper uses a variational approach based on the dual variational formulation of the appropriate nonlinear stationary problem of heat conduction in such a body. This formulation contains two alternative functionals reaching matching values at their stationary points corresponding to the true temperature distribution. This feature of the functionals allows one not only to obtain an approximate quantitative estimate of the combination of parameters that determine the thermal explosion state, but also to find the greatest possible error in such an estimation.

  11. Strategy Formulation and Implementation for PT.Multigarmen Jaya

    OpenAIRE

    Yoanita, Martha; Wandebori, Harimukti

    2013-01-01

    - The objective of this final project is to formulate and propose a strategy for PT.Multigarmen Jaya (PT.MGJ) to face the tight competition in the garment industry. The analysis begins with an environmental analysis consisting of external and internal analysis. For the external analysis, PEST, Porter's five forces, and competitor analysis were used; for the internal analysis, value chain analysis and resource analysis were used. From this analysis, several problems were discovered, such as competitor ...

  12. Lagrangian finite element formulation for fluid-structure interaction and application

    International Nuclear Information System (INIS)

    Hautfenne, M.H.

    1983-01-01

    The aim of this communication is to present a new finite element software (FLUSTRU) for fluid-structure interaction in a Lagrangian formulation. The stiffness and damping matrices of the fluid are computed from the governing laws of the medium: the fluid is supposed to be viscous and compressible (Stokes' equations). The main problem posed by the Lagrangian formulation of the fluid is the presence of spurious free-vibration modes (zero-energy modes) in the fluid. Those modes are generated by the particular form of the matrix. These spurious modes have been examined and two particular methods to eliminate them have been developed; industrial applications prove the efficiency of the proposed methods. (orig./GL)

  13. The overshoot problem and giant structures

    International Nuclear Information System (INIS)

    Itzhaki, Nissan

    2008-01-01

    Models of small-field inflation often suffer from the overshoot problem. A particularly efficient resolution to the problem was proposed recently in the context of string theory. We show that this resolution predicts the existence of giant spherically symmetric overdense regions with radius of at least 110 Mpc. We argue that if such structures will be found they could offer an experimental window into string theory.

  14. Elementary topology problem textbook

    CERN Document Server

    Viro, O Ya; Netsvetaev, N Yu; Kharlamov, V M

    2008-01-01

    This textbook on elementary topology contains a detailed introduction to general topology and an introduction to algebraic topology via its most classical and elementary segment centered at the notions of fundamental group and covering space. The book is tailored for the reader who is determined to work actively. The proofs of theorems are separated from their formulations and are gathered at the end of each chapter. This makes the book look like a pure problem book and encourages the reader to think through each formulation. A reader who prefers a more traditional style can either find the pr

  15. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis...... problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties, (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.

  16. The nonconforming virtual element method for eigenvalue problems

    Energy Technology Data Exchange (ETDEWEB)

    Gardini, Francesca [Univ. of Pavia (Italy). Dept. of Mathematics; Manzini, Gianmarco [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vacca, Giuseppe [Univ. of Milano-Bicocca, Milan (Italy). Dept. of Mathematics and Applications

    2018-02-05

    We analyse the nonconforming Virtual Element Method (VEM) for the approximation of elliptic eigenvalue problems. The nonconforming VEM allows us to treat the two- and three-dimensional cases in the same formulation. We present two possible formulations of the discrete problem, derived respectively from the nonstabilized and stabilized approximation of the L2-inner product, and we study the convergence properties of the corresponding discrete eigenvalue problems. The proposed schemes provide a correct approximation of the spectrum, and we prove optimal-order error estimates for the eigenfunctions and the usual double order of convergence for the eigenvalues. Finally, we show a large set of numerical tests supporting the theoretical results, including a comparison with the conforming Virtual Element choice.

  17. Integrating Industry in Project Organized Problem Based Learning for Engineering Educations

    DEFF Research Database (Denmark)

    Nielsen, Kirsten M.

    2006-01-01

    This abstract deals with the challenge of establishing engineering student projects in collaboration with industry. Based on empirical results, a set of recommendations for industrial collaboration in project-oriented problem-based learning is formulated...

  18. Sterilization of solutions for parenteral products. Problem analysis

    Directory of Open Access Journals (Sweden)

    Yanelys Montes-González

    2017-09-01

    Full Text Available The solutions for the formulation of parenteral products must be sterile before the aseptic formulation process. For this reason, different methods of sterilization referred to in the literature are analyzed. Thermodynamic criteria that govern sterilization are presented. Furthermore, previous experience with the sterilization of solutions for the formulation of parenteral products in an autoclave is analyzed; this takes a long processing time and only low volumes of solution can be handled. Using jacketed stirred tanks for the sterilization may solve the problem and, therefore, criteria for the design of the latter that allow processing of high volumes of solution for the formulation of parenteral products are shown.

  19. The vehicle routing problem with edge set costs

    DEFF Research Database (Denmark)

    Reinhardt, Line Blander; Jepsen, Mads Kehlet; Pisinger, David

    We consider an important generalization of the vehicle routing problem with time windows in which a fixed cost must be paid for accessing a set of edges. This fixed cost could reflect payment for toll roads, investment in new facilities, the need for certifications and other costly investments....... The certifications and contributions impose a cost for the company while they also give unlimited usage of a set of roads to all vehicles belonging to the company. Different versions for defining the edge sets are discussed and formulated. A MIP-formulation of the problem is presented, and a solution method based...

  20. Simultaneous super-resolution and blind deconvolution

    International Nuclear Information System (INIS)

    Sroubek, F; Flusser, J; Cristobal, G

    2008-01-01

    In many real applications, blur in input low-resolution images is a nuisance, which prevents traditional super-resolution methods from working correctly. This paper presents a unifying approach to the blind deconvolution and superresolution problem of multiple degraded low-resolution frames of the original scene. We introduce a method which assumes no prior information about the shape of degradation blurs and which is properly defined for any rational (fractional) resolution factor. The method minimizes a regularized energy function with respect to the high-resolution image and blurs, where regularization is carried out in both the image and blur domains. The blur regularization is based on a generalized multichannel blind deconvolution constraint. Experiments on real data illustrate robustness and utilization of the method
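    A minimal LaTeX sketch of a joint energy of the kind described, with regularization in both the image and blur domains; the decimation operator D, data terms, and regularizers R and Q are generic placeholders rather than the authors' exact functional:

      \[
      E(u,\{h_k\}) \;=\; \sum_{k} \big\| D\,(h_k * u) - g_k \big\|_2^2 \;+\; \lambda\, R(u) \;+\; \gamma\, Q(\{h_k\}),
      \]

    where u is the high-resolution image, h_k the unknown blurs, g_k the low-resolution frames, and * denotes convolution.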

  1. Robust and Reliable Portfolio Optimization Formulation of a Chance Constrained Problem

    Directory of Open Access Journals (Sweden)

    Sengupta Raghu Nandan

    2017-02-01

    Full Text Available We solve a linear chance constrained portfolio optimization problem using the Robust Optimization (RO) method, wherein financial script/asset loss return distributions are considered as extreme valued. The objective function is a convex combination of the portfolio’s CVaR and expected value of loss return, subject to a set of randomly perturbed chance constraints with specified probability values. The robust deterministic counterpart of the model takes the form of a Second Order Cone Programming (SOCP) problem. Results from extensive simulation runs show the efficacy of our proposed models, as it helps the investor to (i) utilize extensive simulation studies to draw insights into the effect of randomness in the portfolio decision making process, (ii) incorporate different risk appetite scenarios to find the optimal solutions for the financial portfolio allocation problem and (iii) compare the risk and return profiles of the investments made in both deterministic as well as in uncertain and highly volatile financial markets.
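    A minimal sketch of a CVaR-plus-expected-loss portfolio objective of the kind described, written with cvxpy in the Rockafellar-Uryasev form; the scenario loss matrix, confidence level, and mixing weight are illustrative assumptions, and the robust/extreme-value ingredients of the paper are not reproduced here:

      # Sketch: minimize a convex combination of CVaR and expected loss over scenario losses (toy data).
      import numpy as np
      import cvxpy as cp

      rng = np.random.default_rng(0)
      m, n = 500, 4                         # scenarios, assets
      R = rng.normal(0.001, 0.02, (m, n))   # scenario returns (toy data)
      L = -R                                # scenario losses

      alpha, lam = 0.95, 0.5                # CVaR confidence level, mixing weight
      w = cp.Variable(n)                    # portfolio weights
      t = cp.Variable()                     # VaR-like auxiliary variable

      cvar = t + cp.sum(cp.pos(L @ w - t)) / ((1 - alpha) * m)
      expected_loss = cp.sum(L @ w) / m
      objective = cp.Minimize(lam * cvar + (1 - lam) * expected_loss)
      constraints = [cp.sum(w) == 1, w >= 0]

      cp.Problem(objective, constraints).solve()
      print(w.value)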

  2. The Aalborg Model and The Problem

    DEFF Research Database (Denmark)

    Qvist, Palle

    Knowing how a problem is defined has important implications for the possibility of identifying and formulating the problem, the starting point of the learning process in the Aalborg Model. For certification it has been suggested that a problem grows out of students’ wondering within differ...... – a wondering – that something is different from what is expected, something novel and unexpected or inexplicable; astonishment mingled with perplexity or bewildered curiosity?...

  3. On the formulation and numerical simulation of distributed-order fractional optimal control problems

    Science.gov (United States)

    Zaky, M. A.; Machado, J. A. Tenreiro

    2017-11-01

    In a fractional optimal control problem, the integer order derivative is replaced by a fractional order derivative. The fractional derivative embeds implicitly the time delays in an optimal control process. The order of the fractional derivative can be distributed over the unit interval, to capture delays of distinct sources. The purpose of this paper is twofold. Firstly, we derive the generalized necessary conditions for optimal control problems with dynamics described by ordinary distributed-order fractional differential equations (DFDEs). Secondly, we propose an efficient numerical scheme for solving an unconstrained convex distributed optimal control problem governed by the DFDE. We convert the problem under consideration into an optimal control problem governed by a system of DFDEs, using the pseudo-spectral method and the Jacobi-Gauss-Lobatto (J-G-L) integration formula. Next, we present the numerical solutions for a class of optimal control problems of systems governed by DFDEs. The convergence of the proposed method is graphically analyzed showing that the proposed scheme is a good tool for the simulation of distributed control problems governed by DFDEs.

  4. The inverse Fourier problem in the case of poor resolution in one given direction: the maximum-entropy solution

    International Nuclear Information System (INIS)

    Papoular, R.J.; Zheludev, A.; Ressouche, E.; Schweizer, J.

    1995-01-01

    When density distributions in crystals are reconstructed from 3D diffraction data, a problem sometimes occurs when the spatial resolution in one given direction is very small compared to that in perpendicular directions. In this case, a 2D projected density is usually reconstructed. For this task, the conventional Fourier inversion method only makes use of those structure factors measured in the projection plane. All the other structure factors contribute zero to the reconstruction of a projected density. On the contrary, the maximum-entropy method uses all the 3D data, to yield 3D-enhanced 2D projected density maps. It is even possible to reconstruct a projection in the extreme case when not one structure factor in the plane of projection is known. In the case of poor resolution along one given direction, a Fourier inversion reconstruction gives very low quality 3D densities 'smeared' in the third dimension. The application of the maximum-entropy procedure reduces the smearing significantly and reasonably well resolved projections along most directions can now be obtained from the MaxEnt 3D density. To illustrate these two ideas, particular examples based on real polarized neutron diffraction data sets are presented. (orig.)

  5. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer

    2017-12-25

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images independently. However, learning multidimensional dictionaries and sparse codes for the reconstruction of multi-dimensional data is very important, as it examines correlations among all the data jointly. This provides more capacity for the learned dictionaries to better reconstruct data. In this paper, we propose a generic and novel formulation for the CSC problem that can handle an arbitrary order tensor of data. Backed with experimental results, our proposed formulation can not only tackle applications that are not possible with standard CSC solvers, including colored video reconstruction (5D tensors), but it also performs favorably in reconstruction with much fewer parameters as compared to naive extensions of standard CSC to multiple features/channels.
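    For reference, a minimal LaTeX sketch of the standard (2D, single-feature) convolutional sparse coding objective that the paper generalizes to higher-order tensors; the symbols are the conventional ones and this is background, not the paper's tensor formulation itself:

      \[
      \min_{\{d_k\},\{z_k\}} \; \frac{1}{2}\Big\| x - \sum_{k=1}^{K} d_k * z_k \Big\|_2^2
      \;+\; \lambda \sum_{k=1}^{K} \| z_k \|_1
      \quad \text{s.t.} \quad \|d_k\|_2 \le 1 \;\; \forall k,
      \]

    where * denotes convolution, d_k are the dictionary filters and z_k the sparse coefficient maps.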

  6. Contribution to the resolution of algebraic differential equations. Application to electronic circuits and nuclear reactors

    International Nuclear Information System (INIS)

    Monsef, Youssef.

    1977-05-01

    This note deals with the resolution of large algebraic differential systems involved in the physical sciences, with special reference to electronics and nuclear physics. The theoretical aspect of the algorithms established and developed for this purpose is discussed in detail. A decomposition algorithm based on the graph theory is developed in detail and the regressive analysis of the error involved in the decomposition is carried out. The specific application of these algorithms on the analyses of non-linear electronic circuits and to the integration of algebraic differential equations simulating the general operation of nuclear reactors coupled to heat exchangers is discussed in detail. To conclude, it is shown that the development of efficient digital resolution techniques dealing with the elements in order is sub-optimal for large systems and calls for the revision of conventional formulation methods. Thus for a high-order physical system, the larger, the number of auxiliary unknowns introduced, the easier the formulation and resolution, owing to the elimination of any form of complex matricial calculation such as those given by the state variables method [fr

  7. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    Science.gov (United States)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU. In contrast with the results of the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are shown to be between 3.8 and 23.1% and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  8. Understanding conflict-resolution taskload: Implementing advisory conflict-detection and resolution algorithms in an airspace

    Science.gov (United States)

    Vela, Adan Ernesto

    2011-12-01

The objective of the research is to understand how the formulation, capabilities, and implementation of conflict-detection and resolution tools affect the controller taskload (system demands) associated with the conflict-resolution process, and implicitly the controller workload (physical and psychological demands). Furthermore, this thesis seeks to establish best practices for the design of future conflict-detection and resolution systems. To generalize conclusions on conflict-resolution taskload and best design practices for conflict-detection and resolution systems, this thesis focuses on abstracting and parameterizing the behaviors and capabilities of the advisory tools. Ideally, this abstraction of advisory decision-support tools serves as an alternative to exhaustively designing tools, implementing them in high-fidelity simulations, and analyzing their conflict-resolution taskload. Such an approach of simulating specific conflict-detection and resolution systems limits the type of conclusions that can be drawn concerning the design of more generic algorithms. In the process of understanding conflict-detection and resolution systems, evidence in the thesis reveals that the most effective approach to reducing conflict-resolution taskload is to improve conflict-detection systems. Furthermore, studies in this thesis indicate that there is significant flexibility in the design of conflict-resolution algorithms.

  9. Design of analog networks in the control theory formulation. Part 2: Numerical results

    OpenAIRE

    Zemliak, A. M.

    2005-01-01

    The paper presents numerical results of design of nonlinear electronic networks based on the problem formulation in terms of the control theory. Several examples illustrate the prospects of the approach suggested in the first part of the work.

  10. Nodal deterministic simulation for problems of neutron shielding in multigroup formulation

    International Nuclear Information System (INIS)

    Baptista, Josue Costa; Heringer, Juan Diego dos Santos; Santos, Luiz Fernando Trindade; Alves Filho, Hermes

    2013-01-01

In this paper, we propose the use of some computational tools, with the implementation of the SGF (Spectral Green's Function) numerical methods, making use of a deterministic model of neutral particle transport in the study and analysis of a simplified nuclear engineering problem, known in the literature as the neutron shielding problem, considering a model with two energy groups. These simulations are performed on the MatLab platform, version 7.0, and are presented and developed with the help of a computer simulator, providing a user-friendly computer application.

  11. Some problems of relativistic electrodynamics

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1991-01-01

Some problems of electrodynamics are considered from the point of view of the radar formulation of relativity theory. This formulation is based on light, or retarded, distances; the increase of the longitudinal sizes of moving objects is its consequence (the 'elongation formula'). Based on the Lienard-Wiechert potentials, it is shown that in terms of retarded distances the equipotential surfaces take the form of ellipsoids of rotation, stretched in the direction of the electric charge's motion. The difficulty connected with the appearance of charge in a moving (neutral) current-carrying conductor is overcome. 23 refs.; 4 figs

  12. Formulation of Ionic-Liquid Electrolyte To Expand the Voltage Window of Supercapacitors

    Energy Technology Data Exchange (ETDEWEB)

    Van Aken, Katherine L.; Beidaghi, Majid; Gogotsi, Yury

    2015-03-18

    An effective method to expand the operating potential window (OPW) of electrochemical capacitors based on formulating the ionic-liquid (IL) electrolytes is reported. Using model electrochemical cells based on two identical onion-like carbon (OLC) electrodes and two different IL electrolytes and their mixtures, it was shown that the asymmetric behavior of the electrolyte cation and anion toward the two electrodes limits the OPW of the cell and therefore its energy density. Also, a general solution to this problem is proposed by formulating the IL electrolyte mixtures to balance the capacitance of electrodes in a symmetric supercapacitor.

  13. Mind over matter? I: philosophical aspects of the mind-brain problem.

    Science.gov (United States)

    Schimmel, P

    2001-08-01

    To conceptualize the essence of the mind-body or mind-brain problem as one of metaphysics rather than science, and to propose a formulation of the problem in the context of current scientific knowledge and its limitations. The background and conceptual parameters of the mind-body problem are delineated, and the limitations of brain research in formulating a solution identified. The problem is reformulated and stated in terms of two propositions. These constitute a 'double aspect theory'. The problem appears to arise as a consequence of the conceptual limitations of the human mind, and hence remains essentially a metaphysical one. A 'double aspect theory' recognizes the essential unity of mind and brain, while remaining consistent with the dualism inherent in human experience.

  14. MO-AB-BRA-01: A Global Level Set Based Formulation for Volumetric Modulated Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D; Lyu, Q; Ruan, D; O’Connor, D; Low, D; Sheng, K [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: The current clinical Volumetric Modulated Arc Therapy (VMAT) optimization is formulated as a non-convex problem and various greedy heuristics have been employed for an empirical solution, jeopardizing plan consistency and quality. We introduce a novel global direct aperture optimization method for VMAT to overcome these limitations. Methods: The global VMAT (gVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term and an anisotropic total variation term. A level set function was used to describe the aperture shapes and adjacent aperture shapes were penalized to control MLC motion range. An alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc gVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme (GBM), lung (LNG), and 2 head and neck cases—one with 3 PTVs (H&N3PTV) and one with 4 PTVs (H&N4PTV). The plans were compared against the clinical VMAT (cVMAT) plans utilizing two overlapping coplanar arcs. Results: The optimization of the gVMAT plans had converged within 600 iterations. gVMAT reduced the average max and mean OAR dose by 6.59% and 7.45% of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages (D95, D98, D99) were within 0.25% of the prescription dose. By globally considering all beams, the gVMAT optimizer allowed some beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel VMAT approach allows for the search of an optimal plan in the global solution space and generates deliverable apertures directly. The single arc VMAT approach fully utilizes the digital linacs’ capability in dose rate and gantry rotation speed modulation. Varian Medical Systems, NIH grant R01CA188300, NIH grant R43CA183390.

  15. Solution for Nonlinear Three-Dimensional Intercept Problem with Minimum Energy

    Directory of Open Access Journals (Sweden)

    Henzeh Leeghim

    2013-01-01

a minimum-energy application, which then generates both the desired initial interceptor velocity and the TOF for the minimum-energy transfer. The optimization problem is formulated by using the classical Lagrangian f and g coefficients, which map initial position and velocity vectors to future times, and a universal time variable x. A Newton-Raphson iteration algorithm is introduced for iteratively solving the problem. A generalized problem formulation is introduced for minimizing the TOF as part of the optimization problem. Several examples are presented, and the results are compared with Hohmann transfer solution approaches. The resulting minimum-energy intercept solution algorithm is expected to be broadly useful as a starting iterate for applications spanning targeting, rendezvous, interplanetary trajectory design, and so on.
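
    The record above hinges on a Newton-Raphson iteration applied to a time-of-flight equation in a universal variable. As a minimal illustration of that iteration structure only, the sketch below applies Newton-Raphson to the classical elliptic Kepler equation M = E - e sin E; the tolerance, starting guess and function name are hypothetical, and the universal-variable equation of the paper would replace the residual and derivative.

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Newton-Raphson solution of Kepler's equation M = E - e*sin(E).

    Illustrates the iteration structure only; the thesis works with a
    universal-variable time-of-flight equation, handled the same way.
    """
    E = M if e < 0.8 else math.pi       # standard initial guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M     # residual
        fp = 1.0 - e * math.cos(E)      # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Newton-Raphson did not converge")

# usage: eccentric anomaly for mean anomaly 1.2 rad on an e = 0.3 orbit
print(solve_kepler(1.2, 0.3))
```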

  16. Thermal stability of formulations of PVC irradiated with γ rays of 60Co

    International Nuclear Information System (INIS)

    Martinez P, M.E.; Carrasco A, H.; Castaneda F, A.; Benavides C, R.; Garcia R, S.P.

    2004-01-01

The wire and cable industry frequently uses cable insulation based on PVC formulations, which have usually been stabilized with heavy-metal compounds such as lead, which is toxic. To address this problem, the modifications induced by radiation in poly(vinyl chloride) (PVC) formulations have been studied jointly since 2002 by the National Institute of Nuclear Research (ININ) and the Center for Research in Applied Chemistry (CIQA). In these formulations, prepared with a cross-linking agent, an industrial-grade plasticizer, filler and non-toxic industrial-grade calcium and zinc stearate stabilizers, the aim is to replace the lead stabilizer. For this purpose, PVC test specimens were irradiated with cobalt-60 gamma radiation at three different doses in air and argon atmospheres. Their thermal stability was then determined at different heating times, and the Young's modulus was measured by thermomechanical analysis. These results, together with those from other characterization techniques, suggest that the proposed irradiated formulation can substitute the lead-stabilized one. (Author)

  17. A new formulation of the equivalent thermal in optimization of hydrothermal systems

    Directory of Open Access Journals (Sweden)

    Bayón L.

    2002-01-01

Full Text Available In this paper, we revise the classical formulation of the problem, stripping it of the concepts that are superfluous from the mathematical point of view. We observe that a number of power stations can be substituted by a single one that behaves equivalently to the entire set. Proceeding in this way, we obtain a variational formulation in its purest sense (without restrictions). This formulation allows us to employ the theory of the calculus of variations to the highest degree. We then calculate the equivalent minimizer in the case where the cost functions are second-order polynomials. We prove that the equivalent minimizer is a second-order polynomial with piecewise constant coefficients. Moreover, it belongs to the class C^1. Finally, we present various examples prompted by real systems and implement the proposed algorithms in Mathematica.
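
    A minimal numerical sketch of the equivalence idea described above, under the simplifying assumption that the stations have strictly convex second-order polynomial costs and no generation limits: dispatching by equal incremental cost and fitting the resulting total cost shows that the equivalent station again has a second-order polynomial cost. The coefficients and load range below are invented for illustration; the piecewise-constant coefficients proved in the paper arise when limits are included.

```python
import numpy as np

# cost coefficients F_i(P) = a_i + b_i*P + c_i*P**2 for three stations (illustrative)
a = np.array([100.0, 120.0, 80.0])
b = np.array([2.0, 1.5, 2.5])
c = np.array([0.010, 0.015, 0.020])

def dispatch(P_total):
    """Unconstrained economic dispatch by equal incremental cost."""
    lam = (P_total + np.sum(b / (2 * c))) / np.sum(1.0 / (2 * c))
    P = (lam - b) / (2 * c)
    return P, np.sum(a + b * P + c * P**2)

# sample the equivalent cost curve and fit a quadratic to it
loads = np.linspace(100.0, 600.0, 50)
costs = np.array([dispatch(P)[1] for P in loads])
coeffs = np.polyfit(loads, costs, 2)          # highest power first
residual = np.max(np.abs(np.polyval(coeffs, loads) - costs))

print("equivalent cost F_eq(P) ~ %.4e*P^2 + %.4f*P + %.2f" % tuple(coeffs))
print("max fit residual:", residual)          # ~1e-10: the equivalent is again quadratic
```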

  18. Cooperative conflict detection and resolution of civil unmanned aerial vehicles in metropolis

    Directory of Open Access Journals (Sweden)

    Jian Yang

    2016-06-01

Full Text Available Unmanned air vehicles have recently attracted the attention of many researchers because of their potential civil applications. A systematic integration of unmanned air vehicles in non-segregated airspace is required that allows safe operation of unmanned air vehicles along with other manned aircraft. One of the critical issues is conflict detection and resolution. This article proposes to solve the conflict detection and resolution problem for unmanned air vehicles in metropolitan airspace. First, the structure of metropolitan airspace in the near future is studied, and the airspace conflict problem between different unmanned air vehicles is analyzed using velocity obstacle theory. Second, a conflict detection and resolution framework for the metropolis is proposed, and factors that influence conflict-free solutions are discussed. Third, the multi-unmanned air vehicle conflict resolution problem is formalized as a nonlinear optimization problem with the aim of minimizing the overall conflict resolution consumption. The safe separation constraint is further discussed to improve the computational efficiency. When the speeds of the conflict-involved unmanned air vehicles are equal, the nonlinear safe separation constraint is transformed into linear constraints, and the problem is solved by mixed integer convex programming. When unmanned air vehicles have unequal speeds, we propose to solve the nonlinear optimization problem by a stochastic parallel gradient descent–based method. Our approaches are demonstrated in computational examples.
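
    The framework above rests on pairwise conflict detection between vehicles with known positions and velocities. The sketch below is not the velocity-obstacle or mixed-integer formulation of the article, only a closest-point-of-approach check of the kind such frameworks typically start from; the separation minimum and look-ahead horizon are assumed values.

```python
import numpy as np

def conflict(p1, v1, p2, v2, sep=50.0, horizon=120.0):
    """Pairwise 2D conflict check from current positions/velocities.

    Returns (in_conflict, t_cpa, d_cpa): whether the predicted minimum
    separation within the look-ahead horizon falls below `sep` metres.
    """
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    vv = dv @ dv
    t_cpa = 0.0 if vv < 1e-12 else -(dp @ dv) / vv       # time of closest approach
    t_cpa = min(max(t_cpa, 0.0), horizon)                # clamp to [0, horizon]
    d_cpa = np.linalg.norm(dp + dv * t_cpa)
    return d_cpa < sep, t_cpa, d_cpa

# usage: two UAVs on crossing tracks
print(conflict(p1=[0, 0], v1=[10, 0], p2=[600, -300], v2=[0, 10]))
```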

  19. A new formulation for the problem of fuel cell homogenization

    International Nuclear Information System (INIS)

    Chao, Y.-A.; Martinez, A.S.

    1982-01-01

    A new homogenization method for reactor cells is described. This new method consists in eliminating the NR approximation for the fuel resonance and the Wigner approximation for the resonance escape probability; the background cross section is then redefined and the problem studied is reanalyzed. (E.G.) [pt

  20. Formulations and exact algorithms for the vehicle routing problem with time windows

    DEFF Research Database (Denmark)

    Kallehauge, Brian

    2008-01-01

    In this paper we review the exact algorithms proposed in the last three decades for the solution of the vehicle routing problem with time windows (VRPTW). The exact algorithms for the VRPTW are in many aspects inherited from work on the traveling salesman problem (TSP). In recognition of this fact...

  1. Nonclassical pseudospectral method for the solution of brachistochrone problem

    International Nuclear Information System (INIS)

    Alipanah, A.; Razzaghi, M.; Dehghan, M.

    2007-01-01

In this paper, a nonclassical pseudospectral method is proposed for solving the classic brachistochrone problem. The brachistochrone problem is first formulated as a nonlinear optimal control problem. Properties of the nonclassical pseudospectral method are presented; these properties are then utilized to reduce the computation of the brachistochrone problem to the solution of algebraic equations. Using this method, the solution to the brachistochrone problem is compared with those in the literature
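
    For readers who want to see the brachistochrone posed as a finite-dimensional optimization, the sketch below is a plain direct-transcription stand-in, not the nonclassical pseudospectral method of the paper: the path is discretized on a fixed horizontal grid, the descent time is evaluated from energy conservation, and a generic optimizer adjusts the interior heights. The endpoints, grid size and optimizer choice are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

g = 9.81
x = np.linspace(0.0, 1.0, 30)       # horizontal grid from (0, 0) to (1, y_end)
y_end = 0.65                        # depth of the end point (y measured downward)

def descent_time(y_inner):
    y = np.concatenate(([0.0], y_inner, [y_end]))
    v = np.sqrt(2 * g * np.maximum(y, 0.0))      # speed from energy conservation
    ds = np.hypot(np.diff(x), np.diff(y))        # segment lengths
    v_avg = 0.5 * (v[:-1] + v[1:])               # average speed per segment
    return np.sum(ds / np.maximum(v_avg, 1e-9))

y0 = np.linspace(0.0, y_end, len(x))[1:-1]       # straight line as initial guess
res = minimize(descent_time, y0, method="L-BFGS-B",
               bounds=[(0.0, 2.0)] * len(y0))
print("descent time of optimized path: %.4f s" % res.fun)
print("straight-line descent time:     %.4f s" % descent_time(y0))
```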

  2. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Yidong [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, Richard Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is still being developed; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite of problems is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and suggest best practices when using BIGHORN.

  3. Solving inversion problems with neural networks

    Science.gov (United States)

    Kamgar-Parsi, Behzad; Gualtieri, J. A.

    1990-01-01

    A class of inverse problems in remote sensing can be characterized by Q = F(x), where F is a nonlinear and noninvertible (or hard to invert) operator, and the objective is to infer the unknowns, x, from the observed quantities, Q. Since the number of observations is usually greater than the number of unknowns, these problems are formulated as optimization problems, which can be solved by a variety of techniques. The feasibility of neural networks for solving such problems is presently investigated. As an example, the problem of finding the atmospheric ozone profile from measured ultraviolet radiances is studied.
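
    As a sketch of the optimization formulation described above, not of the neural-network solution studied in the paper, the following toy retrieval defines a nonlinear forward operator F, generates more noisy observations Q than unknowns x, and recovers x by nonlinear least squares; the operator and parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)                    # observation grid

def forward(x):
    """Toy nonlinear, hard-to-invert forward operator F(x) -> Q."""
    amp, rate, phase = x
    return amp * np.exp(-rate * t) * np.cos(6.0 * t + phase)

x_true = np.array([2.0, 1.5, 0.3])
Q_obs = forward(x_true) + 0.02 * rng.standard_normal(t.size)   # noisy observations

# more observations (40) than unknowns (3): solve min ||Q_obs - F(x)||^2
sol = least_squares(lambda x: forward(x) - Q_obs, x0=[1.0, 1.0, 0.0])
print("retrieved parameters:", sol.x)            # close to x_true
```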

  4. Formulation of coarse mesh finite difference to calculate mathematical adjoint flux

    International Nuclear Information System (INIS)

    Pereira, Valmir; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    The objective of this work is the obtention of the mathematical adjoint flux, having as its support the nodal expansion method (NEM) for coarse mesh problems. Since there are difficulties to evaluate this flux by using NEM. directly, a coarse mesh finite difference program was developed to obtain this adjoint flux. The coarse mesh finite difference formulation (DFMG) adopted uses results of the direct calculation (node average flux and node face averaged currents) obtained by NEM. These quantities (flux and currents) are used to obtain the correction factors which modify the classical finite differences formulation . Since the DFMG formulation is also capable of calculating the direct flux it was also tested to obtain this flux and it was verified that it was able to reproduce with good accuracy both the flux and the currents obtained via NEM. In this way, only matrix transposition is needed to calculate the mathematical adjoint flux. (author)

  5. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this function, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, which reflects both the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences carry different redundant information and some particular prior information, so it is possible to restore a super-resolution image faithfully and effectively. The sampling method is used to derive the reconstruction principle of super-resolution, which establishes the possible degree of resolution improvement in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and the variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements; it models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. The results of reconstruction from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.
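
    A minimal sketch of how a set of sub-pixel-shifted low-resolution frames can be fused onto a finer grid, assuming the shifts are known exactly: this is a simple shift-and-add reconstruction, not the learning-based or variational Bayesian algorithms used in the paper, and the frame sizes, shifts and upsampling factor are illustrative.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Fuse low-resolution frames with known sub-pixel shifts onto a
    `factor`-times finer grid by averaging the samples that land in each
    high-resolution cell (a minimal shift-and-add reconstruction)."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        rows = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        cols = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        acc[rows, cols] += frame
        cnt[rows, cols] += 1
    return acc / np.maximum(cnt, 1)

# usage: four 32x32 frames with half-pixel shifts fused into one 64x64 image
rng = np.random.default_rng(1)
lr_frames = [rng.random((32, 32)) for _ in range(4)]
sub_shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hr = shift_and_add(lr_frames, sub_shifts, factor=2)
print(hr.shape)   # (64, 64)
```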

  6. A Fourth Order Formulation of DDM for Crack Analysis in Brittle Solids

    Directory of Open Access Journals (Sweden)

    Abolfazl Abdollahipour

    2017-01-01

    Full Text Available A fourth order formulation of the displacement discontinuity method (DDM is proposed for the crack analysis of brittle solids such as rocks, glasses, concretes and ceramics. A fourth order boundary collocation scheme is used for the discretization of each boundary element (the source element. In this approach, the source boundary element is divided into five sub-elements each recognized by a central node where the displacement discontinuity components are to be numerically evaluated. Three different formulating procedures are presented and their corresponding discretization schemes are discussed. A new discretization scheme is also proposed to use the fourth order formulation for the special crack tip elements which may be used to increase the accuracy of the stress and displacement fields near the crack ends. Therefore, these new crack tips discretizing schemes are also improved by using the proposed fourth order displacement discontinuity formulation and the corresponding shape functions for a bunch of five special crack tip elements. Some example problems in brittle fracture mechanics are solved for estimating the Mode I and Mode II stress intensity factors near the crack ends. These semi-analytical results are compared to those cited in the fracture mechanics literature whereby the high accuracy of the fourth order DDM formulation is demonstrated.

  7. Resolution of the neutron transport equation by massively parallel computer in the Cronos code

    International Nuclear Information System (INIS)

    Zardini, D.M.

    1996-01-01

The feasibility of parallel resolution of neutron transport problems by the SN module of the CRONOS code is studied here. In this report we give first results on the parallel resolution of the transport equation by decomposition of the angular variable. Problems raised by parallel resolution through decomposition of the spatial variable, and by memory storage limits, are also explained. (author)

  8. History Matching Through a Smooth Formulation of Multiple-Point Statistics

    DEFF Research Database (Denmark)

    Melnikova, Yulia; Zunino, Andrea; Lange, Katrine

    2014-01-01

We propose a smooth formulation of multiple-point statistics that enables us to solve inverse problems using gradient-based optimization techniques. We introduce a differentiable function that quantifies the mismatch between multiple-point statistics of a training image and of a given model. We show that, by minimizing this function, any continuous image can be gradually transformed into an image that honors the multiple-point statistics of the discrete training image. The solution to an inverse problem is then found by minimizing the sum of two mismatches: the mismatch with data and the mismatch with multiple-point statistics. As a result, in the framework of the Bayesian approach, such a solution belongs to a high posterior region. The methodology, while applicable to any inverse problem with a training-image-based prior, is especially beneficial for problems which require expensive...

  9. Anticancer Potential of Nutraceutical Formulations in MNU-induced Mammary Cancer in Sprague Dawley Rats.

    Science.gov (United States)

    Pitchaiah, Gummalla; Akula, Annapurna; Chandi, Vishala

    2017-01-01

Nutraceuticals help in combating some of the major health problems of the century, including cancer, and 'nutraceutical formulations' have led to a new era of medicine and health. The aim was to develop different nutraceutical formulations and to assess their anticancer potential in N-methyl-N-nitrosourea (MNU)-induced mammary cancer in Sprague Dawley rats. Different nutraceutical formulations were prepared using fine powders of amla, apple, garlic, onion, papaya, turmeric, and wheat grass, with and without cow urine distillate. Total phenolic content, acute oral toxicity, and microbial load of the nutraceutical formulations were assessed. The anticancer potential of the nutraceutical formulations was evaluated against MNU-induced mammary cancer in female Sprague Dawley rats. Improvement in total phenolic content was observed after the self-fortification process. Toxicity studies showed that the nutraceutical formulations were safe to use in animals. Microbial load was within the limits. Longer tumor-free days, lower tumor incidence, and lower tumor multiplicity and tumor burden were observed for the nutraceutical formulation-treated groups. This suggests that the combination of whole food-based nutraceuticals acted synergistically in the prevention of mammary cancer. Further, the process of fortification enhanced the anticancer potential of the nutraceutical formulations. Abbreviations used: MNU: N-methyl-N-nitrosourea, CAM: Complementary and Alternative Medicine, NF: Nutraceutical Formulation

  10. Projecting India's energy requirements for policy formulation

    International Nuclear Information System (INIS)

    Parikh, Kirit S.; Karandikar, Vivek; Rana, Ashish; Dani, Prasanna

    2009-01-01

Energy policy has to have a long-term perspective. To formulate it, one needs to know the contours of energy requirements and options. Different approaches have been followed in the literature, each with its own problems. A top-down econometric approach provides little guidance on policies, while a bottom-up approach requires too much knowledge and too many assumptions. Using a top-down econometric approach for aggregate overall benchmarking and a detailed activity analysis model, the Integrated Energy System Model, for a few large sectors provides a unique combination for easing the difficulties of policy formulation. The model is described in this paper. Eleven alternative scenarios are built, designed to map out the extreme points of feasible options. Results show that even after employing all domestic energy resources to their full potential, there will be a continued rise in fossil fuel use, continued importance of coal, and a continued rise in import dependence. Energy efficiency emerges as a major option, with the potential to reduce energy requirements by as much as 17%. Scenario results point towards pushing for the development of alternative sources. (author)

  11. Regression filter for signal resolution

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-01-01

The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. a gamma ray spectrum or a Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by implementing the procedure in a computer programme, which was used to unfold test spectra obtained by mixing spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
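
    A stand-in for the unfolding procedure described above: instead of the stepwise regression of the report, the sketch builds a small library of synthetic constituent spectra, mixes them with noise, and recovers the weights by non-negative least squares. Peak positions, widths and weights are invented test data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
channels = np.arange(256)

def peak(center, width):
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

# library of pure-constituent spectra (columns of the design matrix)
library = np.column_stack([peak(60, 8), peak(120, 12), peak(200, 6)])

# measured mixture = weighted sum of constituents + noise
true_weights = np.array([1.0, 0.4, 2.5])
measured = library @ true_weights + 0.01 * rng.standard_normal(channels.size)

weights, residual = nnls(library, measured)   # non-negative least squares
print("estimated weights:", weights)          # close to [1.0, 0.4, 2.5]
```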

  12. Microcanonical formulation of quantum field theories

    International Nuclear Information System (INIS)

    Iwazaki, A.

    1984-03-01

A microcanonical formulation of Euclidean quantum field theories is presented. In this formulation, correlation functions are given by a microcanonical ensemble average of fields. Furthermore, the perturbative equivalence of this formulation and the standard functional formulation is proved, and the equipartition law is derived in our formulation. (author)

  13. High resolution, large deformation 3D traction force microscopy.

    Directory of Open Access Journals (Sweden)

    Jennet Toyjanova

Full Text Available Traction Force Microscopy (TFM) is a powerful approach for quantifying cell-material interactions that over the last two decades has contributed significantly to our understanding of cellular mechanosensing and mechanotransduction. In addition, recent advances in three-dimensional (3D) imaging and traction force analysis (3D TFM) have highlighted the significance of the third dimension in influencing various cellular processes. Yet irrespective of dimensionality, almost all TFM approaches have relied on a linear elastic theory framework to calculate cell surface tractions. Here we present a new high-resolution 3D TFM algorithm which utilizes a large deformation formulation to quantify cellular displacement fields with unprecedented resolution. The results feature some of the first experimental evidence that cells are indeed capable of exerting large material deformations, which require the formulation of a new theoretical TFM framework to accurately calculate the traction forces. Based on our previous 3D TFM technique, we reformulate our approach to accurately account for large material deformation and quantitatively contrast and compare both linear and large deformation frameworks as a function of the applied cell deformation. Particular attention is paid to estimating the accuracy penalty associated with utilizing a traditional linear elastic approach in the presence of large deformation gradients.
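
    A worked numerical illustration of the point made above about linear versus large-deformation kinematics, not the TFM algorithm itself: for a given displacement gradient, the infinitesimal strain and the Green-Lagrange strain differ by the quadratic term 0.5 * grad(u)^T grad(u), which is negligible at a few percent shear but not at the large deformations the paper reports. The shear magnitudes are illustrative.

```python
import numpy as np

def strains(grad_u):
    """Small-strain tensor vs. Green-Lagrange strain for a displacement gradient."""
    grad_u = np.asarray(grad_u, float)
    eps_lin = 0.5 * (grad_u + grad_u.T)          # linear (infinitesimal) strain
    F = np.eye(3) + grad_u                       # deformation gradient
    E_gl = 0.5 * (F.T @ F - np.eye(3))           # Green-Lagrange strain
    return eps_lin, E_gl

# 5% simple shear: the two measures agree to first order
small = np.zeros((3, 3)); small[0, 1] = 0.05
# 50% simple shear: the quadratic term is no longer negligible
large = np.zeros((3, 3)); large[0, 1] = 0.50

for name, g in [("5% shear", small), ("50% shear", large)]:
    eps_lin, E_gl = strains(g)
    print(name, "max |E_GL - eps_lin| =", np.max(np.abs(E_gl - eps_lin)))
```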

  14. Polymorphic Uncertain Linear Programming for Generalized Production Planning Problems

    Directory of Open Access Journals (Sweden)

    Xinbo Zhang

    2014-01-01

Full Text Available A polymorphic uncertain linear programming (PULP) model is constructed to formulate a class of generalized production planning problems. In accordance with the practical environment, some factors such as the consumption of raw material, the limitation of resources and the demand for products are incorporated into the model as parameters of intervals and fuzzy subsets, respectively. Based on the theory of fuzzy interval programming and the modified possibility degree for the order of interval numbers, a deterministic equivalent formulation for this model is derived such that a robust solution for the uncertain optimization problem is obtained. A case study indicates that the constructed model and the proposed solution are useful in searching for an optimal production plan for polymorphic uncertain generalized production planning problems.
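
    The deterministic equivalent in the paper is built from fuzzy interval programming and a modified possibility degree; the sketch below shows only the simplest worst-case (robust) deterministic counterpart of interval-coefficient constraints, as an illustration of how interval data can be reduced to an ordinary LP. All coefficients and bounds are invented.

```python
import numpy as np
from scipy.optimize import linprog

# maximize profit 5*x1 + 4*x2 subject to interval resource consumption:
# consumption per unit lies in [a_lo, a_hi]; availability lies in [b_lo, b_hi].
a_lo = np.array([[1.8, 2.7], [0.9, 1.8]])
a_hi = np.array([[2.2, 3.3], [1.1, 2.2]])
b_lo = np.array([95.0, 58.0])
b_hi = np.array([105.0, 62.0])

# worst-case (robust) deterministic equivalent for x >= 0:
# use the largest consumption coefficients and the smallest availabilities.
res = linprog(c=[-5.0, -4.0], A_ub=a_hi, b_ub=b_lo,
              bounds=[(0, None)] * 2, method="highs")
print("robust plan:", res.x, "profit:", -res.fun)
```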

  15. Multi-Stage Transportation Problem With Capacity Limit

    Directory of Open Access Journals (Sweden)

    I. Brezina

    2010-06-01

Full Text Available The classical transportation problem can be applied in a more general way in practice. Related problems such as the multi-commodity transportation problem, transportation problems with different kinds of vehicles, multi-stage transportation problems, and the transportation problem with capacity limit extend the classical transportation problem by considering additional special conditions. For solving such problems, many optimization techniques (dynamic programming, linear programming, special algorithms for the transportation problem, etc.) and heuristic approaches (e.g. evolutionary techniques) have been developed. This article considers the multi-stage transportation problem with capacity limit, which reflects limits on the transported materials (commodity quantity). Discussed issues are: the theoretical base and problem formulation, as well as a newly proposed algorithm for that problem.
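
    As a small illustration of the kind of model the article builds on, the sketch below solves a single-stage transportation problem with a per-route capacity limit as a linear program; supplies, demands, costs and the capacity value are invented, and the multi-stage case of the article would chain several such stages.

```python
import numpy as np
from scipy.optimize import linprog

supply = [30, 40]             # two sources
demand = [20, 25, 25]         # three destinations
cost = np.array([[4, 6, 9],
                 [5, 3, 7]], dtype=float)
cap = 18                      # capacity limit on every route (x_ij <= cap)

m, n = cost.shape
c = cost.ravel()              # variables x_ij flattened row by row

# supply rows: sum_j x_ij <= supply_i
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0
# demand columns: sum_i x_ij = demand_j
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, cap)] * (m * n), method="highs")
print("minimum cost:", res.fun)
print("shipments:\n", res.x.reshape(m, n))
```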

  16. On some special cases of the restricted assignment problem

    NARCIS (Netherlands)

    Wang, C. (Chao); R.A. Sitters (René)

    2016-01-01

We consider some special cases of the restricted assignment problem. In this scheduling problem on parallel machines, any job j can only be assigned to one of the machines in its given subset Mj of machines. We give an LP-formulation for the problem with two job sizes and show that it

  17. Reactive decontamination formulation

    Science.gov (United States)

    Giletto, Anthony [College Station, TX; White, William [College Station, TX; Cisar, Alan J [Cypress, TX; Hitchens, G Duncan [Bryan, TX; Fyffe, James [Bryan, TX

    2003-05-27

The present invention provides a universal decontamination formulation and method for detoxifying chemical warfare agents (CWA's) and biological warfare agents (BWA's) without producing any toxic by-products, as well as decontaminating surfaces that have come into contact with these agents. The formulation includes a sorbent material or gel, a peroxide source, a peroxide activator, and a compound containing a mixture of KHSO5, KHSO4 and K2SO4. The formulation is self-decontaminating and, once dried, can easily be wiped from the surface being decontaminated. A method for decontaminating a surface exposed to chemical or biological agents is also disclosed.

  18. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    Science.gov (United States)

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
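
    A sketch of the coarse-to-fine idea described above, with plain multiplicative-update nonnegative matrix factorization standing in for the constrained curve-resolution solver of the paper: the data are factorized at a channel-coarsened resolution first, and the coarse factors are reused as initial values at full resolution so that only a few fine-level iterations are needed. The synthetic data sizes and iteration counts are arbitrary.

```python
import numpy as np

def nmf(X, k, iters=200, W0=None, H0=None, eps=1e-9):
    """Plain multiplicative-update NMF: X ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], k)) if W0 is None else W0.copy()
    H = rng.random((k, X.shape[1])) if H0 is None else H0.copy()
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# synthetic "spectra": 300 time points x 4000 channels, 3 pure components
rng = np.random.default_rng(3)
C_true = np.abs(rng.random((300, 3)))
S_true = np.abs(rng.random((3, 4000)))
X = C_true @ S_true

# coarse level: average every 8 channels, factorize, then refine at full resolution
X_coarse = X.reshape(300, -1, 8).mean(axis=2)
W_c, H_c = nmf(X_coarse, k=3, iters=300)
H_init = np.repeat(H_c, 8, axis=1)                 # upsample coarse spectra
W, H = nmf(X, k=3, iters=50, W0=W_c, H0=H_init)    # few fine-level iterations
print("relative fit error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```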

  19. Bioelectromagnetic forward problem: isolated source approach revis(it)ed.

    Science.gov (United States)

    Stenroos, M; Sarvas, J

    2012-06-07

    Electro- and magnetoencephalography (EEG and MEG) are non-invasive modalities for studying the electrical activity of the brain by measuring voltages on the scalp and magnetic fields outside the head. In the forward problem of EEG and MEG, the relationship between the neural sources and resulting signals is characterized using electromagnetic field theory. This forward problem is commonly solved with the boundary-element method (BEM). The EEG forward problem is numerically challenging due to the low relative conductivity of the skull. In this work, we revise the isolated source approach (ISA) that enables the accurate, computationally efficient BEM solution of this problem. The ISA is formulated for generic basis and weight functions that enable the use of Galerkin weighting. The implementation of the ISA-formulated linear Galerkin BEM (LGISA) is first verified in spherical geometry. Then, the LGISA is compared with conventional Galerkin and symmetric BEM approaches in a realistic 3-shell EEG/MEG model. The results show that the LGISA is a state-of-the-art method for EEG/MEG forward modeling: the ISA formulation increases the accuracy and decreases the computational load. Contrary to some earlier studies, the results show that the ISA increases the accuracy also in the computation of magnetic fields.

  20. Digital approach to high-resolution pulse processing for semiconductor detectors

    International Nuclear Information System (INIS)

    Georgiev, A.; Buchner, A.; Gast, W.; Lieder, R.M.

    1992-01-01

A new design philosophy for processing signals produced by high resolution, large volume semiconductor detectors is described. These detectors, to be used in the next generation of spectrometer arrays for nuclear research (i.e. EUROBALL, etc.), present a set of problems like resolution degradation due to charge trapping and ballistic deficit effects, low resolution at a high count rate, poor long term stability, etc. To solve these problems, a new design approach has been developed, including reconstruction of the event charge, providing a pure triangular residual function, and suppressing low frequency noise. 5 refs., 4 figs
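
    A sketch of the 'reconstruct the event charge, then shape to a triangular residual' idea mentioned above, not the published design: the exponential preamplifier decay is deconvolved into an impulse whose height is the charge, and two cascaded moving sums shape that impulse into a triangle whose peak, divided by the window length, estimates the amplitude. The decay constant, window length and noise level are invented.

```python
import numpy as np

def triangular_shaper(x, tau, L):
    """Triangular shaping of an exponentially decaying detector pulse.

    Step 1 deconvolves the exponential tail (decay constant `tau` in samples),
    turning the pulse into an impulse whose height is the event charge;
    steps 2-3 are two cascaded moving sums, shaping that impulse into a
    triangle whose peak equals charge * L.
    """
    d = np.exp(-1.0 / tau)
    y = x - d * np.concatenate(([0.0], x[:-1]))   # 1: exponential deconvolution
    box = np.ones(L)
    y = np.convolve(y, box)                        # 2: first moving sum
    y = np.convolve(y, box)                        # 3: second moving sum -> triangle
    return y

# synthetic preamplifier pulse: amplitude 3.0, decay constant 80 samples
n = np.arange(1000)
tau, amp, n0, L = 80.0, 3.0, 200, 64
pulse = np.where(n >= n0, amp * np.exp(-(n - n0) / tau), 0.0)
pulse += 0.01 * np.random.default_rng(4).standard_normal(n.size)

shaped = triangular_shaper(pulse, tau, L)
print("estimated amplitude:", shaped.max() / L)    # ~3.0
```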

  1. Digital approach to high-resolution pulse processing for semiconductor detectors

    Energy Technology Data Exchange (ETDEWEB)

    Georgiev, A [Sofia Univ. (Bulgaria); Buchner, A [Forschungszentrum Rossendorf (Germany); Gast, W; Lieder, R M [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Kernphysik; Stein, J [Target System Electronic GmbH, Solingen, (Germany)

    1992-08-01

A new design philosophy for processing signals produced by high resolution, large volume semiconductor detectors is described. These detectors, to be used in the next generation of spectrometer arrays for nuclear research (i.e. EUROBALL, etc.), present a set of problems like resolution degradation due to charge trapping and ballistic deficit effects, low resolution at a high count rate, poor long term stability, etc. To solve these problems, a new design approach has been developed, including reconstruction of the event charge, providing a pure triangular residual function, and suppressing low frequency noise. 5 refs., 4 figs.

  2. Approximation of Bayesian Inverse Problems for PDEs

    OpenAIRE

    Cotter, S. L.; Dashti, M.; Stuart, A. M.

    2010-01-01

Inverse problems are often ill posed, with solutions that depend sensitively on data. In any numerical approach to the solution of such problems, regularization of some form is needed to counteract the resulting instability. This paper is based on an approach to regularization, employing a Bayesian formulation of the problem, which leads to a notion of well posedness for inverse problems at the level of probability measures. The stability which results from this well posedness may be used as t...

  3. Isogeometric shell formulation based on a classical shell model

    KAUST Repository

    Niemi, Antti

    2012-09-04

This paper constitutes the first steps in our work concerning isogeometric shell analysis. An isogeometric shell model of the Reissner-Mindlin type is introduced and a study of its accuracy in the classical pinched cylinder benchmark problem is presented. In contrast to earlier works [1,2,3,4], the formulation is based on a shell model where the displacement, strain and stress fields are defined in terms of a curvilinear coordinate system arising from the NURBS description of the shell middle surface. The isogeometric shell formulation is implemented using the PetIGA and igakit software packages developed by the authors. The igakit package is a Python package used to generate NURBS representations of geometries that can be utilised by the PetIGA finite element framework. The latter utilises data structures and routines of the portable, extensible toolkit for scientific computation (PETSc) [5,6]. The current shell implementation is valid for static, linear problems only, but the software package is well suited for future extensions to the geometrically and materially nonlinear regime as well as to dynamic problems. The accuracy of the approach is assessed in the pinched cylinder benchmark problem, and comparisons are presented against the h-version of the finite element method with bilinear elements. Quadratic, cubic and quartic NURBS discretizations are compared against the isoparametric bilinear discretization introduced in [7]. The results show that the quadratic and cubic NURBS approximations exhibit notably slower convergence under uniform mesh refinement as the thickness decreases, but the quartic approximation converges relatively quickly within the standard variational framework. The authors' future work is concerned with building an isogeometric finite element method for modelling the nonlinear structural response of thin-walled shells undergoing large rigid-body motions. The aim is to use the model in an aeroelastic framework for the simulation of flapping wings.

  4. On Resolution Complexity of Matching Principles

    DEFF Research Database (Denmark)

    Dantchev, Stefan S.

proof system. The results in the thesis fall in this category. We study the Resolution complexity of some Matching Principles. The three major contributions of the thesis are as follows. Firstly, we develop a general technique of proving resolution lower bounds for the perfect matching principles based... Chessboard as well as for Tseitin tautologies based on the rectangular grid graph. We reduce these problems to Tiling games, a concept introduced by us, which may be of interest on its own. Secondly, we find the exact Tree-Resolution complexity of the Weak Pigeon-Hole Principle. It is the most studied...

  5. Mathematical models and heuristic solutions for container positioning problems in port terminals

    DEFF Research Database (Denmark)

    Kallehauge, Louise Sibbesen

    2008-01-01

by constructing mathematical programming formulations of the problem and developing an efficient heuristic algorithm for its solution. The thesis consists of an introduction, two main chapters concerning new mathematical formulations and a new heuristic for the CPP, technical issues, computational results... concerning the subject is reviewed. The research presented in this thesis is divided into two main parts: construction and investigation of new mathematical programming formulations of the CPP, and development and implementation of a new event-based heuristic for the problem. The first part presents three... The second part presents an efficient solution algorithm for the CPP. Based on a number of new concepts, an event-based construction heuristic is developed and its ability to solve real-life problem instances is established. The backbone of the algorithm is a list of events, corresponding to a sequence of operations...

  6. Finite-element formulations for the thermal stress analysis of two- and three-dimensional thin reactor structures

    International Nuclear Information System (INIS)

    Kulak, R.F.; Kennedy, J.M.; Belytschko, T.B.; Schoeberle, D.F.

    1977-01-01

This paper describes finite-element formulations for the thermal stress analysis of LMFBR structures. The first formulation is applicable to large displacement-rotation problems in which the strains are small. For this formulation, a general temperature-dependent constitutive relationship is derived from a Gibbs potential function and a temperature-dependent yield surface. The temperature dependency of the yield surface is based upon a temperature-dependent, material-hardening model. The model uses a temperature-equivalent stress-plastic strain diagram which is generated from isothermal uniaxial stress-strain data. A second formulation is presented for problems characterized by both large displacement-rotations and large strains. Here a set of large strain hypoelastic-plastic relationships are developed to linearly relate the rate of stress to the rate of deformation. The temperature field is described through time-dependent values at mesh node points; the temperature fields in each element are then obtained by interpolation formulas. Hence, problems with both spatially and temporally dependent temperature fields can easily be treated. The above developments were incorporated into two ANL-developed finite-element computer codes: the implicit version of STRAW and the 3D Implicit Structural Analysis Code. STRAW is a two-dimensional code with a plane stress/plane strain beam element. The 3D Implicit code has a triangular flat plate element which is capable of sustaining both membrane and bending loads. To ensure numerical stability, both codes are based on an iterative-incremental solution procedure with equilibrium checks based on an error in energy

  7. Scalable Algorithms for Large High-Resolution Terrain Data

    DEFF Research Database (Denmark)

    Mølhave, Thomas; Agarwal, Pankaj K.; Arge, Lars Allan

    2010-01-01

    In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, so...

  8. Mixed-hybrid finite element method for the transport equation and diffusion approximation of transport problems

    International Nuclear Information System (INIS)

    Cartier, J.

    2006-04-01

This thesis focuses on the mathematical analysis, numerical resolution and modelling of the transport equations. First of all, we deal with the numerical approximation of the solution of the transport equations by using a mixed-hybrid scheme. We derive and study a mixed formulation of the transport equation, then we analyse the related variational problem and present the discretization and the main properties of the scheme. We pay particular attention to the behavior of the scheme and show its efficiency in the diffusion limit (when the mean free path is small in comparison with the characteristic length of the physical domain). We present academic benchmarks in order to compare our scheme with other methods in many physical configurations and validate our method on analytical test cases. Unstructured and very distorted meshes are used to validate our scheme. The second part of this thesis deals with two transport problems. The first one is devoted to the study of diffusion due to boundary conditions in a transport problem between two plane plates. The second one consists in modelling and simulating the radiative transfer phenomenon in the industrial context of inertial confinement fusion. (author)

  9. Alternative dispute resolution mechanisms, plea bargain and ...

    African Journals Online (AJOL)

    Conflicts, disputes, disagreements, problems and issues are inevitable in human affairs. Most of these disputes and problems in some circumstances give rise to offences for which a criminal prosecution becomes necessary. One can say that Alternative Dispute Resolution (ADR) is used all round the world to resolve ...

  10. Wave packet formulation of the boomerang model for resonant electron--molecule scattering

    International Nuclear Information System (INIS)

    McCurdy, C.W.; Turner, J.L.

    1983-01-01

    A time-dependent formulation of the boomerang model for resonant electron--molecule scattering is presented in terms of a wave packet propagating on the complex potential surface of the metastable anion. The results of calculations using efficient semiclassical techniques for propagating the wave packet are found to be in excellent agreement with full quantum-mechanical calculations of vibrational excitation cross sections in e - --N 2 scattering. The application of the wave packet formulation as a computational and conceptual approach to the problem of resonant collisions with polyatomic molecules is discussed in the light of recent wave packet calculations on polyatomic photodissociation and Raman spectra

  11. Configuration space methods in the three-nucleon problem

    International Nuclear Information System (INIS)

    Friar, J.L.

    1985-01-01

The assumptions underlying the formulation and solution of the Schroedinger equation for three nucleons in configuration space are reviewed. Those qualitative aspects of the two-nucleon problem which play an important role in the trinucleon are discussed. The geometrical aspects of the problem are developed, and the importance of the angular momentum barrier is demonstrated. The Faddeev-Noyes formulation of the Schroedinger equation is motivated, and the boundary conditions for various three-body problems are reviewed. The method of splines is shown to provide a particularly useful numerical modelling technique for solving the Faddeev-Noyes equation. The properties of explicit trinucleon solutions for various two-body force models are discussed, and the evidence for three-body forces is reviewed. The status of calculations of trinucleon observables is discussed, and conclusions are presented. 40 refs., 14 figs.

  12. Dynamic Flow Management Problems in Air Transportation

    Science.gov (United States)

    Patterson, Sarah Stock

    1997-01-01

In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large scale linear programming problems. Thus, the computation times are reasonably small for large scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer

  13. A stabilized finite element formulation for the solution of the Navier-Stokes equations in axisymmetric geometry

    International Nuclear Information System (INIS)

    Souza, Altivo Monteiro de

    2008-12-01

World energy consumption has been increasing strongly in recent years. Nuclear energy has been regarded as a suitable option to supply this growing energy demand on an industrial scale. In view of the need to improve the understanding and analysis capacity of nuclear power plants, modern simulation techniques for flow and heat transfer problems are gaining greater importance. A large number of problems found in nuclear reactor engineering can be dealt with assuming axial symmetry. Thus, in this work a stabilized finite element formulation for the solution of the Navier-Stokes and energy equations for axisymmetric problems has been developed and tested. The formulation has been implemented in the NS_SOLVER_MPI_2D_A program developed at the Parallel Computation Laboratory of the Instituto de Engenharia Nuclear (LCP/IEN) and is now available for either safety analysis or design of nuclear systems. (author)

  14. Identification of the Diffusion Parameter in Nonlocal Steady Diffusion Problems

    Energy Technology Data Exchange (ETDEWEB)

    D’Elia, M., E-mail: mdelia@fsu.edu, E-mail: mdelia@sandia.gov [Sandia National Laboratories (United States); Gunzburger, M. [Florida State University (United States)

    2016-04-15

    The problem of identifying the diffusion parameter appearing in a nonlocal steady diffusion equation is considered. The identification problem is formulated as an optimal control problem having a matching functional as the objective of the control and the parameter function as the control variable. The analysis makes use of a nonlocal vector calculus that allows one to define a variational formulation of the nonlocal problem. In a manner analogous to the local partial differential equations counterpart, we demonstrate, for certain kernel functions, the existence of at least one optimal solution in the space of admissible parameters. We introduce a Galerkin finite element discretization of the optimal control problem and derive a priori error estimates for the approximate state and control variables. Using one-dimensional numerical experiments, we illustrate the theoretical results and show that by using nonlocal models it is possible to estimate non-smooth and discontinuous diffusion parameters.
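
    As a purely local 1D analogue of the identification problem described above (the paper treats the nonlocal operator and proves error estimates for a Galerkin discretization), the sketch below recovers a constant diffusion coefficient by minimizing a matching functional between the computed state and synthetic observations. The grid size, source term and true coefficient are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N - 2)                     # source on interior nodes

def solve_state(kappa):
    """Finite-difference solution of -kappa u'' = f on (0,1), u(0)=u(1)=0."""
    main = 2.0 * kappa / h**2 * np.ones(N - 2)
    off = -kappa / h**2 * np.ones(N - 3)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.zeros(N)
    u[1:-1] = np.linalg.solve(A, f)
    return u

kappa_true = 0.7
u_obs = solve_state(kappa_true)        # synthetic observations

# identification: minimize the matching functional J(kappa) = ||u(kappa) - u_obs||^2
J = lambda kappa: np.sum((solve_state(kappa) - u_obs) ** 2)
result = minimize_scalar(J, bounds=(0.1, 5.0), method="bounded")
print("identified kappa:", result.x)   # ~0.7
```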

  15. Applying Column Generation to the Discrete Fleet Planning Problem

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    The paper discusses an Integer Linear Programming (ILP) formulation that describes the problem of planning the use of domestic distributed generators, under individual as well as fleet constraints. The planning problem comprises the assignment of time intervals during which the local generator must

  16. Extensive preclinical investigation of polymersomal formulation of doxorubicin versus Doxil-mimic formulation.

    Science.gov (United States)

    Alibolandi, Mona; Abnous, Khalil; Mohammadi, Marzieh; Hadizadeh, Farzin; Sadeghi, Fatemeh; Taghavi, Sahar; Jaafari, Mahmoud Reza; Ramezani, Mohammad

    2017-10-28

Due to the severe cardiotoxicity of doxorubicin, its usage is limited. This shortcoming can be overcome by modifying the pharmacokinetics of the drug via preparation of various nanoplatforms. Doxil, a well-known FDA-approved nanoplatform of doxorubicin as an antineoplastic agent, is frequently used in clinics in order to reduce the cardiotoxicity of doxorubicin. Since Doxil shows some shortcomings in the clinic, including hand and foot syndrome and a very slow release pattern, there is a demand for the development and preparation of new doxorubicin nanoformulations with fewer side effects. The new formulation of doxorubicin, synthesized previously by our group, was extensively examined in the current study. This new formulation is doxorubicin encapsulated in PEG-PLGA polymersomes (PolyDOX). The main aim of the study was to compare the distribution and treatment efficacy of the new doxorubicin polymersomal formulation (PolyDOX) with a regular liposomal formulation (Doxil-mimic) in a murine colon adenocarcinoma model. Additionally, the pathological and hematological changes, pharmacodynamics, biodistribution, tolerated dose and survival rate in vivo were evaluated and compared. A murine colon cancer model was induced by subcutaneous inoculation of BALB/c mice with C26 cells. Afterwards, either Doxil-mimic or PolyDOX was administered intravenously. The results obtained from the biodistribution study showed a remarkable difference in the distribution of the drugs in murine organs. In this regard, Doxil-mimic exhibited prolonged (48 h) presence within liver tissues, while PolyDOX preferentially accumulated in the tumor and its presence in the liver 48 h post-treatment was significantly lower than that of Doxil-mimic. The results demonstrated a comparable final length of life for mice receiving either Doxil-mimic or PolyDOX formulations, whereas the tolerated dose of mice receiving Doxil-mimic was remarkably higher than for those receiving PolyDOX. Therapeutic efficacy of the formulations in terms of tumor growth rate

  17. Menu-Driven Solver Of Linear-Programming Problems

    Science.gov (United States)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    The program assists inexperienced users in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. It solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. The packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
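
    As an illustration of the class of problems such a solver handles, the sketch below poses a small linear program with SciPy's linprog. The data are invented, and the code is independent of the APL2-based ALPS implementation described above.

      # Illustrative only: a small LP of the kind an LP analysis program solves.
      from scipy.optimize import linprog

      # maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
      # linprog minimizes, so the objective is negated.
      c = [-3.0, -2.0]
      A_ub = [[1.0, 1.0],
              [1.0, 3.0]]
      b_ub = [4.0, 6.0]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0, None), (0, None)], method="highs")
      print("optimal x, y:", res.x)          # expected: x = 4, y = 0
      print("optimal objective:", -res.fun)  # expected: 12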

  18. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    Energy Technology Data Exchange (ETDEWEB)

    Aguilo Valentin, Miguel Alejandro [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  19. Technical Note: Adjoint formulation of the TOMCAT atmospheric transport scheme in the Eulerian backtracking framework (RETRO-TOM)

    Science.gov (United States)

    Haines, P. E.; Esler, J. G.; Carver, G. D.

    2014-06-01

    A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model, RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments) to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time symmetric, suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.

  20. High-Level Waste Glass Formulation Model Sensitivity Study 2009 Glass Formulation Model Versus 1996 Glass Formulation Model

    International Nuclear Information System (INIS)

    Belsher, J.D.; Meinert, F.L.

    2009-01-01

    This document presents the differences between two HLW glass formulation models (GFMs): the 1996 GFM and the 2009 GFM. A glass formulation model is a collection of glass property correlations and associated limits, as well as model validity and solubility constraints; it uses the pretreated HLW feed composition to predict the amount and composition of glass-forming additives necessary to produce acceptable HLW glass. The 2009 GFM presented in this report was constructed as a nonlinear optimization calculation based on updated glass property data and solubility limits described in PNNL-18501 (2009). Key mission drivers such as the total mass of HLW glass and waste oxide loading are compared between the two glass formulation models. In addition, a sensitivity study was performed within the 2009 GFM to determine the effect of relaxing various constraints on the predicted mass of the HLW glass.

  1. Siegert pseudostate formulation of scattering theory: Nonzero angular momenta in the one-channel case

    International Nuclear Information System (INIS)

    Batishchev, Pavel A.; Tolstikhin, Oleg I.

    2007-01-01

    The Siegert pseudostate (SPS) formulation of scattering theory, originally developed by Tolstikhin, Ostrovsky, and Nakamura [Phys. Rev. A, 58, 2077 (1998)] for s-wave scattering in a spherically symmetric finite-range potential, is generalized to nonzero angular momenta. The orthogonality and completeness properties of SPSs are established and SPS expansions for the outgoing-wave Green's function, physical states, and scattering matrix are obtained. The present formulation completes the theory of SPSs in the one-channel case, making its application to three-dimensional problems possible. The results are illustrated by calculations for several model potentials

  2. Branch-and-cut algorithms for the split delivery vehicle routing problem

    NARCIS (Netherlands)

    Archetti, Claudia; Bianchessi, Nicola; Speranza, M. Grazia

    2014-01-01

    In this paper we present two exact branch-and-cut algorithms for the Split Delivery Vehicle Routing Problem (SDVRP) based on two relaxed formulations that provide lower bounds to the optimum. Procedures to obtain feasible solutions to the SDVRP from a feasible solution to the relaxed formulations

  3. A Dynamic Programming Algorithm for the k-Haplotyping Problem

    Institute of Scientific and Technical Information of China (English)

    Zhen-ping Li; Ling-yun Wu; Yu-ying Zhao; Xiang-sun Zhang

    2006-01-01

    The Minimum Fragments Removal (MFR) problem is one of the haplotyping problems: given a set of fragments, remove the minimum number of fragments so that the resulting fragments can be partitioned into k classes of non-conflicting subsets. In this paper, we formulate the k-MFR problem as an integer linear programming problem, and develop a dynamic programming approach to solve the k-MFR problem for both the gapless and gap cases.

  4. The three-body problem

    CERN Document Server

    Marchal, Christian

    1990-01-01

    Recent research on the theory of perturbations, the analytical approach and the quantitative analysis of the three-body problem have reached a high degree of perfection. The use of electronics has aided developments in quantitative analysis and has helped to disclose the extreme complexity of the set of solutions. This accelerated progress has given new orientation and impetus to the qualitative analysis that is so complementary to the quantitative analysis. The book begins with the various formulations of the three-body problem, the main classical results and the important questions and conjectures.

  5. Parent-Adolescent Conflict as Sequences of Reciprocal Negative Emotion: Links with Conflict Resolution and Adolescents' Behavior Problems.

    Science.gov (United States)

    Moed, Anat; Gershoff, Elizabeth T; Eisenberg, Nancy; Hofer, Claire; Losoya, Sandra; Spinrad, Tracy L; Liew, Jeffrey

    2015-08-01

    Although conflict is a normative part of parent-adolescent relationships, conflicts that are long or highly negative are likely to be detrimental to these relationships and to youths' development. In the present article, sequential analyses of data from 138 parent-adolescent dyads (adolescents' mean age was 13.44, SD = 1.16; 52 % girls, 79 % non-Hispanic White) were used to define conflicts as reciprocal exchanges of negative emotion observed while parents and adolescents were discussing "hot," conflictual issues. Dynamic components of these exchanges, including who started the conflicts, who ended them, and how long they lasted, were identified. Mediation analyses revealed that a high proportion of conflicts ended by adolescents was associated with longer conflicts, which in turn predicted perceptions of the "hot" issue as unresolved and adolescent behavior problems. The findings illustrate advantages of using sequential analysis to identify patterns of interactions and, with some certainty, obtain an estimate of the contingent relationship between a pattern of behavior and child and parental outcomes. These interaction patterns are discussed in terms of the roles that parents and children play when in conflict with each other, and the processes through which these roles affect conflict resolution and adolescents' behavior problems.

  6. Generalized formulation of free energy and application to photosynthesis

    Science.gov (United States)

    Zhang, Hwe Ik; Choi, M. Y.

    2018-03-01

    The origin of free energy on the earth is solar radiation. However, the amount of free energy it contains has seldom been investigated, because the free energy concept was believed to be inappropriate for a system of photons. Instead, the origin of free energy has been sought in the process of photosynthesis, imposing a limit of conversion given by the Carnot efficiency. Here we present a general formulation, capable of not only assessing accurately the available amount of free energy in the photon gas but also explaining the primary photosynthetic process more succinctly. In this formulation, the problem of "photosynthetic conversion of the internal energy of photons into the free energy of chlorophyll" is replaced by simple "free energy transduction" between the photons and chlorophyll. An analytic expression for the photosynthetic efficiency is derived and shown to deviate from the Carnot efficiency. Some predictions verifiable possibly by observation are also suggested.

  7. THE DUBINS TRAVELING SALESMAN PROBLEM WITH CONSTRAINED COLLECTING MANEUVERS

    Directory of Open Access Journals (Sweden)

    Petr Váňa

    2016-11-01

    In this paper, we introduce a variant of the Dubins traveling salesman problem (DTSP) called the Dubins traveling salesman problem with constrained collecting maneuvers (DTSP-CM). In contrast to the ordinary formulation of the DTSP, in the proposed DTSP-CM the vehicle is requested to visit each target by a specified collecting maneuver to accomplish the mission. The proposed problem formulation is motivated by scenarios with unmanned aerial vehicles where particular maneuvers are necessary for accomplishing the mission, such as object dropping or data collection with a sensor sensitive to changes in vehicle heading. We consider existing methods for the DTSP and propose modifications of these methods to address a variant of the introduced DTSP-CM in which the collecting maneuvers are constrained to straight line segments.

  8. Linear finite element method for one-dimensional diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Michele A.; Dominguez, Dany S.; Iglesias, Susana M., E-mail: micheleabrandao@gmail.com, E-mail: dany@labbi.uesc.br, E-mail: smiglesias@uesc.br [Universidade Estadual de Santa Cruz (LCC/DCET/UESC), Ilheus, BA (Brazil). Departamento de Ciencias Exatas e Tecnologicas. Laboratorio de Computacao Cientifica

    2011-07-01

    We describe in this paper the fundamentals of the Linear Finite Element Method (LFEM) applied to one-speed diffusion problems in slab geometry. We present the mathematical formulation to solve eigenvalue and fixed source problems. First, we discretize the calculation domain using a finite set of elements. At this point, we obtain the spatial balance equations for the zero order and first order spatial moments inside each element. Then, we introduce linear auxiliary equations to approximate the neutron flux and current inside each element and construct a numerical scheme to obtain the solution. We present numerical results for typical fixed source model problems to illustrate the method's accuracy for coarse-mesh calculations in homogeneous and heterogeneous domains. Also, we compare the accuracy and computational performance of the LFEM formulation with the conventional Finite Difference Method (FDM). (author)

  9. A Generalized Cauchy Distribution Framework for Problems Requiring Robust Behavior

    Directory of Open Access Journals (Sweden)

    Carrillo, Rafael E.

    2010-01-01

    Statistical modeling is at the heart of many engineering problems. The importance of statistical modeling emanates not only from the desire to accurately characterize stochastic events, but also from the fact that distributions are the central models utilized to derive sample processing theories and methods. The generalized Cauchy distribution (GCD) family has a closed-form pdf expression across the whole family as well as algebraic tails, which makes it suitable for modeling many real-life impulsive processes. This paper develops a GCD theory-based approach that allows challenging problems to be formulated in a robust fashion. Notably, the proposed framework subsumes generalized Gaussian distribution (GGD) family-based developments, thereby guaranteeing performance improvements over traditional GCD-based problem formulation techniques. This robust framework can be adapted to a variety of applications in signal processing. As examples, we formulate four practical applications under this framework: (1) filtering for power line communications, (2) estimation in sensor networks with noisy channels, (3) reconstruction methods for compressed sensing, and (4) fuzzy clustering.

  10. An IP Framework for the Crew Pairing Problem Using Subsequence Generation

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel; Lusby, Richard Martin; Ryan, David

    In this paper we consider an important problem for the airline industry. The widely studied crew pairing problem is typically formulated as a set partitioning problem and solved using the branch-and-price methodology. Here we develop a new integer programming framework, based on the concept of subsequence generation, for solving the set partitioning formulation. In subsequence generation one restricts the number of permitted subsequent flights that a crew member can turn to after completing any particular flight. By restricting the number of subsequences, the number of pairings in the problem decreases. The aim is then to dynamically add attractive subsequences to the problem, thereby increasing the number of possible pairings and improving the solution quality. Encouraging results are obtained on 19 real-life instances supplied by Air New Zealand and show that the described methodology...

  11. Lipid Based Formulations of Biopharmaceutics Classification System (BCS) Class II Drugs: Strategy, Formulations, Methods and Saturation

    Directory of Open Access Journals (Sweden)

    Šoltýsová I.

    2016-12-01

    Active ingredients in pharmaceuticals differ in their physico-chemical properties, and their bioavailability therefore varies. The most frequently used and most convenient route of administration of medicines is oral; however, many drugs are poorly soluble in water and are therefore not sufficiently effective or suitable for such administration. For this reason a system of lipid based formulations (LBF) was developed. A series of formulations was prepared and tested in water and biorelevant media. On the basis of selection criteria, formulations were selected with the best emulsification potential, good dispersion in the environment and physical stability. Samples of structurally different drugs included in Class II of the Biopharmaceutics Classification System (BCS) were obtained, namely Griseofulvin, Glibenclamide, Carbamazepine, Haloperidol, Itraconazol, Triclosan, Praziquantel and Rifaximin, for testing of maximal saturation in formulations prepared from commercially available excipients. Methods were developed for preparation of the formulations, observation and description of emulsification, determination of the maximum solubility of the drug samples in the respective formulation and subsequent analysis. Saturation of the formulations with the drugs showed that formulations of 80 % XA and 20 % Xh, and of 35 % XF and 65 % Xh, were best able to dissolve the drugs, which supports the hypothesis that it is desirable to identify a limited series of formulations which could be generally applied for this purpose.

  12. An improved acoustic Fourier boundary element method formulation using fast Fourier transform integration

    NARCIS (Netherlands)

    Kuijpers, A.H.W.M.; Verbeek, G.; Verheij, J.W.

    1997-01-01

    Effective use of the Fourier series boundary element method (FBEM) for everyday applications is hindered by the significant numerical problems that have to be overcome for its implementation. In the FBEM formulation for acoustics, some integrals over the angle of revolution arise, which need to be

  13. KINOFORM LENSES - TOWARD NANOMETER RESOLUTION.

    Energy Technology Data Exchange (ETDEWEB)

    STEIN, A.; EVANS-LUTTERODT, K.; TAYLOR, A.

    2004-10-23

    While hard x-rays have wavelengths in the nanometer and sub-nanometer range, the ability to focus them is limited by the quality of sources and optics, and not by the wavelength. A few options, including reflective (mirrors), diffractive (zone plates) and refractive (CRLs) optics, are available, each with their own limitations. Here we present our work with kinoform lenses, which are refractive lenses with all material causing redundant 2π phase shifts removed in order to reduce the absorption problems inherently limiting the resolution of refractive lenses. By stacking kinoform lenses together, the effective numerical aperture, and thus the focusing resolution, can be increased. The present status of kinoform lens fabrication and testing at Brookhaven is presented as well as future plans toward achieving nanometer resolution.

  14. Singular perturbation techniques in the gravitational self-force problem

    International Nuclear Information System (INIS)

    Pound, Adam

    2010-01-01

    Much of the progress in the gravitational self-force problem has involved the use of singular perturbation techniques. Yet the formalism underlying these techniques is not widely known. I remedy this situation by explicating the foundations and geometrical structure of singular perturbation theory in general relativity. Within that context, I sketch precise formulations of the methods used in the self-force problem: dual expansions (including matched asymptotic expansions), for which I identify precise matching conditions, one of which is a weak condition arising only when multiple coordinate systems are used; multiscale expansions, for which I provide a covariant formulation; and a self-consistent expansion with a fixed worldline, for which I provide a precise statement of the exact problem and its approximation. I then present a detailed analysis of matched asymptotic expansions as they have been utilized in calculating the self-force. Typically, the method has relied on a weak matching condition, which I show cannot determine a unique equation of motion. I formulate a refined condition that is sufficient to determine such an equation. However, I conclude that the method yields significantly weaker results than do alternative methods.

  15. Formulation and numerical analysis of nonisothermal multiphase flow in porous media

    International Nuclear Information System (INIS)

    Martinez, M.J.

    1995-06-01

    A mathematical formulation is presented for describing the transport of air, water and energy through porous media. The development follows a continuum mechanics approach. The theory assumes the existence of various average macroscopic variables which describe the state of the system. Balance equations for mass and energy are formulated in terms of these macroscopic variables. The system is supplemented with constitutive equations relating fluxes to the state variables, and with transport property specifications. Specification of various mixing rules and thermodynamic relations completes the system of equations. A numerical simulation scheme, employing the method of lines, is described for one-dimensional flow. The numerical method is demonstrated on sample problems involving nonisothermal flow of air and water. The implementation is verified by comparison with existing numerical solutions

  16. Formulation of similarity porous media systems

    International Nuclear Information System (INIS)

    Anderson, R.M.; Ford, W.T.; Ruttan, A.; Strauss, M.J.

    1982-01-01

    The mathematical formulation of the Porous Media System (PMS) describing two-phase, immiscible, compressible fluid flow in linear, homogeneous porous media is reviewed and expanded. It is shown that families of common vertex, coaxial parabolas and families of parallel lines are the only families of curves on which solutions of the PMS may be constant. A coordinate transformation is used to change the partial differential equations of the PMS to a system of ordinary differential equations, referred to as a similarity Porous Media System (SPMS), in which the independent variable denotes movement from curve to curve in a selected family of curves. Properties of solutions of the first boundary value problem are developed for the SPMS

  17. The Regularized Fast Hartley Transform: Optimal Formulation of Real-Data Fast Fourier Transform for Silicon-Based Implementation in Resource-Constrained Environments

    CERN Document Server

    Jones, Keith

    2010-01-01

    The Regularized Fast Hartley Transform provides the reader with the tools necessary both to understand the proposed new formulation and to implement simple design variations that offer clear implementational advantages, both practical and theoretical, over more conventional complex-data solutions to the problem. The highly parallel formulation described is shown to lead to scalable and device-independent solutions to the latency-constrained version of the problem which are able to optimize the use of the available silicon resources, and thus to maximize the achievable computational density.

  18. 3D Finite Volume Modeling of ENDE Using Electromagnetic T-Formulation

    Directory of Open Access Journals (Sweden)

    Yue Li

    2012-01-01

    An improved method which can analyze the eddy current density in conductor materials using the finite volume method is proposed on the basis of Maxwell's equations and the T-formulation. The algorithm is applied to solve 3D electromagnetic nondestructive evaluation (ENDE) benchmark problems. The computing code is applied to study an Inconel 600 workpiece with holes or cracks. The impedance change due to the presence of the crack is evaluated and compared with the experimental data of benchmark problems No. 1 and No. 2. The results show a good agreement between calculated and measured data.

  19. High resolution time integration for Sn radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2008-01-01

    First order, second order and high resolution time discretization schemes are implemented and studied for the Sn equations. The high resolution method achieves a rate of convergence better than first order, but also suppresses the artificial oscillations introduced by second order schemes in hyperbolic differential equations. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second order and high resolution methods converged to the same solution as the first order method, with better convergence rates. High resolution is more accurate than first order and matches or exceeds the second order method. (authors)
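
    The sketch below illustrates the general idea behind such high resolution schemes on a model problem: a minmod-limited update for linear advection that is second-order accurate in smooth regions but falls back toward first-order upwinding near steep gradients, suppressing oscillations. It is a generic illustration under those assumptions, not the Sn transport discretization of the cited work.

      import numpy as np

      def minmod(a, b):
          # minmod limiter: zero where slopes disagree in sign, else the smaller magnitude
          return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

      def high_resolution_step(u, a, dx, dt):
          """One limited (MUSCL-type) update for u_t + a*u_x = 0, a > 0, periodic grid.

          Second order where the solution is smooth, limited toward first-order
          upwind near steep gradients to avoid spurious oscillations.
          """
          nu = a * dt / dx                                   # CFL number, 0 < nu <= 1
          slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
          u_face = u + 0.5 * (1.0 - nu) * slope              # limited upwind face value
          flux = a * u_face
          return u - dt / dx * (flux - np.roll(flux, 1))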

  20. Simulation of hydrogen mitigation in catalytic recombiner. Part-II: Formulation of a CFD model

    International Nuclear Information System (INIS)

    Prabhudharwadkar, Deoras M.; Iyer, Kannan N.

    2011-01-01

    Research highlights: → Hydrogen transport in containment with recombiners is a multi-scale problem. → A novel methodology is worked out to lump the recombiner characteristics. → Results obtained using the commercial code FLUENT are cast in the form of correlations. → Hence, coarse grids can obtain an accurate distribution of H2 in the containment. → Satisfactory working of the methodology is clearly demonstrated. - Abstract: This paper aims at formulating a model compatible with a CFD code to simulate hydrogen distribution and mitigation using a passive catalytic recombiner in nuclear power plant containments. The catalytic recombiner is much smaller in size than the containment compartments. Fully resolving the recombination processes during containment simulations requires the geometric details of the recombiner to be modelled and a very fine mesh inside the recombiner channels. This component, when integrated with containment mixing calculations, would result in a large number of mesh elements, which may require large computational times to solve the problem. This paper describes a method to resolve this simulation difficulty. In this exercise, the catalytic recombiner alone was first modelled in detail using the best suited option to describe the reaction rate. A detailed parametric study was conducted, from which correlations for the heat of reaction (hence the rate of reaction) and the heat transfer coefficient were obtained. These correlations were then used to model the recombiner channels as single computational cells providing the necessary volumetric sources/sinks to the energy and species transport equations. This avoids full resolution of these channels, thereby allowing a larger mesh size in the recombiners. The above-mentioned method was successfully validated using both steady state and transient test problems, and the results indicate very satisfactory modelling of the component.

  1. METHOD FOR SOLVING FUZZY ASSIGNMENT PROBLEM USING MAGNITUDE RANKING TECHNIQUE

    OpenAIRE

    D. Selvi; R. Queen Mary; G. Velammal

    2017-01-01

    Assignment problems have various applications in the real world because of their wide applicability in industry, commerce, management science, etc. Classical assignment problems cannot always be used successfully for real-life problems; hence the use of fuzzy assignment problems is more appropriate. In this paper, the fuzzy assignment problem is converted to a crisp assignment problem using the Magnitude Ranking technique, and the Hungarian method is applied to find an optimal solution. The N...
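
    A minimal sketch of the two-step procedure described above: fuzzy costs are first reduced to crisp values by a ranking function, and the resulting crisp problem is solved by a Hungarian-type algorithm via SciPy. The triangular defuzzification shown here is only illustrative and may differ from the paper's Magnitude Ranking technique; all cost data are invented.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Fuzzy costs as triangular numbers (low, mode, high); values are made up.
      fuzzy_cost = [
          [(2, 3, 4), (5, 7, 9), (1, 2, 3)],
          [(4, 5, 6), (2, 3, 5), (6, 8, 9)],
          [(3, 4, 5), (1, 2, 4), (2, 3, 4)],
      ]

      def rank(tri):
          # Illustrative ranking (defuzzification) of a triangular fuzzy number;
          # the paper's Magnitude Ranking technique may weight the points differently.
          a, b, c = tri
          return (a + 2 * b + c) / 4.0

      crisp = np.array([[rank(t) for t in row] for row in fuzzy_cost])
      rows, cols = linear_sum_assignment(crisp)   # Hungarian-style optimal assignment
      print(list(zip(rows, cols)), "total cost:", crisp[rows, cols].sum())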

  2. The Goodness of Covariance Selection Problem from AUC Bounds

    OpenAIRE

    Khajavi, Navid Tafaghodi; Kuh, Anthony

    2016-01-01

    We conduct a study of graphical models and discuss the quality of model selection approximation by formulating the problem as a detection problem and examining the area under the curve (AUC). We are specifically looking at the model selection problem for jointly Gaussian random vectors. For Gaussian random vectors, this problem simplifies to the covariance selection problem which is widely discussed in literature by Dempster [1]. In this paper, we give the definition for the correlation appro...

  3. Discrete Control Processes, Dynamic Games and Multicriterion Control Problems

    Directory of Open Access Journals (Sweden)

    Dumitru Lozovanu

    2002-07-01

    Discrete control processes with state evaluation in time of a dynamical system are considered. A general model of control problems with an integral-time cost criterion along a trajectory is studied, and a general scheme for solving such classes of problems is proposed. In addition, game-theoretical and multicriterion models for control problems are formulated and studied.

  4. Well-posedness of inverse problems for systems with time dependent parameters

    DEFF Research Database (Denmark)

    Banks, H. T.; Pedersen, Michael

    2009-01-01

    on the data of the problem. We also consider well-posedness as well as finite element type approximations in associated inverse problems. The problem above is a weak formulation that includes models in abstract differential operator form that include plate, beam and shell equations with several important...

  5. Radiotherapy problem under fuzzy theoretic approach

    International Nuclear Information System (INIS)

    Ammar, E.E.; Hussein, M.L.

    2003-01-01

    A fuzzy set theoretic approach is used for the radiotherapy problem. The problem involves two goals: the first is to maximize the fraction of surviving normal cells and the second is to minimize the fraction of surviving tumor cells. The theory of fuzzy sets has been employed to formulate and solve the problem. A linguistic variable approach is used for treating the first goal. The solutions obtained by the modified approach are always efficient and represent the best compromise. A sensitivity analysis of the solutions to the differential weights is given.

  6. Counting, scoring and classifying hunger to allocate resources targeted to solve the problem

    OpenAIRE

    Afonso Gallegos, Ana; Trueba Jainaga, Jose Ignacio; Tarancon Juanas, Monica

    2011-01-01

    A proper allocation of resources targeted to solve hunger is essential to optimize the efficacy of actions and maximize results. This requires an adequate measurement and formulation of the problem as, paraphrasing Einstein, the formulation of a problem is essential to reach a solution. Different measurement methods have been designed to count, score, classify and compare hunger at local level and to allow comparisons between different places. However, the alternative methods produce sig...

  7. Super resolution reconstruction of infrared images based on classified dictionary learning

    Science.gov (United States)

    Liu, Fei; Han, Pingli; Wang, Yi; Li, Xuan; Bai, Lu; Shao, Xiaopeng

    2018-05-01

    Infrared images always suffer from low-resolution problems resulting from limitations of imaging devices. An economical approach to combat this problem involves reconstructing high-resolution images by reasonable methods without updating devices. Inspired by compressed sensing theory, this study presents and demonstrates a Classified Dictionary Learning method to reconstruct high-resolution infrared images. It classifies features of the samples into several reasonable clusters and trains a dictionary pair for each cluster. The optimal pair of dictionaries is chosen for each image reconstruction; therefore, more satisfactory results are achieved without an increase in computational complexity and time cost. Experiments and results demonstrate that this is a viable method for infrared image reconstruction, since it improves image resolution and recovers detailed information of targets.

  8. Granulated decontamination formulations

    Science.gov (United States)

    Tucker, Mark D.

    2007-10-02

    A decontamination formulation and method of making that neutralizes the adverse health effects of both chemical and biological compounds, especially chemical warfare (CW) and biological warfare (BW) agents, and toxic industrial chemicals. The formulation provides solubilizing compounds that serve to effectively render the chemical and biological compounds, particularly CW and BW compounds, susceptible to attack, and at least one reactive compound that serves to attack (and detoxify or kill) the compound. The formulation includes at least one solubilizing agent, a reactive compound, a sorbent additive, and water. A highly adsorbent sorbent additive (e.g., amorphous silica, sorbitol, mannitol, etc.) is used to "dry out" one or more liquid ingredients into a dry, free-flowing powder that has an extended shelf life, and is more convenient to handle and mix in the field.

  9. A Quasi-Feed-In-Tariff policy formulation in micro-grids: A bi-level multi-period approach

    International Nuclear Information System (INIS)

    Taha, Ahmad F.; Hachem, Nadim A.; Panchal, Jitesh H.

    2014-01-01

    A Quasi-Feed-In-Tariff (QFIT) policy formulation is presented for micro-grids that integrates renewable energy generation considering Policy Makers' and Generation Companies' (GENCOs) objectives assuming a bi-level multi-period formulation that integrates physical characteristics of the power-grid. The upper-level problem corresponds to the PM, whereas the lower-level decisions are made by GENCOs. We consider that some GENCOs are green energy producers, while others are black energy producers. Policy makers incentivize green energy producers to generate energy through the payment of optimal time-varying subsidy price. The policy maker's main objective is to maximize an overall social welfare that includes factors such as demand surplus, energy cost, renewable energy subsidy price, and environmental standards. The lower-level problem corresponding to the GENCOs is based on maximizing the players' profits. The proposed QFIT policy differs from the FIT policy in the sense that the subsidy price-based contracts offered to green energy producers dynamically change over time, depending on the physical properties of the grid, demand, and energy price fluctuations. The integrated problem solves for time-varying subsidy price and equilibrium energy quantities that optimize the system welfare under different grid and system conditions. - Highlights: • We present a bi-level optimization problem formulation for Quasi-Feed-In-Tariff (QFIT) policy. • QFIT dictates that subsidy prices dynamically vary over time depending on conditions. • Power grid's physical characteristics affect optimal subsidy prices and energy generation. • To maximize welfare, policy makers ought to increase subsidy prices during the peak-load

  10. A Bi-Modal Routing Problem with Cyclical and One-Way Trips: Formulation and Heuristic Solution

    Directory of Open Access Journals (Sweden)

    Grinde Roger B.

    2017-12-01

    A bi-modal routing problem is solved using a heuristic approach. Motivated by a recreational hiking application, the problem is similar to routing problems in business with two transport modes. The problem decomposes into a set covering problem (SCP) and an asymmetric traveling salesperson problem (ATSP), corresponding to a hiking time objective and a driving distance objective. The solution algorithm considers hiking time first, but finds all alternate optimal solutions, as inputs to the driving distance problem. Results show the trade-offs between the two objectives.

  11. Neonates need tailored drug formulations.

    Science.gov (United States)

    Allegaert, Karel

    2013-02-08

    Drugs are very strong tools used to improve outcome in neonates. Despite this fact and in contrast to tailored perfusion equipment, incubators or ventilators for neonates, we still commonly use drug formulations initially developed for adults. We would like to make the point that drug formulations given to neonates need to be tailored for this age group. Besides the obvious need to search for active compounds that take the pathophysiology of the newborn into account, this includes the dosage and formulation. The dosage or concentration should facilitate the administration of low amounts and be flexible since clearance is lower in neonates with additional extensive between-individual variability. Formulations need to be tailored for dosage variability in the low ranges and also to the clinical characteristics of neonates. A specific focus of interest during neonatal drug development therefore is a need to quantify and limit excipient exposure based on the available knowledge of their safety or toxicity. Until such tailored vials and formulations become available, compounding practices for drug formulations in neonates should be evaluated to guarantee the correct dosing, product stability and safety.

  12. The Guderley problem revisited

    International Nuclear Information System (INIS)

    Ramsey, Scott D.; Kamm, James R.; Bolstad, John H.

    2009-01-01

    The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.

  13. Topical formulations with superoxide dismutase: influence of formulation composition on physical stability and enzymatic activity.

    Science.gov (United States)

    Di Mambro, Valéria M; Borin, Maria F; Fonseca, Maria J V

    2003-04-24

    Three different topical formulations were supplemented with superoxide dismutase (SOD) and evaluated concerning physical and chemical stabilities in order to determine the most stable formulation that would maintain SOD activity. Physical stability was evaluated by storing the formulation at room temperature, and at 37 and 45 degrees C for 28 days. Samples were collected at 7-day intervals for assessment of rheological behavior. Chemical stability was evaluated by the measurement of enzymatic activity in formulations stored at room temperature and at 45 degrees C for 75 days. The formulations showed a pseudoplastic behavior, with a flow index of less than 1. There was no significant difference in the initial values of flow index, hysteresis loop or minimum apparent viscosity. The simple emulsion and the one stabilized with hydroxyethylcellulose showed decreased viscosity by the 21st day and with higher temperature, but no significant changes concerning the presence of SOD. Although there were no significant changes concerning storage time or temperature, the formulation stabilized with hydroxyethylcellulose showed a marked loss of SOD activity. The addition of SOD to the formulations studied did not affect their physical stability. Simple emulsions or emulsions stabilized with carboxypolymethylene seem to be better bases for enzyme addition than emulsion stabilized with hydroxyethylcellulose.

  14. An Integrated Approach to the Ground Crew Rostering Problem with Work Patterns

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Hansen, Anders Dohn; Range, Troels Martin

    This paper addresses the Ground Crew Rostering Problem with Work Patterns, an important manpower planning problem arising in the ground operations of airline companies. We present a cutting stock based integer programming formulation of the problem and describe a powerful decomposition approach...

  15. Initial value formulation of dynamical Chern-Simons gravity

    Science.gov (United States)

    Delsate, Térence; Hilditch, David; Witek, Helvi

    2015-01-01

    We derive an initial value formulation for dynamical Chern-Simons gravity, a modification of general relativity involving parity-violating higher derivative terms. We investigate the structure of the resulting system of partial differential equations thinking about linearization around arbitrary backgrounds. This type of consideration is necessary if we are to establish well-posedness of the Cauchy problem. Treating the field equations as an effective field theory we find that weak necessary conditions for hyperbolicity are satisfied. For the full field equations we find that there are states from which subsequent evolution is not determined. Generically the evolution system closes, but is not hyperbolic in any sense that requires a first order pseudodifferential reduction. In a cursory mode analysis we find that the equations of motion contain terms that may cause ill-posedness of the initial value problem.

  16. Integrating a logarithmic-strain based hyper-elastic formulation into a three-field mixed finite element formulation to deal with incompressibility in finite-strain elasto-plasticity

    International Nuclear Information System (INIS)

    Dina Al Akhrass; Bruchon, Julien; Drapier, Sylvain; Fayolle, Sebastien

    2014-01-01

    This paper deals with the treatment of incompressibility in solid mechanics in finite-strain elasto-plasticity. A finite-strain model proposed by Miehe, Apel and Lambrecht, which is based on a logarithmic strain measure and its work-conjugate stress tensor, is chosen. Its main interest is that it allows for the adoption of standard constitutive models established in a small-strain framework. This model is extended to take into account the plastic incompressibility constraint intrinsically. For that purpose, an extension of this model to a three-field mixed finite element formulation is proposed, involving displacements, a strain variable and pressure as nodal variables, with respect to standard finite elements. Numerical examples of finite-strain problems are presented to assess the performance of the formulation. To conclude, an industrial case for which the classical under-integrated elements fail is considered. (authors)

  17. THE CORPORATE CONSTITUTIONALISM APPROACH IN FORMULATION OF CSR

    Directory of Open Access Journals (Sweden)

    Victor Imanuel Nalle

    2015-04-01

    The 21st century is the era of the development of corporate social responsibility (CSR). It is encouraged by the development of the company as a business and societal entity that balances public and private interests. If there is a balance of public and private interests in the company, the application of CSR should be able to accommodate the public interest. But many companies in Indonesia do not involve the community in the formulation of the CSR implementation model. As a result, the implementation of CSR is often not well targeted. In that context, the theory of corporate constitutionalism becomes a relevant theory to answer these problems. The theory of corporate constitutionalism puts deliberation forward as one of the principles for achieving the legitimacy of decision-making in the corporation. Through a deliberative process of formulating the CSR model with the community, not just the interests of shareholders but also the interests of stakeholders can be accommodated. Thus, CSR can actually be instrumental in addressing global and local challenges.

  18. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experiments.

  19. Baseline LAW Glass Formulation Testing

    International Nuclear Information System (INIS)

    Kruger, Albert A.; Mooers, Cavin; Bazemore, Gina; Pegg, Ian L.; Hight, Kenneth; Lai, Shan Tao; Buechele, Andrew; Rielley, Elizabeth; Gan, Hao; Muller, Isabelle S.; Cecil, Richard

    2013-01-01

    The major objective of the baseline glass formulation work was to develop and select glass formulations that are compliant with contractual and processing requirements for each of the LAW waste streams. Other objectives of the work included preparation and characterization of glasses with respect to the properties of interest, optimization of sulfate loading in the glasses, evaluation of ability to achieve waste loading limits, testing to demonstrate compatibility of glass melts with melter materials of construction, development of glass formulations to support ILAW qualification activities, and identification of glass formulation issues with respect to contract specifications and processing requirements

  20. Effective properties of linear viscoelastic heterogeneous media: Internal variables formulation and extension to ageing behaviours

    International Nuclear Information System (INIS)

    Ricaud, J.M.; Masson, R.; Masson, R.

    2009-01-01

    The Laplace-Carson transform classically used for homogenization of linear viscoelastic heterogeneous media yields integral formulations of effective behaviours. These are far less convenient than internal variables formulations with respect to computational aspects as well as to theoretical extensions to closely related problems such as ageing viscoelasticity. Noticing that the collocation method is usually adopted to invert the Laplace-Carson transforms, we first remark that this approximation is equivalent to an internal variables formulation which is exact in some specific situations. This result is illustrated for a two-phase composite with phases obeying a compressible Maxwellian behaviour. Next, an incremental formulation allows us to extend at each time step the previous general framework to ageing viscoelasticity. Finally, with the help of a creep test of a porous viscoelastic matrix reinforced with elastic inclusions, it is shown that the method yields accurate predictions (compared to reference results provided by periodic cell finite element computations). (authors)

  1. Decision-Tree Formulation With Order-1 Lateral Execution

    Science.gov (United States)

    James, Mark

    2007-01-01

    A compact symbolic formulation enables mapping of an arbitrarily complex decision tree of a certain type into a highly computationally efficient multidimensional software object. The type of decision trees to which this formulation applies is that known in the art as the Boolean class of balanced decision trees. Parallel lateral slices of an object created by means of this formulation can be executed in constant time, considerably less time than would otherwise be required. Decision trees of various forms are incorporated into almost all large software systems. A decision tree is a way of hierarchically solving a problem, proceeding through a set of true/false responses to a conclusion. By definition, a decision tree has a tree-like structure, wherein each internal node denotes a test on an attribute, each branch from an internal node represents an outcome of a test, and leaf nodes represent classes or class distributions that, in turn, represent possible conclusions. The drawback of decision trees is that executing them can be computationally expensive (and, hence, time-consuming) because each non-leaf node must be examined to determine whether to progress deeper into the tree structure or to examine an alternative. The present formulation was conceived as an efficient means of representing a decision tree and executing it in as little time as possible. The formulation involves the use of a set of symbolic algorithms to transform a decision tree into a multi-dimensional object, the rank of which equals the number of lateral non-leaf nodes. The tree can then be executed in constant time by means of an order-one table lookup. The sequence of operations performed by the algorithms is summarized as follows: 1. Determination of whether the tree under consideration can be encoded by means of this formulation. 2. Extraction of decision variables. 3. Symbolic optimization of the decision tree to minimize its form. 4. Expansion and transformation of all nested conjunctive
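
    A minimal sketch of the underlying idea, assuming a small set of Boolean decision variables: the tree is evaluated once for every combination of inputs, the outcomes are stored in a flat table, and any later evaluation reduces to packing the inputs into an index and performing a single order-one lookup. The variable names and the tree itself are hypothetical and do not reproduce the symbolic algorithms of the formulation described above.

      from itertools import product

      # Hypothetical Boolean decision variables and a balanced decision tree
      # written as a plain function; names are illustrative only.
      VARS = ("engine_hot", "pressure_low", "backup_ok")

      def decide(engine_hot, pressure_low, backup_ok):
          if engine_hot:
              return "shutdown" if pressure_low else "throttle_back"
          return "nominal" if (backup_ok or not pressure_low) else "switch_backup"

      # Precompute one table entry per combination of the n Boolean inputs (2**n entries).
      TABLE = [decide(*bits) for bits in product((False, True), repeat=len(VARS))]

      def decide_fast(**kw):
          # O(1) evaluation: pack the inputs into an index and look it up.
          idx = 0
          for name in VARS:
              idx = (idx << 1) | int(kw[name])
          return TABLE[idx]

      assert decide_fast(engine_hot=True, pressure_low=False, backup_ok=True) == "throttle_back"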

  2. U(1) problem

    Energy Technology Data Exchange (ETDEWEB)

    McDougall, N.A. (Oxford Univ. (UK). Dept. of Theoretical Physics)

    1984-08-23

    The resolution of the U(1) problem requires the quark condensates to have a specific θ dependence. We show that the required θ dependence arises naturally upon application of the index theorem during the calculation of the dynamically generated quark mass.

  3. A minimalist functional group (MFG) approach for surrogate fuel formulation

    KAUST Repository

    Abdul Jameel, Abdul Gani

    2018-03-20

    Surrogate fuel formulation has drawn significant interest due to its relevance towards understanding combustion properties of complex fuel mixtures. In this work, we present a novel approach for surrogate fuel formulation by matching target fuel functional groups, while minimizing the number of surrogate species. Five key functional groups (paraffinic CH3, paraffinic CH2, paraffinic CH, naphthenic CH–CH2 and aromatic C–CH), in addition to structural information provided by the Branching Index (BI), were chosen as matching targets. Surrogates were developed for six FACE (Fuels for Advanced Combustion Engines) gasoline target fuels, namely FACE A, C, F, G, I and J. The five functional groups present in the fuels were qualitatively and quantitatively identified using high resolution ¹H Nuclear Magnetic Resonance (NMR) spectroscopy. A further constraint was imposed in limiting the number of surrogate components to a maximum of two. This simplifies the process of surrogate formulation, facilitates surrogate testing, and significantly reduces the size and time involved in developing chemical kinetic models by reducing the number of thermochemical and kinetic parameters requiring estimation. Fewer species also reduce the computational expense involved in simulating combustion in practical devices. The proposed surrogate formulation methodology is denoted as the Minimalist Functional Group (MFG) approach. The MFG surrogates were experimentally tested against their target fuels using Ignition Delay Times (IDT) measured in an Ignition Quality Tester (IQT), as specified by the standard ASTM D6890 methodology, and in a Rapid Compression Machine (RCM). Threshold Sooting Index (TSI) and Smoke Point (SP) measurements were also performed to determine the sooting propensities of the surrogates and target fuels. The results showed that MFG surrogates were able to reproduce the aforementioned combustion properties of the target FACE gasolines across a wide range of conditions

  4. A minimalist functional group (MFG) approach for surrogate fuel formulation

    KAUST Repository

    Abdul Jameel, Abdul Gani; Naser, Nimal; Issayev, Gani; Touitou, Jamal; Ghosh, Manik Kumer; Emwas, Abdul-Hamid M.; Farooq, Aamir; Dooley, Stephen; Sarathy, Mani

    2018-01-01

    Surrogate fuel formulation has drawn significant interest due to its relevance towards understanding combustion properties of complex fuel mixtures. In this work, we present a novel approach for surrogate fuel formulation by matching target fuel functional groups, while minimizing the number of surrogate species. Five key functional groups (paraffinic CH3, paraffinic CH2, paraffinic CH, naphthenic CH–CH2 and aromatic C–CH), in addition to structural information provided by the Branching Index (BI), were chosen as matching targets. Surrogates were developed for six FACE (Fuels for Advanced Combustion Engines) gasoline target fuels, namely FACE A, C, F, G, I and J. The five functional groups present in the fuels were qualitatively and quantitatively identified using high resolution ¹H Nuclear Magnetic Resonance (NMR) spectroscopy. A further constraint was imposed in limiting the number of surrogate components to a maximum of two. This simplifies the process of surrogate formulation, facilitates surrogate testing, and significantly reduces the size and time involved in developing chemical kinetic models by reducing the number of thermochemical and kinetic parameters requiring estimation. Fewer species also reduce the computational expense involved in simulating combustion in practical devices. The proposed surrogate formulation methodology is denoted as the Minimalist Functional Group (MFG) approach. The MFG surrogates were experimentally tested against their target fuels using Ignition Delay Times (IDT) measured in an Ignition Quality Tester (IQT), as specified by the standard ASTM D6890 methodology, and in a Rapid Compression Machine (RCM). Threshold Sooting Index (TSI) and Smoke Point (SP) measurements were also performed to determine the sooting propensities of the surrogates and target fuels. The results showed that MFG surrogates were able to reproduce the aforementioned combustion properties of the target FACE gasolines across a wide range of conditions
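
    The matching step can be pictured as a small constrained fitting problem: choose the blend fractions of the candidate surrogate components so that their combined functional-group distribution reproduces the NMR-derived target values. The sketch below uses non-negative least squares with invented numbers; it is only an illustration of the idea, not the optimization actually used in the MFG work.

      import numpy as np
      from scipy.optimize import nnls

      # Columns: two candidate surrogate components; rows: functional-group fractions
      # (paraffinic CH3, CH2, CH, naphthenic CH-CH2, aromatic C-CH). Values are invented.
      A = np.array([
          [0.35, 0.10],
          [0.45, 0.20],
          [0.05, 0.05],
          [0.05, 0.30],
          [0.10, 0.35],
      ])
      target = np.array([0.28, 0.38, 0.05, 0.12, 0.17])   # hypothetical NMR-derived targets

      # Append a weighted row enforcing that the blend fractions sum to ~1 (soft
      # constraint), then solve a non-negative least-squares problem for the blend.
      w = 10.0
      A_aug = np.vstack([A, w * np.ones((1, 2))])
      b_aug = np.concatenate([target, [w]])
      x, residual = nnls(A_aug, b_aug)
      print("blend fractions:", x / x.sum(), "residual:", residual)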

  5. Development of polyisocyanurate pour foam formulation for space shuttle external tank thermal protection system

    Science.gov (United States)

    Harvey, James A.; Butler, John M.; Chartoff, Richard P.

    1988-01-01

    Four commercially available polyisocyanurate polyurethane spray-foam insulation formulations are used to coat the external tank of the space shuttle. There are several problems associated with these formulations. For example, some do not perform well as pourable closeout/repair systems. Some do not perform well at cryogenic temperatures (poor adhesion to aluminum at liquid nitrogen temperatures). Their thermal stability at elevated temperatures is not adequate. A major defect in all the systems is the lack of detailed chemical information. The formulations are simply supplied to NASA and Martin Marietta, the primary contractor, as components: Part A (isocyanate) and Part B (polyol(s) and additives). Because of the lack of detailed chemical information and performance behavior data for the current systems, NASA sought the development of a non-proprietary, room-temperature-curable foam insulation. Requirements for the developed system were that it should exhibit equal or better thermal stability at both elevated and cryogenic temperatures, with better adhesion to aluminum, as compared to the current system. Several formulations were developed that met these requirements, i.e., thermal stability, good pourability, and good bonding to aluminum.

  6. High-resolution regional climate model evaluation using variable-resolution CESM over California

    Science.gov (United States)

    Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.

    2015-12-01

    Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate, allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations focus on relatively high resolutions for climate assessment, namely 28 km and 14 km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model is used for simulations at 27 km and 9 km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the WRF model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine

  7. A note on the depth function of combinatorial optimization problems

    NARCIS (Netherlands)

    Woeginger, G.J.

    2001-01-01

    In a recent paper [Discrete Appl. Math. 43 (1993) 115–129], Kern formulates two conjectures on the relationship between the computational complexity of computing the depth function of a discrete optimization problem and the computational complexity of solving this optimization problem to optimality.

  8. Vantage point - A 'wicked' problem.

    Science.gov (United States)

    Naish, Jane

    2015-10-01

    SENIOR NURSES everywhere are facing a 'wicked' problem, wicked in the sense that it seems to defy resolution. The problem is this: the national shortage of nurses, particularly of those at band 5, is forcing us to use agency nurses so that we have enough staff to provide patient care safely.

  9. Characterization of industrial wastes as raw materials for Emulsified Modified Bitumen (EMB) formulation

    Science.gov (United States)

    Najib Razali, Mohd; Isa, Syarifah Nur Ezatie Mohd; Salehan, Noor Adilah Md; Musa, Musfafikri; Aziz, Mohd Aizudin Abd; Nour, Abdurahman Hamid; Yunus, Rosli Mohd

    2018-04-01

This study was conducted to characterize industrial wastes for the formulation of emulsified modified bitumen (EMB) in relation to their physical characteristics and elemental composition. The analysis provides information on whether raw materials from industrial wastes can be used for EMB formulation. Bitumen is produced from crude oil extracted from the ground, which makes it a non-renewable product. Extensive environmental problems have arisen in Malaysia because excessive manufacturing activity has led to mismanagement of industrial waste, which motivates the use of industrial waste in EMB formulation. Industrial wastes such as polystyrene, polyethylene and used automotive oil can be used as alternatives in formulating bitumen. A suitable emulsifier is then added to produce the final product, the EMB. The emulsifier yields a charge, depending on its properties, that binds the oily bitumen with water. Physical characterization was performed by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), flash point testing, density testing and moisture content testing. Fourier transform infrared (FTIR) spectroscopy was used to determine the material's molecular composition and structure.

  10. Gradient-type methods in inverse parabolic problems

    International Nuclear Information System (INIS)

    Kabanikhin, Sergey; Penenko, Aleksey

    2008-01-01

This article is devoted to gradient-based methods for inverse parabolic problems. In the first part, we present a priori convergence theorems based on conditional stability estimates for linear inverse problems. These theorems are applied to the backwards parabolic problem and the sideways parabolic problem. The convergence conditions obtained coincide with sourcewise representability in the self-adjoint backwards parabolic case, but they differ in the sideways case. In the second part, a variational approach is formulated for a coefficient identification problem. Using adjoint equations, a formal gradient of an objective functional is constructed. A numerical test illustrates the performance of the conjugate gradient algorithm with the formal gradient.
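    The abstract describes the gradient machinery only at a high level. As a rough, hedged illustration of the kind of gradient-type iteration involved, the following minimal numpy sketch applies conjugate gradients to the normal equations of a generic discretized linear inverse problem (CGLS); the smoothing operator A, the data y, and the early-stopping rule are hypothetical stand-ins, not the paper's parabolic problems.

    ```python
    import numpy as np

    def cgls(A, y, n_iter=30):
        """Conjugate gradient on the normal equations A^T A x = A^T y.

        A minimal stand-in for the gradient-type iterations discussed above;
        early stopping (small n_iter) acts as regularization for ill-posed problems.
        """
        x = np.zeros(A.shape[1])
        r = y - A @ x              # data residual
        s = A.T @ r                # negative gradient of 0.5*||Ax - y||^2
        p = s.copy()
        gamma = s @ s
        for _ in range(n_iter):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            s = A.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x

    # Hypothetical smoothing (ill-posed) forward operator and noisy data
    n = 100
    A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
    x_true = np.zeros(n); x_true[40:60] = 1.0
    y = A @ x_true + 1e-3 * np.random.randn(n)
    x_rec = cgls(A, y, n_iter=30)
    ```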

  11. Finding p-Hub Median Locations: An Empirical Study on Problems and Solution Techniques

    Directory of Open Access Journals (Sweden)

    Xiaoqian Sun

    2017-01-01

Full Text Available Hub location problems have been studied by many researchers for almost 30 years, and, accordingly, various solution methods have been proposed. In this paper, we implement and evaluate several widely used methods for solving five standard hub location problems. To assess the scalability and solution quality of these methods, three well-known datasets are used as case studies: Turkish Postal System, Australia Post, and Civil Aeronautics Board. Classical problems in small networks can be solved efficiently using CPLEX because of their low complexity. Genetic algorithms perform well for solving three types of single allocation problems, since the problem formulations can be neatly encoded with chromosomes of reasonable size. Lagrangian relaxation is the only technique that solves reliable multiple allocation problems in large networks. We believe that our work helps other researchers to get an overview of the best solution techniques for the problems investigated in our study and also stimulates further interest in cross-comparing solution techniques for more expressive problem formulations.

  12. High resolution time integration for SN radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2009-01-01

First-order, second-order, and high resolution time discretization schemes are implemented and studied for the discrete ordinates (S_N) equations. The high resolution method employs a rate of convergence better than first-order, but also suppresses the artificial oscillations introduced by second-order schemes in hyperbolic partial differential equations. The high resolution method achieves these properties by nonlinearly adapting the time stencil to use a first-order method in regions where oscillations could be created. We employ a quasi-linear solution scheme to solve the nonlinear equations that arise from the high resolution method. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high resolution methods converged to the same solution as the first-order method, with better convergence rates. The high resolution method is more accurate than first-order and matches or exceeds the second-order method.

  13. Robust Consumption-Investment Problem on Infinite Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Zawisza, Dariusz, E-mail: dariusz.zawisza@im.uj.edu.pl [Jagiellonian University in Krakow, Institute of Mathematics, Faculty of Mathematics and Computer Science (Poland)

    2015-12-15

In our paper we consider an infinite horizon consumption-investment problem under model misspecification in a general stochastic factor model. We formulate the problem as a stochastic game and characterize the saddle point and the value function of that game using an ODE of semilinear type, for which we provide a proof of an existence and uniqueness theorem for its solution. Such an equation is of interest in its own right, since it generalizes many other equations arising in various infinite horizon optimization problems.

  14. Analysis of magnetic damping problem by the coupled mode superposition method

    International Nuclear Information System (INIS)

    Horie, Tomoyoshi; Niho, Tomoya

    1997-01-01

    In this paper we describe the coupled mode superposition method for the magnetic damping problem, which is produced by the coupled effect between the deformation and the induced eddy current of the structures for future fusion reactors and magnetically levitated vehicles. The formulation of the coupled mode superposition method is based on the matrix equation for the eddy current and the structure using the coupled mode vectors. Symmetric form of the coupled matrix equation is obtained. Coupled problems of a thin plate are solved to verify the formulation and the computer code. These problems are solved efficiently by this method using only a few coupled modes. Consideration of the coupled mode vectors shows that the coupled effects are included completely in each coupled mode. (author)

  15. Risk-averse formulations and methods for a virtual power plant

    KAUST Repository

    Lima, Ricardo M.; Conejo, Antonio J.; Langodan, Sabique; Hoteit, Ibrahim; Knio, Omar M.

    2017-01-01

    In this paper we address the optimal operation of a virtual power plant using stochastic programming. We consider one risk-neutral and two risk-averse formulations that rely on the conditional value at risk. To handle large-scale problems, we implement two decomposition methods with variants using single- and multiple-cuts. We propose the utilization of wind ensembles obtained from the European Centre for Medium Range Weather Forecasts (ECMWF) to quantify the uncertainty of the wind forecast. We present detailed results relative to the computational performance of the risk-averse formulations, the decomposition methods, and risk management and sensitivities analysis as a function of the number of scenarios and risk parameters. The implementation of the two decomposition methods relies on the parallel solution of subproblems, which turns out to be paramount for computational efficiency. The results show that one of the two decomposition methods is the most efficient.

  16. Risk-averse formulations and methods for a virtual power plant

    KAUST Repository

    Lima, Ricardo M.

    2017-12-15

    In this paper we address the optimal operation of a virtual power plant using stochastic programming. We consider one risk-neutral and two risk-averse formulations that rely on the conditional value at risk. To handle large-scale problems, we implement two decomposition methods with variants using single- and multiple-cuts. We propose the utilization of wind ensembles obtained from the European Centre for Medium Range Weather Forecasts (ECMWF) to quantify the uncertainty of the wind forecast. We present detailed results relative to the computational performance of the risk-averse formulations, the decomposition methods, and risk management and sensitivities analysis as a function of the number of scenarios and risk parameters. The implementation of the two decomposition methods relies on the parallel solution of subproblems, which turns out to be paramount for computational efficiency. The results show that one of the two decomposition methods is the most efficient.

  17. High resolution drift chambers

    International Nuclear Information System (INIS)

    Va'vra, J.

    1985-07-01

High precision drift chambers capable of achieving less than or equal to 50 μm resolutions are discussed. In particular, we compare so-called cool and hot gases, various charge collection geometries, and several timing techniques, and we also discuss some systematic problems. We also present what we would consider an ''ultimate'' design of the vertex chamber. 50 refs., 36 figs., 6 tabs.

  18. Model reduction method using variable-separation for stochastic saddle point problems

    Science.gov (United States)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve the stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For the SSP problems by regularization or penalty, we propose a more efficient variable-separation (VS) method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into offline phase and online phase. Sparse low rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.

  19. Applicability Problem in Optimum Reinforced Concrete Structures Design

    Directory of Open Access Journals (Sweden)

    Ashara Assedeq

    2016-01-01

Full Text Available Optimum reinforced concrete structure design is a very complex problem, not only because of the exactness of the required calculations but also because of the questionable applicability of existing methods in practice. This paper presents the main theoretical, mathematical, and physical features of the problem formulation, as well as a review and analysis of existing methods and solutions considering their exactness and applicability.

  20. An axisymmetric PFEM formulation for bottle forming simulation

    Science.gov (United States)

    Ryzhakov, Pavel B.

    2017-01-01

    A numerical model for bottle forming simulation is proposed. It is based upon the Particle Finite Element Method (PFEM) and is developed for the simulation of bottles characterized by rotational symmetry. The PFEM strategy is adapted to suit the problem of interest. Axisymmetric version of the formulation is developed and a modified contact algorithm is applied. This results in a method characterized by excellent computational efficiency and volume conservation characteristics. The model is validated. An example modelling the final blow process is solved. Bottle wall thickness is estimated and the mass conservation of the method is analysed.

  1. One-Dimensional Problem of a Conducting Viscous Fluid with One Relaxation Time

    Directory of Open Access Journals (Sweden)

    Angail A. Samaan

    2011-01-01

Full Text Available We introduce a magnetohydrodynamic model of boundary-layer equations for conducting viscous fluids. This model is applied to study the effects of free convection currents with thermal relaxation time on the flow of a viscous conducting fluid. The method of the matrix exponential formulation for these equations is introduced. The resulting formulation, together with the Laplace transform technique, is applied to a variety of problems. The effects of a plane distribution of heat sources on the whole space and on the semi-space are studied. Numerical results are given and illustrated graphically for the problem.

  2. Formulation of Pine Tar Antidandruff Shampoo Assessment and Comparison With Some Commercial Formulations

    Directory of Open Access Journals (Sweden)

    M. Gharavi

    1990-07-01

Full Text Available In this study a pine tar shampoo as a new antidandruff formulation is presented. Assessment of antidandruff preparations has been hampered by the lack of standardized schedules and reliable methods of evaluation. Some antidandruff agents, such as zinc pyrithione, pine tar, selenium sulphide and sulfur, were used in shampoos. Samples were coded as numbers 1 and 2 (formulated by us) and 3 and 4 (formulated commercially). A grading scheme based on a 10-point scale and a corneocyte count were carried out on 50 selected volunteers. The corneocyte count and fungal study proved that pine tar shampoo is effective against Pityrosporum ovale. The Draize test was used to determine the irritancy potential of the samples. Results showed that samples 1 and 2 were relatively innocuous in comparison with the others. Furthermore, a skin sensitization test on rabbits also confirmed the results obtained by the Draize test. Consumer judgments showed that all formulations were acceptable.

  3. A New Conflict Resolution Method for Multiple Mobile Robots in Cluttered Environments With Motion-Liveness.

    Science.gov (United States)

    Shahriari, Mohammadali; Biglarbegian, Mohammad

    2018-01-01

This paper presents a new conflict resolution methodology for multiple mobile robots that ensures their motion-liveness, especially in cluttered and dynamic environments. Our method constructs a mathematical formulation in the form of an optimization problem, minimizing the overall travel times of the robots subject to resolving all the conflicts in their motion. This optimization problem can be solved easily by coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm in very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems that can be solved using parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations. Simulation results showed that our proposed method is capable of resolving the conflicts of 100 robots in less than 1.23 s in a cluttered environment that has 4357 intersections in the paths of the robots. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature, both quantitatively and qualitatively. This comparison shows that, while our approach is mathematically sound, it is more computationally efficient, scales to a very large number of robots, and guarantees live and smooth motion of the robots.

  4. A Branch and Bound Approach for Truss Topology Design Problems with Valid Inequalities

    International Nuclear Information System (INIS)

    Cerveira, Adelaide; Agra, Agostinho; Bastos, Fernando; Varum, Humberto

    2010-01-01

One of the classical problems in the structural optimization field is the Truss Topology Design Problem (TTDP), which deals with the selection of the optimal configuration of structural systems for applications in mechanical, civil, and aerospace engineering, among others. In this paper we consider a TTDP where the goal is to find the stiffest truss, under a given load and with a bound on the total volume. The design variables are the cross-section areas of the truss bars, which must be chosen from a given finite set. This results in a large-scale non-convex problem with discrete variables. This problem can be formulated as a Semidefinite Programming (SDP) problem with binary variables. We propose a branch and bound algorithm to solve this problem. A binary formulation of the problem is considered in order to take advantage of its structure, which admits a Knapsack problem as a subproblem. Thus, to improve the performance of the branch and bound, some valid inequalities for the Knapsack problem are included at each step.

  5. Resolvent approach for two-dimensional scattering problems. Application to the nonstationary Schroedinger problem and the KPI equation

    International Nuclear Information System (INIS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A.K.; Polivanov, M.C.

    1993-01-01

    The resolvent operator of the linear problem is determined as the full Green function continued in the complex domain in two variables. An analog of the known Hilbert identity is derived. The authors demonstrate the role of this identity in the study of two-dimensional scattering. Considering the nonstationary Schroedinger equation as an example, it is shown that all types of solutions of the linear problem, as well as spectral data known in the literature, are given as specific values of this unique function - the resolvent function. A new form of the inverse problem is formulated. 7 refs

  6. Mixed FEM for Second Order Elliptic Problems on Polygonal Meshes with BEM-Based Spaces

    KAUST Repository

    Efendiev, Yalchin

    2014-01-01

    We present a Boundary Element Method (BEM)-based FEM for mixed formulations of second order elliptic problems in two dimensions. The challenge, we would like to address, is a proper construction of H(div)-conforming vector valued trial functions on arbitrary polygonal partitions of the domain. The proposed construction generates trial functions on polygonal elements which inherit some of the properties of the unknown solution. In the numerical realization, the relevant local problems are treated by means of boundary integral formulations. We test the accuracy of the method on two model problems. © 2014 Springer-Verlag.

  7. An automatic formulation of inverse free second moment method for algebraic systems

    International Nuclear Information System (INIS)

    Shakshuki, Elhadi; Ponnambalam, Kumaraswamy

    2002-01-01

In systems with probabilistic uncertainties, an estimation of reliability requires at least the first two moments. In this paper, we focus on the probabilistic analysis of linear systems. The important tasks in this analysis are the formulation and the automation of the moment equations. The main objective of the formulation is to provide at least the means and variances of the output variables with at least second-order accuracy. The objective of the automation is to reduce the storage and computational complexities required for implementing (automating) those formulations. This paper extends recent work on calculating the first two moments of a set of random algebraic linear equations by developing a stamping procedure to facilitate its automation. The new method has the additional advantage of being able to solve problems when the mean matrix of a system is singular. Lastly, a comparison between the new method and another recently developed first-order second-moment method is made with numerical examples, from the points of view of storage, computational complexity, and accuracy.

  8. On the degeneracy of the IMRT optimization problem

    International Nuclear Information System (INIS)

    Alber, M.; Meedt, G.; Nuesslin, F.; Reemtsen, R.

    2002-01-01

    One approach to the computation of photon IMRT treatment plans is the formulation of an optimization problem with an objective function that derives from an objective density. An investigation of the second-order properties of such an objective function in a neighborhood of the minimizer opens an intuitive access to many traits of this approach. A general finding is that only a small subset of the parameter space has nonzero curvature, while the objective function is entirely flat in a neighborhood of the minimizer in most directions. The dimension of the subspace of vanishing curvature serves as a measure for the degeneracy of the solution. This finding is important both for algorithm design and evaluation of the mathematical model of clinical intuition, expressed by the objective function. The structure of the subspace of great curvature is found to be imposed on the problem by conflicts between objectives of target and critical structures. These conflicts and their corresponding modes of resolution form a common trait between all reasonable treatment plans of a given case. The high degree of degeneracy makes the use of a conjugate gradient optimization algorithm particularly favorable since the number of iterations to convergence is equivalent to the number of different eigenvalues of the curvature tensor and is hence independent from the number of optimization parameters. A high level of degeneracy of the fluence profiles implies that it should be possible to stipulate further delivery-related conditions without causing severe deterioration of the dose distribution

  9. Conceptual formulation on four-dimensional inverse planning for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Lee, Louis; Ma Yunzhi; Xing Lei; Ye Yinyu

    2009-01-01

    Four-dimensional computed tomography (4DCT) offers an extra dimension of 'time' on the three-dimensional patient model with which we can incorporate target motion in radiation treatment (RT) planning and delivery in various ways such as in the concept of internal target volume, in gated treatment or in target tracking. However, for all these methodologies, different phases are essentially considered as non-interconnected independent phases for the purpose of optimization, in other words, the 'time' dimension has yet to be incorporated explicitly in the optimization algorithm and fully exploited. In this note, we have formulated a new 4D inverse planning technique that treats all the phases in the 4DCT as one single entity in the optimization. The optimization is formulated as a quadratic problem for disciplined convex programming that enables the problem to be analyzed and solved efficiently. In the proof-of-principle examples illustrated, we show that the temporal information of the spatial relation of the target and organs at risk could be 'exchanged' amongst different phases so that an appropriate weighting of dose deposition could be allocated to each phase, thus enabling a treatment with a tight target margin and a full duty cycle otherwise not achievable by either of the aforementioned methodologies. Yet there are practical issues to be solved in the 4D RT planning and delivery. The 4D concept in the optimization we have formulated here does provide insight on how the 'time' dimension can be exploited in the 4D optimization process. (note)
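    The note formulates 4D planning as a quadratic problem suitable for disciplined convex programming. As a rough, hedged sketch of that idea, the snippet below jointly optimizes one non-negative fluence vector per respiratory phase so that the dose accumulated over all phases matches a prescription; the per-phase dose-influence matrices, prescription, and problem sizes are entirely hypothetical, and cvxpy is used only as a generic disciplined-convex-programming tool, not as the authors' solver.

    ```python
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n_phases, n_beamlets, n_voxels = 4, 20, 30

    # Hypothetical per-phase dose-influence matrices and a uniform prescription
    D = [np.abs(rng.normal(size=(n_voxels, n_beamlets))) for _ in range(n_phases)]
    d_presc = np.ones(n_voxels)

    # One non-negative fluence vector per phase, optimized jointly
    w = [cp.Variable(n_beamlets, nonneg=True) for _ in range(n_phases)]

    # Accumulated dose over all phases should match the prescription;
    # the quadratic objective keeps the problem in disciplined convex form.
    total_dose = sum(D[k] @ w[k] for k in range(n_phases))
    objective = cp.Minimize(cp.sum_squares(total_dose - d_presc))
    prob = cp.Problem(objective)
    prob.solve()
    print("optimal objective:", prob.value)
    ```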

  10. Entropy Stable Summation-by-Parts Formulations for Compressible Computational Fluid Dynamics

    KAUST Repository

    Carpenter, M.H.

    2016-11-09

    A systematic approach based on a diagonal-norm summation-by-parts (SBP) framework is presented for implementing entropy stable (SS) formulations of any order for the compressible Navier–Stokes equations (NSE). These SS formulations discretely conserve mass, momentum, energy and satisfy a mathematical entropy equality for smooth problems. They are also valid for discontinuous flows provided sufficient dissipation is added at shocks and discontinuities to satisfy an entropy inequality. Admissible SBP operators include all centred diagonal-norm finite-difference (FD) operators and Legendre spectral collocation-finite element methods (LSC-FEM). Entropy stable multiblock FD and FEM operators follows immediately via nonlinear coupling operators that ensure conservation, accuracy and preserve the interior entropy estimates. Nonlinearly stable solid wall boundary conditions are also available. Existing SBP operators that lack a stability proof (e.g. weighted essentially nonoscillatory) may be combined with an entropy stable operator using a comparison technique to guarantee nonlinear stability of the pair. All capabilities extend naturally to a curvilinear form of the NSE provided that the coordinate mappings satisfy a geometric conservation law constraint. Examples are presented that demonstrate the robustness of current state-of-the-art entropy stable SBP formulations.
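    As a small, hedged illustration of the diagonal-norm SBP idea underlying these formulations, the following numpy snippet builds the classical second-order SBP first-derivative operator and checks the summation-by-parts property numerically; the high-order operators, entropy variables, and interface coupling terms of the paper are not reproduced here.

    ```python
    import numpy as np

    n, dx = 11, 0.1

    # Diagonal norm (quadrature) matrix H and difference matrix Q for the
    # classical second-order SBP first-derivative operator D = H^{-1} Q.
    h = np.ones(n); h[0] = h[-1] = 0.5
    H = dx * np.diag(h)

    Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5

    D = np.linalg.solve(H, Q)

    # Summation-by-parts property: H D + (H D)^T = B = diag(-1, 0, ..., 0, 1),
    # the discrete analogue of integration by parts.
    B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
    assert np.allclose(H @ D + (H @ D).T, B)

    # D differentiates linear functions exactly, including at the boundaries.
    x = dx * np.arange(n)
    assert np.allclose(D @ x, np.ones(n))
    ```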

  11. State space approach to mixed boundary value problems.

    Science.gov (United States)

    Chen, C. F.; Chen, M. M.

    1973-01-01

    A state-space procedure for the formulation and solution of mixed boundary value problems is established. This procedure is a natural extension of the method used in initial value problems; however, certain special theorems and rules must be developed. The scope of the applications of the approach includes beam, arch, and axisymmetric shell problems in structural analysis, boundary layer problems in fluid mechanics, and eigenvalue problems for deformable bodies. Many classical methods in these fields developed by Holzer, Prohl, Myklestad, Thomson, Love-Meissner, and others can be either simplified or unified under new light shed by the state-variable approach. A beam problem is included as an illustration.

  12. Teaching evidence-based medicine using a problem-oriented approach.

    Science.gov (United States)

    Hosny, Somaya; Ghaly, Mona S

    2014-04-01

Faculty of Medicine, Suez Canal University is adopting an innovative curriculum. Evidence-based medicine (EBM) has been integrated into problem-based learning (PBL) sessions as a responsive, innovative paradigm for the practice and teaching of clinical medicine. The aims were to integrate EBM into the problem-based sessions of sixth-year students and to assess student and tutor satisfaction with this change. EBM training was conducted for sixth-year students (196), including four theoretical and eight practical sessions. Sixteen EBM educational scenarios (problems) were formulated according to the sixth-year curriculum. Each problem was discussed in two sessions through the steps of EBM, namely: formulating PICO questions, searching for and appraising evidence, applying the evidence to the clinical scenario, and analysing the practice. Student and tutor satisfaction was evaluated using a 3-point rating questionnaire. The majority of students and faculty expressed their satisfaction with integrating EBM into PBL and agreed that the problems were more stimulating. However, 33.6% of students indicated that the available time was insufficient for searching the literature. Integrating EBM into PBL sessions tends to be more interesting and stimulating than traditional PBL sessions for final-year students and helps them to practise and implement EBM in a clinical context.

  13. Predictors of dental visits for routine check-ups and for the resolution of problems among preschool children.

    Science.gov (United States)

    Camargo, Maria Beatriz Junqueira; Barros, Aluísio J D; Frazão, Paulo; Matijasevich, Alicia; Santos, Iná S; Peres, Marco Aurélio; Peres, Karen Glazer

    2012-02-01

    To estimate the prevalence of dental visits among preschool children and determine the factors associated with using dental services. A cross-sectional study was conducted with 1,129 five-year-old children from the Pelotas Birth Cohort Study in Pelotas (Southern Brazil) 2004, from September 2009 to January 2010. Use of dental services at least once in the child's life and the reason for the child's first dental visit were recorded. The categories assigned for the first dental visit were: routine check-up, resolution of a problem, or never saw a dentist. The oral examinations and interviews were performed in the children's homes. Socioeconomic aspects and independent variables related to the mother and child were analyzed using multivariable logistic regression. The prevalence of dental visits (both categories combined) was 37.0%. The main predictors for a routine visit were higher economic status, mothers with more schooling, and mothers who had received guidance about prevention. Major predictors for a visit because of a problem were having felt pain in the previous six months, mothers with higher education level, and mothers who had received guidance about prevention. Approximately 45.0% of mothers received information about how to prevent cavities, usually from the dentist. Children of mothers who adhered to health programs were more likely to have had a routine dental visit. The rate of preschool visits to dental services was lower than the rate for medical appointments (childcare). In addition to income and education, maternal behavior plays an important role in routine visits. Pain reported in the last six months and a high number of teeth affected by tooth decay, independent of other factors, were associated with visits for a specific problem. It is important to integrate oral health instruction into maternal and child health programs.

  14. New method for solving multidimensional scattering problem

    International Nuclear Information System (INIS)

    Melezhik, V.S.

    1991-01-01

    A new method is developed for solving the quantum mechanical problem of scattering of a particle with internal structure. The multichannel scattering problem is formulated as a system of nonlinear functional equations for the wave function and reaction matrix. The method is successfully tested for the scattering from a nonspherical potential well and a long-range nonspherical scatterer. The method is also applicable to solving the multidimensional Schroedinger equation with a discrete spectrum. As an example the known problem of a hydrogen atom in a homogeneous magnetic field is analyzed

  15. Detectors for high resolution dynamic pet

    International Nuclear Information System (INIS)

    Derenzo, S.E.; Budinger, T.F.; Huesman, R.H.

    1983-05-01

    This report reviews the motivation for high spatial resolution in dynamic positron emission tomography of the head and the technical problems in realizing this objective. We present recent progress in using small silicon photodiodes to measure the energy deposited by 511 keV photons in small BGO crystals with an energy resolution of 9.4% full-width at half-maximum. In conjunction with a suitable phototube coupled to a group of crystals, the photodiode signal to noise ratio is sufficient for the identification of individual crystals both for conventional and time-of-flight positron tomography

  16. OTTER, Resolution Style Theorem Prover

    International Nuclear Information System (INIS)

    McCune, W.W.

    2001-01-01

1 - Description of program or function: OTTER (Other Techniques for Theorem-proving and Effective Research) is a resolution-style theorem-proving program for first-order logic with equality. OTTER includes the inference rules binary resolution, hyper-resolution, UR-resolution, and binary paramodulation. These inference rules take a small set of clauses and infer a clause. If the inferred clause is new and useful, it is stored and may become available for subsequent inferences. Other capabilities are conversion from first-order formulas to clauses, forward and back subsumption, factoring, weighting, answer literals, term ordering, forward and back demodulation, and evaluable functions and predicates. 2 - Method of solution: For its inference process OTTER uses the given-clause algorithm, which can be viewed as a simple implementation of the set of support strategy. OTTER maintains three lists of clauses: axioms, sos (set of support), and demodulators. OTTER is not automatic. Even after the user has encoded a problem into first-order logic or into clauses, the user must choose inference rules, set options to control the processing of inferred clauses, and decide which input formulae or clauses are to be in the initial set of support and which, if any, equalities are to be demodulators. If OTTER fails to find a proof, the user may try again with different initial conditions. 3 - Restrictions on the complexity of the problem - Maxima of: 5000 characters in an input string, 64 distinct variables in a clause, 51 characters in any symbol. The maxima can be changed by finding the appropriate definition in the header.h file, increasing the limit, and recompiling OTTER. There are a few constraints on the order of commands
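    To make the given-clause loop concrete, here is a toy, propositional-logic sketch of the algorithm (binary resolution only, identity-based subsumption); it is a didactic stand-in written for this record, not OTTER's first-order implementation.

    ```python
    def resolve(c1, c2):
        """All binary resolvents of two propositional clauses (sets of int literals)."""
        out = []
        for lit in c1:
            if -lit in c2:
                out.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
        return out

    def given_clause_loop(axioms, sos):
        """Toy given-clause saturation: returns True if the empty clause is derived."""
        usable = {frozenset(c) for c in axioms}
        sos = [frozenset(c) for c in sos]
        kept = set(usable) | set(sos)
        while sos:
            given = sos.pop(0)               # pick a given clause from the set of support
            usable.add(given)
            for other in list(usable):
                for resolvent in resolve(given, other):
                    if not resolvent:        # empty clause: contradiction found
                        return True
                    if resolvent not in kept:    # crude forward subsumption by identity
                        kept.add(resolvent)
                        sos.append(resolvent)
        return False

    # Refute: p, p -> q (i.e. {-p, q}), and the negated goal -q
    print(given_clause_loop(axioms=[{1}, {-1, 2}], sos=[{-2}]))   # True
    ```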

  17. The Offshore Wind Farm Array Cable Layout Problem

    DEFF Research Database (Denmark)

    Bauer, Joanna; Lysgaard, Jens

    2014-01-01

In an offshore wind farm (OWF), the turbines are connected to a transformer by cable routes that cannot cross each other. Finding the minimum cost array cable layout thus amounts to a vehicle routing problem with the additional constraints that the routes must be embedded in the plane. For this problem, both exact and heuristic methods are of interest. We optimize cable layouts for real-world OWFs by a hop-indexed integer programming formulation, and develop a heuristic for computing layouts based on the Clarke and Wright savings heuristic for vehicle routing. Our heuristic computes layouts on average only 2% more expensive than the optimal layout. Finally, we present two problem extensions arising from real-world OWF cable layouts, and adapt the integer programming formulation to one of them. The thus obtained optimal layouts are up to 13% cheaper than the actually installed layouts.
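    As a rough illustration of the savings idea the heuristic builds on, the following sketch implements the classical Clarke and Wright parallel savings heuristic for a capacitated layout with the transformer as depot and unit demand per turbine; the planarity (non-crossing) constraint and the cable cost structure of the paper are not modelled, and all coordinates are hypothetical.

    ```python
    import numpy as np

    def clarke_wright(coords_depot, coords_turbines, capacity):
        """Parallel Clarke-Wright savings heuristic (unit demand per turbine)."""
        n = len(coords_turbines)
        pts = np.vstack([coords_depot, coords_turbines])          # node 0 = transformer
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

        routes = [[i] for i in range(1, n + 1)]                   # one route per turbine
        route_of = {i: routes[i - 1] for i in range(1, n + 1)}

        savings = sorted(((d[i, 0] + d[0, j] - d[i, j], i, j)
                          for i in range(1, n + 1) for j in range(i + 1, n + 1)),
                         reverse=True)

        for s, i, j in savings:
            ri, rj = route_of[i], route_of[j]
            if ri is rj or len(ri) + len(rj) > capacity:
                continue
            # merge only if i and j are endpoints of their routes (adjacent to the depot)
            if ri[-1] == i and rj[0] == j:
                merged = ri + rj
            elif rj[-1] == j and ri[0] == i:
                merged = rj + ri
            elif ri[0] == i and rj[0] == j:
                merged = ri[::-1] + rj
            elif ri[-1] == i and rj[-1] == j:
                merged = ri + rj[::-1]
            else:
                continue
            for k in merged:
                route_of[k] = merged

        seen, final = set(), []
        for r in route_of.values():
            if id(r) not in seen:
                seen.add(id(r)); final.append(r)
        return final

    rng = np.random.default_rng(1)
    print(clarke_wright((0.0, 0.0), rng.uniform(-1, 1, size=(8, 2)), capacity=4))
    ```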

  18. Energy flow in a bound electromagnetic field: resolution of apparent paradoxes

    International Nuclear Information System (INIS)

    Kholmetskii, A L; Yarman, T

    2008-01-01

    In this paper, we present a resolution of apparent paradoxes formulated in (Kholmetskii A L 2006 Apparent paradoxes in classical electrodynamics: the energy-momentum conservation law for a bound electromagnetic field Eur. J. Phys. 27 825-38; Kholmetskii A L and Yarman T 2008 Apparent paradoxes in classical electrodynamics: a fluid medium in an electromagnetic field Eur. J. Phys. 29 1127) and dealing with the energy flux in a bound electromagnetic field

  19. The use of a Cissus quadrangularis formulation in the management of weight loss and metabolic syndrome

    Directory of Open Access Journals (Sweden)

    Agbor Gabriel

    2006-09-01

Full Text Available Abstract Aim Once considered a problem of developed countries, obesity and obesity-related complications (such as metabolic syndrome) are rapidly spreading around the globe. The purpose of the present study was to investigate the use of a Cissus quadrangularis formulation in the management of metabolic syndrome, particularly weight loss and central obesity. Methods The study was a randomized, double-blind, placebo-controlled design involving 123 overweight and obese persons (47.2% male; 52.8% female; ages 19–50). The 92 obese (BMI >30) participants were randomized into three groups: placebo, formulation/no diet, and formulation/diet (2100–2200 calories/day). The 31 overweight participants (BMI = 25–29) formed a fourth (no diet) treatment group. All participants received two daily doses of the formulation or placebo and remained on a normal or calorie-controlled diet for 8 weeks. Results At the end of the trial period, statistically significant net reductions in weight and central obesity, as well as in fasting blood glucose, total cholesterol, LDL-cholesterol, triglycerides, and C-reactive protein, were observed in participants who received the formulation, regardless of diet. Conclusion The Cissus quadrangularis formulation appears to be useful in the management of weight loss and metabolic syndrome.

  20. Nonlinear Vibration of Oscillation Systems using Frequency-Amplitude Formulation

    Directory of Open Access Journals (Sweden)

    A. Fereidoon

    2012-01-01

Full Text Available In this paper we study the periodic solutions of free vibration of mechanical systems with third- and fifth-order nonlinearity for two examples using He's Frequency-Amplitude Formulation (HFAF). The effectiveness and convenience of the method are illustrated in these examples. It is shown that the solutions obtained with the current method agree very well with those achieved from a time-marching solution. HFAF is simple, based on powerful concepts, and highly accurate, so it can find wide application in vibration problems, especially strongly nonlinear oscillatory problems. A hedged worked instance of the formulation is given after this abstract.
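    As a hedged worked instance (not necessarily one of the paper's two examples), applying HFAF to the cubic Duffing oscillator with residuals weighted over a quarter period recovers the familiar amplitude-frequency relation:

    ```latex
    % Hypothetical worked example: HFAF for the cubic Duffing oscillator
    % u'' + u + \epsilon u^3 = 0,  u(0) = A,  u'(0) = 0.
    % Trial solutions: u_1 = A\cos t (\omega_1 = 1) and u_2 = A\cos\omega t (\omega_2 = \omega).
    \[
      R_i(t) = \ddot{u}_i + u_i + \epsilon u_i^3, \qquad
      \tilde{R}_i = \frac{4}{T_i}\int_0^{T_i/4} R_i(t)\cos(\omega_i t)\,\mathrm{d}t ,
    \]
    \[
      \tilde{R}_1 = \tfrac{3}{8}\epsilon A^3, \qquad
      \tilde{R}_2 = \tfrac{1}{2}A\left(1-\omega^2\right) + \tfrac{3}{8}\epsilon A^3 ,
    \]
    \[
      \omega^2 = \frac{\omega_1^2\tilde{R}_2 - \omega_2^2\tilde{R}_1}{\tilde{R}_2 - \tilde{R}_1}
               = 1 + \tfrac{3}{4}\epsilon A^2
      \quad\Longrightarrow\quad
      \omega \approx \sqrt{1 + \tfrac{3}{4}\epsilon A^2},
    \]
    % which coincides with the classical harmonic-balance estimate for this oscillator.
    ```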

  1. Use of the finite element displacement method to solve solid-fluid interaction vibration problems

    International Nuclear Information System (INIS)

    Brown, S.J.; Hsu, K.H.

    1978-01-01

It is shown through comparison to experimental, theoretical, and other finite element formulations that the finite element displacement method can solve accurately and economically a certain class of solid-fluid eigenvalue problems. The problems considered involve small displacements in the absence of viscous damping and are 2-D and 3-D in nature. In this study the advantages of the finite element method (in particular the displacement formulation) are apparent in that a large structure consisting of the cylinders, support flanges, fluid, and other experimental boundaries could be modeled to yield good correlation with experimental data. The ability to handle large problems with standard structural programs is the key advantage of the displacement fluid method. The greatest obstacle is the inability of the analyst to inhibit those rotational degrees of freedom that are unnecessary to the fluid-structure vibration problem. With judicious use of element formulation, boundary conditions, and modeling, the displacement finite element method can be successfully used to predict solid-fluid response to vibration and seismic loading

  2. A GPU-Based Genetic Algorithm for the P-Median Problem

    OpenAIRE

    AlBdaiwi, Bader F.; AboElFotoh, Hosam M. F.

    2016-01-01

    The p-median problem is a well-known NP-hard problem. Many heuristics have been proposed in the literature for this problem. In this paper, we exploit a GPGPU parallel computing platform to present a new genetic algorithm implemented in Cuda and based on a Pseudo Boolean formulation of the p-median problem. We have tested the effectiveness of our algorithm using a Tesla K40 (2880 Cuda cores) on 290 different benchmark instances obtained from OR-Library, discrete location problems benchmark li...
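    The abstract is truncated and gives no implementation details. As a rough CPU-side illustration of the chromosome encoding and fitness evaluation behind such a genetic algorithm (not the paper's CUDA pseudo-Boolean implementation), consider the following sketch on a random instance; the instance, mutation operator, and simple (1+1)-style loop are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 60, 5                       # demand points and number of medians
    pts = rng.uniform(0, 100, size=(n, 2))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

    def fitness(medians):
        """p-median cost: every point is served by its nearest open median."""
        return dist[:, list(medians)].min(axis=1).sum()

    def mutate(medians):
        """Swap one open median for a closed facility (chromosome = set of p indices)."""
        medians = set(medians)
        medians.remove(rng.choice(list(medians)))
        medians.add(rng.choice(list(set(range(n)) - medians)))
        return frozenset(medians)

    # Crude (1+1)-evolutionary loop standing in for the full GA of the paper
    best = frozenset(rng.choice(n, size=p, replace=False).tolist())
    for _ in range(2000):
        cand = mutate(best)
        if fitness(cand) < fitness(best):
            best = cand
    print(sorted(best), fitness(best))
    ```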

  3. A Fast Algorithm for Image Super-Resolution from Blurred Observations

    Directory of Open Access Journals (Sweden)

    Ng Michael K

    2006-01-01

Full Text Available We study the problem of reconstruction of a high-resolution image from several blurred low-resolution image frames. The image frames consist of blurred, decimated, and noisy versions of a high-resolution image. The high-resolution image is modeled as a Markov random field (MRF), and a maximum a posteriori (MAP) estimation technique is used for the restoration. We show that with the periodic boundary condition, a high-resolution image can be restored efficiently by using fast Fourier transforms. We also apply the preconditioned conjugate gradient method to restore high-resolution images under the aperiodic boundary condition. Computer simulations are given to illustrate the effectiveness of the proposed approach.
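    As a hedged illustration of why the periodic boundary condition helps, the following numpy sketch performs only the FFT-diagonalized, Tikhonov-regularized deblurring step on a synthetic single frame; the decimation, multi-frame fusion, and MRF/MAP prior of the paper are not reproduced, and the blur kernel and image are invented for the example.

    ```python
    import numpy as np

    def fft_deblur(blurred, psf, lam=1e-2):
        """Tikhonov-regularized deconvolution under periodic boundary conditions.

        With circular convolution the blur matrix is diagonalized by the FFT,
        so the regularized inverse reduces to a pixelwise division in Fourier space.
        """
        H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
        Y = np.fft.fft2(blurred)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
        return np.real(np.fft.ifft2(X))

    # Synthetic test: Gaussian blur of a box image plus noise
    n = 64
    x = np.zeros((n, n)); x[24:40, 24:40] = 1.0
    g = np.arange(n) - n // 2
    psf = np.exp(-(g[:, None] ** 2 + g[None, :] ** 2) / (2 * 2.0 ** 2))
    psf /= psf.sum()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))
    blurred += 1e-3 * np.random.randn(n, n)
    restored = fft_deblur(blurred, psf, lam=1e-2)
    ```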

  4. Some open problems in noncommutative probability

    International Nuclear Information System (INIS)

    Kruszynski, P.

    1981-01-01

    A generalization of probability measures to non-Boolean structures is discussed. The starting point of the theory is the Gleason theorem about the form of measures on closed subspaces of a Hilbert space. The problems are formulated in terms of probability on lattices of projections in arbitrary von Neumann algebras. (Auth.)

  5. Coupled heat conduction and thermal stress formulation using explicit integration

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Kulak, R.F.

    1982-06-01

    The formulation needed for the conductance of heat by means of explicit integration is presented. The implementation of these expressions into a transient structural code, which is also based on explicit temporal integration, is described. Comparisons of theoretical results with code predictions are given both for one-dimensional and two-dimensional problems. The coupled thermal and structural solution of a concrete crucible, when subjected to a sudden temperature increase, shows the history of cracking. The extent of cracking is compared with experimental data
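    As a minimal, hedged illustration of explicit temporal integration for conduction (not the coupled thermal-structural formulation of the paper), a one-dimensional forward-Euler (FTCS) update with the usual stability restriction looks like the sketch below; the material values and boundary conditions are arbitrary.

    ```python
    import numpy as np

    # 1D explicit (FTCS) heat conduction: dT/dt = alpha * d2T/dx2
    alpha, L, nx = 1.0e-5, 0.1, 51           # diffusivity [m^2/s], length [m], nodes
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha                  # below the explicit stability limit dt <= dx^2 / (2*alpha)

    T = np.full(nx, 20.0)                     # initial temperature [C]
    T[0] = 300.0                              # sudden temperature increase at one face

    for _ in range(2000):
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                         # insulated far face (zero-gradient)

    print(T[:5])
    ```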

  6. Approximate solutions for the two-dimensional integral transport equation. Solution of complex two-dimensional transport problems

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1980-11-01

    This work is divided into two parts: the first part deals with the solution of complex two-dimensional transport problems, the second one (note CEA-N-2166) treats the critically mixed methods of resolution. A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the interface current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding, and water, or homogenized structural material. The cells are divided into zones that are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is effected by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: CALLIOPE uses a cylindrical cell model and one or three terms for the flux expansion, and NAUSICAA uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes, one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems and by calculations performed in the APOLLO multigroup code [fr

  7. Tactile friction of topical formulations.

    Science.gov (United States)

    Skedung, L; Buraczewska-Norin, I; Dawood, N; Rutland, M W; Ringstad, L

    2016-02-01

    The tactile perception is essential for all types of topical formulations (cosmetic, pharmaceutical, medical device) and the possibility to predict the sensorial response by using instrumental methods instead of sensory testing would save time and cost at an early stage product development. Here, we report on an instrumental evaluation method using tactile friction measurements to estimate perceptual attributes of topical formulations. Friction was measured between an index finger and an artificial skin substrate after application of formulations using a force sensor. Both model formulations of liquid crystalline phase structures with significantly different tactile properties, as well as commercial pharmaceutical moisturizing creams being more tactile-similar, were investigated. Friction coefficients were calculated as the ratio of the friction force to the applied load. The structures of the model formulations and phase transitions as a result of water evaporation were identified using optical microscopy. The friction device could distinguish friction coefficients between the phase structures, as well as the commercial creams after spreading and absorption into the substrate. In addition, phase transitions resulting in alterations in the feel of the formulations could be detected. A correlation was established between skin hydration and friction coefficient, where hydrated skin gave rise to higher friction. Also a link between skin smoothening and finger friction was established for the commercial moisturizing creams, although further investigations are needed to analyse this and correlations with other sensorial attributes in more detail. The present investigation shows that tactile friction measurements have potential as an alternative or complement in the evaluation of perception of topical formulations. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. A Convex Formulation for Magnetic Particle Imaging X-Space Reconstruction.

    Science.gov (United States)

    Konkle, Justin J; Goodwill, Patrick W; Hensley, Daniel W; Orendorff, Ryan D; Lustig, Michael; Conolly, Steven M

    2015-01-01

    Magnetic Particle Imaging (mpi) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative mpi across rat sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the mpi signal is lost during direct-feedthrough signal filtering, mpi reconstruction algorithms must recover this zero-frequency value. Prior x-space mpi recovery techniques were limited to 1d approaches which could introduce artifacts when reconstructing a 3d image. In this paper, we formulate x-space reconstruction as a 3d convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications.
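    As a rough, low-dimensional stand-in for the 3D convex x-space problem, the following sketch minimizes a least-squares data term plus a smoothness penalty under a non-negativity constraint using projected gradient descent; the system matrix, regularization weight, and step rule are illustrative assumptions, not the paper's formulation or solver.

    ```python
    import numpy as np

    def projected_gradient(A, b, lam=0.1, n_iter=500):
        """Minimize ||A x - b||^2 + lam * ||D x||^2 subject to x >= 0,
        where D is a first-difference operator (smoothness prior)."""
        n = A.shape[1]
        D = np.eye(n) - np.eye(n, k=1)
        D = D[:-1]                                           # first differences
        Lq = 2 * (np.linalg.norm(A, 2) ** 2 + lam * np.linalg.norm(D, 2) ** 2)
        x = np.zeros(n)
        for _ in range(n_iter):
            grad = 2 * A.T @ (A @ x - b) + 2 * lam * D.T @ (D @ x)
            x = np.maximum(0.0, x - grad / Lq)               # gradient step + projection
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(80, 50))
    x_true = np.maximum(0.0, np.sin(np.linspace(0, 3 * np.pi, 50)))
    b = A @ x_true + 0.01 * rng.normal(size=80)
    x_rec = projected_gradient(A, b)
    ```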

  9. Usage of humic materials for formulation of stable microbial inoculants

    Science.gov (United States)

    Kydralieva, K. A.; Khudaibergenova, B. M.; Elchin, A. A.; Gorbunova, N. V.; Muratov, V. S.; Jorobekova, Sh. J.

    2009-04-01

Some microbes have been domesticated for environmental services, for example in a variety of novel applications, including efforts to reduce environmental problems. For instance, antagonistic organisms can be used as biological control agents to reduce the use of chemical pesticides, or efficient degraders can be applied as bioprophylactics to minimise the spread of chemical pollutants. Microorganisms can also be used for the biological clean-up of polluted soil or as plant growth-promoting bacteria that stimulate nutrient uptake. Many microbial applications require large-scale cultivation of the organisms. The biomass production must then be followed by formulation steps to ensure long-term stability and convenient use. However, there remains a need to further develop knowledge on how to optimise fermentation of "non-conventional microorganisms" for environmental applications involving intact living cells. The goal of the presented study is to develop fermentation and formulation techniques for thermolabile rhizobacteria isolates - Pseudomonas spp. with major biotechnical potential. The development of efficient and cost-effective media and process parameters giving high cell yields is an important priority. This also involves establishing fermentation parameters yielding cells well adapted to subsequent formulation procedures. Collectively, these strategies will deliver a high proportion of viable cells with good long-term survival. Our main efforts were focused on the development of more efficient drying techniques for microorganisms, particularly spray drying and fluidised-bed drying. The advantages of dry formulations are that storage and delivery costs are much lower than for liquid formulations and that long-term survival can be very high if initial packaging is carefully optimised. In order to improve and optimise formulations, various kinds of humics-based excipients have been added that have beneficial effects on the viability of the organisms and the storage stability.

  10. Stability and attraction domains of traffic equilibria in a day-to-day dynamical system formulation.

    NARCIS (Netherlands)

    Bie, Jing; Lo, Hong K.

    2010-01-01

    We formulate the traffic assignment problem from a dynamical system approach. All exogenous factors are considered to be constant over time and user equilibrium is being pursued through a day-to-day adjustment process. The traffic dynamics is represented by a recurrence function, which governs the

  11. A note on the strong formulation of stochastic control problems with model uncertainty

    OpenAIRE

    Sirbu, Mihai

    2014-01-01

    We consider a  Markovian stochastic control problem with  model uncertainty. The controller (intelligent player) observes only the state, and, therefore, uses feedback (closed-loop) strategies.  The adverse player (nature) who does not have a direct interest in the payoff, chooses open-loop controls that parametrize Knightian uncertainty. This creates a two-step optimization  problem (like half of a game) over feedback strategies and open-loop controls. The main result is to sh...

  12. Determination of flexibility factors in curved pipes with end restraints using a semi-analytic formulation

    International Nuclear Information System (INIS)

    Fonseca, E.M.M.; Melo, F.J.M.Q. de; Oliveira, C.A.M.

    2002-01-01

    Piping systems are structural sets used in the chemical industry, conventional or nuclear power plants and fluid transport in general-purpose process equipment. They include curved elements built as parts of toroidal thin-walled structures. The mechanical behaviour of such structural assemblies is of leading importance for satisfactory performance and safety standards of the installations. This paper presents a semi-analytic formulation based on Fourier trigonometric series for solving the pure bending problem in curved pipes. A pipe element is considered as a part of a toroidal shell. A displacement formulation pipe element was developed with Fourier series. The solution of this problem is solved from a system of differential equations using mathematical software. To build-up the solution, a simple but efficient deformation model, from a semi-membrane behaviour, was followed here, given the geometry and thin shell assumption. The flexibility factors are compared with the ASME code for some elbow dimensions adopted from ISO 1127. The stress field distribution was also calculated

  13. Nonnegative constraint quadratic program technique to enhance the resolution of γ spectra

    Science.gov (United States)

    Li, Jinglun; Xiao, Wuyun; Ai, Xianyun; Chen, Ye

    2018-04-01

    Two concepts of the nonnegative least squares problem (NNLS) and the linear complementarity problem (LCP) are introduced for the resolution enhancement of the γ spectra. The respective algorithms such as the active set method and the primal-dual interior point method are applied to solve the above two problems. In mathematics, the nonnegative constraint results in the sparsity of the optimal solution of the deconvolution, and it is this sparsity that enhances the resolution. Finally, a comparison in the peak position accuracy and the computation time is made between these two methods and the boosted L_R and Gold methods.
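    As a hedged illustration of the NNLS half of the comparison, the following sketch deconvolves a synthetic two-peak spectrum through a Gaussian detector-response matrix with scipy.optimize.nnls (an active-set solver); the response model, peak positions, and noise level are invented for the example and are not taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Synthetic gamma spectrum: two close peaks blurred by a Gaussian detector response
    n, sigma = 200, 4.0
    ch = np.arange(n)
    A = np.exp(-0.5 * ((ch[:, None] - ch[None, :]) / sigma) ** 2)   # response matrix
    A /= A.sum(axis=0)

    x_true = np.zeros(n); x_true[[90, 102]] = [1000.0, 600.0]
    y = A @ x_true + np.random.poisson(5.0, size=n)                  # blurred spectrum + background

    # Nonnegativity-constrained least squares sharpens the overlapping peaks
    x_hat, residual_norm = nnls(A, y)
    print("recovered peak channels:", np.argsort(x_hat)[-2:])
    ```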

  14. Inverse problem for in vivo NMR spatial localization

    International Nuclear Information System (INIS)

    Hasenfeld, A.C.

    1985-11-01

    The basic physical problem of NMR spatial localization is considered. To study diseased sites, one must solve the problem of adequately localizing the NMR signal. We formulate this as an inverse problem. As the NMR Bloch equations determine the motion of nuclear spins in applied magnetic fields, a theoretical study is undertaken to answer the question of how to design magnetic field configurations to achieve these localized excited spin populations. Because of physical constraints in the production of the relevant radiofrequency fields, the problem factors into a temporal one and a spatial one. We formulate the temporal problem as a nonlinear transformation, called the Bloch Transform, from the rf input to the magnetization response. In trying to invert this transformation, both linear (for the Fourier Transform) and nonlinear (for the Bloch Transform) modes of radiofrequency excitation are constructed. The spatial problem is essentially a statics problem for the Maxwell equations of electromagnetism, as the wavelengths of the radiation considered are on the order of ten meters, and so propagation effects are negligible. In the general case, analytic solutions are unavailable, and so the methods of computer simulation are used to map the rf field spatial profiles. Numerical experiments are also performed to verify the theoretical analysis, and experimental confirmation of the theory is carried out on the 0.5 Tesla IBM/Oxford Imaging Spectrometer at the LBL NMR Medical Imaging Facility. While no explicit inverse is constructed to ''solve'' this problem, the combined theoretical/numerical analysis is validated experimentally, justifying the approximations made. 56 refs., 31 figs

  15. Inverse problem for in vivo NMR spatial localization

    Energy Technology Data Exchange (ETDEWEB)

    Hasenfeld, A.C.

    1985-11-01

    The basic physical problem of NMR spatial localization is considered. To study diseased sites, one must solve the problem of adequately localizing the NMR signal. We formulate this as an inverse problem. As the NMR Bloch equations determine the motion of nuclear spins in applied magnetic fields, a theoretical study is undertaken to answer the question of how to design magnetic field configurations to achieve these localized excited spin populations. Because of physical constraints in the production of the relevant radiofrequency fields, the problem factors into a temporal one and a spatial one. We formulate the temporal problem as a nonlinear transformation, called the Bloch Transform, from the rf input to the magnetization response. In trying to invert this transformation, both linear (for the Fourier Transform) and nonlinear (for the Bloch Transform) modes of radiofrequency excitation are constructed. The spatial problem is essentially a statics problem for the Maxwell equations of electromagnetism, as the wavelengths of the radiation considered are on the order of ten meters, and so propagation effects are negligible. In the general case, analytic solutions are unavailable, and so the methods of computer simulation are used to map the rf field spatial profiles. Numerical experiments are also performed to verify the theoretical analysis, and experimental confirmation of the theory is carried out on the 0.5 Tesla IBM/Oxford Imaging Spectrometer at the LBL NMR Medical Imaging Facility. While no explicit inverse is constructed to ''solve'' this problem, the combined theoretical/numerical analysis is validated experimentally, justifying the approximations made. 56 refs., 31 figs.

  16. Low-resolution refinement tools in REFMAC5

    International Nuclear Information System (INIS)

    Nicholls, Robert A.; Long, Fei; Murshudov, Garib N.

    2012-01-01

    Low-resolution refinement tools implemented in REFMAC5 are described, including the use of external structural restraints, helical restraints and regularized anisotropic map sharpening. Two aspects of low-resolution macromolecular crystal structure analysis are considered: (i) the use of reference structures and structural units for provision of structural prior information and (ii) map sharpening in the presence of noise and the effects of Fourier series termination. The generation of interatomic distance restraints by ProSMART and their subsequent application in REFMAC5 is described. It is shown that the use of such external structural information can enhance the reliability of derived atomic models and stabilize refinement. The problem of map sharpening is considered as an inverse deblurring problem and is solved using Tikhonov regularizers. It is demonstrated that this type of map sharpening can automatically produce a map with more structural features whilst maintaining connectivity. Tests show that both of these directions are promising, although more work needs to be performed in order to further exploit structural information and to address the problem of reliable electron-density calculation

  17. Measuring the Level of Transfer Learning by an AP Physics Problem-Solver

    National Research Council Canada - National Science Library

    Klenk, Matthew; Forbus, Ken

    2007-01-01

    .... We approach this problem by focusing on model formulation, i.e., how to move from the unruly, broad set of concepts used in everyday life to a concise, formal vocabulary of abstractions that can be used effectively for problem solving...

  18. Completion strategy or emphasis manipulation? Task support for teaching information problem solving

    NARCIS (Netherlands)

    Frerejean, Jimmy; Van Strien, Johan; Kirschner, Paul A.; Brand-Gruwel, Saskia

    2016-01-01

    While most students seem to solve information problems effortlessly, research shows that the cognitive skills for effective information problem solving are often underdeveloped. Students manage to find information and formulate solutions, but the quality of their process and product is questionable.

  19. Completion strategy or emphasis manipulation? : Task support for teaching information problem solving

    NARCIS (Netherlands)

    Frerejean, Jimmy; van Strien, J.L.H.; Kirschner, Paul A.; Brand-Gruwel, Saskia

    While most students seem to solve information problems effortlessly, research shows that the cognitive skills for effective information problem solving are often underdeveloped. Students manage to find information and formulate solutions, but the quality of their process and product is questionable.

  20. ADOPTING THE PROBLEM BASED LEARNING APPROACH IN A GIS PROJECT MANAGEMENT CLASS

    Science.gov (United States)

    Problem Based Learning (PBL) is a process that emphasizes the need for developing problem solving skills through hands-on project formulation and management. A class adopting the PBL method provides students with an environment to acquire necessary knowledge to encounter, unders...