WorldWideScience

Sample records for constrained weighted-integration method

  1. Null Space Integration Method for Constrained Multibody Systems with No Constraint Violation

    International Nuclear Information System (INIS)

    Terze, Zdravko; Lefeber, Dirk; Muftic, Osman

    2001-01-01

    A method for integrating the equations of motion of constrained multibody systems with no constraint violation is presented. A mathematical model, shaped as a differential-algebraic system of index 1, is transformed into a system of ordinary differential equations using the null-space projection method. The equations of motion are set in a non-minimal form. During integration, violations of the constraints are corrected by solving the constraint equations at the position and velocity levels, utilizing the metric of the system's configuration space and a projective criterion for the coordinate partitioning method. The method is applied to the dynamic simulation of a 3D constrained biomechanical system. The simulation results are evaluated by comparing them to the values of characteristic parameters obtained by kinematic analysis of the analyzed motion based on measured kinematics data.
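
    As a minimal illustration of position- and velocity-level constraint correction, the Python sketch below projects the state of a planar pendulum back onto a single holonomic constraint. It is a toy example under simplifying assumptions (one constraint, identity mass metric); the record's method additionally uses the null-space projection and the metric of the configuration space.

```python
import numpy as np

# Sketch: position/velocity-level constraint correction for a planar pendulum
# with the holonomic constraint C(q) = x^2 + y^2 - L^2 = 0.

L = 1.0

def C(q):                       # constraint value
    return q @ q - L**2

def J(q):                       # constraint Jacobian dC/dq (1 x 2)
    return 2.0 * q

def correct_position(q, tol=1e-12, max_iter=20):
    """Newton iteration projecting q back onto C(q) = 0."""
    for _ in range(max_iter):
        c, j = C(q), J(q)
        if abs(c) < tol:
            break
        q = q - j * c / (j @ j)   # minimum-norm Newton update
    return q

def correct_velocity(q, v):
    """Remove the velocity component violating J(q) v = 0."""
    j = J(q)
    return v - j * (j @ v) / (j @ j)

# Usage: after an unconstrained ODE step produced (q, v) with constraint drift
q = correct_position(np.array([0.72, 0.71]))
v = correct_velocity(q, np.array([1.0, 0.3]))
print(C(q), J(q) @ v)            # both ~0 after correction
```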

  2. Priority classes and weighted constrained equal awards rules for the claims problem

    DEFF Research Database (Denmark)

    Szwagrzak, Karol

    2015-01-01

    . They are priority-augmented versions of the standard weighted constrained equal awards rules, also known as weighted gains methods (Moulin, 2000): individuals are sorted into priority classes; the resource is distributed among the individuals in the first priority class using a weighted constrained equal awards...... rule; if some of the resource is left over, then it is distributed among the individuals in the second priority class, again using a weighted constrained equal awards rule; the distribution carries on in this way until the resource is exhausted. Our characterization extends to a generalized version...
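
    The rule described in this record can be sketched directly: within each priority class, claimant i receives min(c_i, lambda * w_i), with lambda chosen so that the class exhausts what is available, and any remainder passes to the next class. The Python sketch below finds lambda by bisection; the claims, weights and classes in the example are hypothetical.

```python
def weighted_cea(claims, weights, amount, tol=1e-9):
    """Weighted constrained equal awards within one class: claimant i receives
    min(claims[i], lam * weights[i]), with lam chosen so the awards exhaust `amount`."""
    if amount >= sum(claims):                # enough to honour every claim in full
        return list(claims)
    lo, hi = 0.0, max(c / w for c, w in zip(claims, weights)) + 1.0
    while hi - lo > tol:                     # bisection on lam
        lam = (lo + hi) / 2.0
        total = sum(min(c, lam * w) for c, w in zip(claims, weights))
        if total > amount:
            hi = lam
        else:
            lo = lam
    return [min(c, lo * w) for c, w in zip(claims, weights)]

def priority_weighted_cea(classes, amount):
    """Distribute `amount` over priority classes (highest priority first);
    each class is a list of (claim, weight) pairs."""
    awards = []
    for cls in classes:
        claims = [c for c, _ in cls]
        weights = [w for _, w in cls]
        alloc = weighted_cea(claims, weights, min(amount, sum(claims)))
        awards.append(alloc)
        amount -= sum(alloc)
    return awards

# Example: two priority classes sharing 100 units
print(priority_weighted_cea([[(60, 1), (50, 2)], [(40, 1)]], 100))
```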

  3. Free and constrained symplectic integrators for numerical general relativity

    International Nuclear Information System (INIS)

    Richter, Ronny; Lubich, Christian

    2008-01-01

    We consider symplectic time integrators in numerical general relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Stoermer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamilton constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Stoermer-Verlet method. The behavior of the methods is illustrated on two effectively (1+1)-dimensional versions of Einstein's equations, which allow us to investigate a perturbed Minkowski problem and the Schwarzschild spacetime. We compare symplectic and non-symplectic integrators for free evolution, showing very different numerical behavior for nearly-conserved quantities in the perturbed Minkowski problem. Further we compare free and constrained evolution, demonstrating in our examples that enforcing the momentum constraints can turn an unstable free evolution into a stable constrained evolution. This is demonstrated in the stabilization of a perturbed Minkowski problem with Dirac gauge, and in the suppression of the propagation of boundary instabilities into the interior of the domain in Schwarzschild spacetime
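
    For reference, a generic Stoermer-Verlet step for a separable Hamiltonian H(q, p) = T(p) + V(q) looks as follows. This shows only the basic integrator named in the record, not the ADM-like free evolution scheme or the constrained RATTLE variant discussed there.

```python
import numpy as np

def stormer_verlet_step(q, p, grad_V, h, inv_mass=1.0):
    """One Stoermer-Verlet (leapfrog) step for H(q, p) = p^2/(2m) + V(q):
    half kick, full drift, half kick.  Symplectic and time-reversible."""
    p = p - 0.5 * h * grad_V(q)      # half step in momentum
    q = q + h * inv_mass * p         # full step in position
    p = p - 0.5 * h * grad_V(q)      # half step in momentum
    return q, p

# Usage: harmonic oscillator V(q) = q^2/2, energy stays bounded over long times
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = stormer_verlet_step(q, p, grad_V=lambda q: q, h=0.1)
print(q, p, 0.5 * (p**2 + q**2))     # energy remains close to 0.5
```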

  4. Changes of gait parameters following constrained-weight shift training in patients with stroke

    OpenAIRE

    Nam, Seok Hyun; Son, Sung Min; Kim, Kyoung

    2017-01-01

    [Purpose] This study aimed to investigate the effects of training involving compelled weight shift on the paretic lower limb on gait parameters and plantar pressure distribution in patients with stroke. [Subjects and Methods] Forty-five stroke patients participated in the study and were randomly divided into: group with a 5-mm lift on the non-paretic side for constrained weight shift training (5: constrained weight shift training) (n=15); group with a 10-mm lift on the non-paretic side for co...

  5. An integration weighting method to evaluate extremum coordinates

    International Nuclear Information System (INIS)

    Ilyushchenko, V.I.

    1990-01-01

    The numerical version of the Laplace asymptotics has been used to evaluate the coordinates of extrema of multivariate continuous and discontinuous test functions. The computer experiments performed demonstrate the high efficiency of the proposed integration method. The saturating dependence of the extremum coordinates on parameters such as the number of integration subregions and the exponent K going (theoretically) to infinity has been studied in detail, the limitand being a ratio of two Laplace integrals with exponentiated K. The given method is an integral equivalent of the method of weighted means. As opposed to the standard optimization methods of zeroth, first and second order, the proposed method can also be successfully applied to optimize discontinuous objective functions. The integration method can be applied in cases where the conventional techniques fail due to poor analytical properties of the objective functions near the extremal points. The proposed method is efficient in searching for both local and global extrema of multimodal objective functions. 12 refs.; 4 tabs
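
    A small numerical sketch of the underlying idea, under the assumption that the extremum sought is a maximum: the weighted mean x*(K) = ∫ x exp(K f(x)) dx / ∫ exp(K f(x)) dx, a ratio of two Laplace-type integrals, saturates at the maximizer as K grows, even for a discontinuous objective. The test function below is hypothetical.

```python
import numpy as np

# Weighted-integration estimate of a maximizer:
#   x*(K) = int x exp(K f(x)) dx / int exp(K f(x)) dx  ->  argmax f  as K -> infinity.

def f(x):
    # discontinuous test function: jump at x = 0.6, global maximum at x = 0.7
    return np.where(x < 0.6, -((x - 0.3) ** 2), 1.0 - (x - 0.7) ** 2)

x = np.linspace(0.0, 1.0, 20001)
for K in (10, 100, 1000):
    w = np.exp(K * (f(x) - f(x).max()))           # subtract max for numerical stability
    x_star = np.trapz(x * w, x) / np.trapz(w, x)  # ratio of two Laplace-type integrals
    print(K, x_star)                              # saturates near the global maximizer 0.7
```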

  6. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references

  7. Measurement of H'(0.07) with pulse height weighting integration method

    International Nuclear Information System (INIS)

    Liye, LIU; Gang, JIN; Jizeng, MA

    2002-01-01

    H'(0.07) is an important quantity for radiation field measurements in health physics. One of the plastic scintillator measurement methods employs the weak current produced by the PMT. However, the current method has some weaknesses; for instance, it is sensitive to environmental humidity and temperature and has a non-linear energy response. In order to increase the precision of H'(0.07) measurement, a Pulse Height Weighting Integration Method is introduced for its advantages: low noise, high sensitivity, data processability, and a wide measurement range. The Pulse Height Weighting Integration Method appears to be acceptable for measuring the directional dose equivalent. The representative theoretical energy response of the described method accords with the preliminary experimental result.

  8. A Local Weighted Nearest Neighbor Algorithm and a Weighted and Constrained Least-Squared Method for Mixed Odor Analysis by Electronic Nose Systems

    Directory of Open Access Journals (Sweden)

    Jyuo-Min Shyu

    2010-11-01

    Full Text Available A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparing with a known odor dataset. However, in many situations, it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor components analysis for electronic nose systems. First, a K-nearest neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. According to the component analysis, the odor training data is firstly categorized into several groups, each of which is represented by its centroid. The examined odor is then classified as the class of the nearest centroid. The distance between the examined odor and the centroid is calculated based on a weighting scheme, which captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regressions. Then, a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method can provide good estimates on component concentrations.
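
    The concentration-estimation step can be sketched as a weighted, nonnegativity-constrained least-squares fit of a linear response model; the specific weights and constraints used in the paper may differ, and the response matrix below is hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def wcls_concentrations(R, y, sensor_weights):
    """Estimate component concentrations c >= 0 from sensor responses y ~ R c,
    weighting each sensor equation by its reliability (weighted nonnegative
    least squares): minimize || W^(1/2) (R c - y) ||^2  s.t.  c >= 0."""
    w = np.sqrt(np.asarray(sensor_weights, dtype=float))
    c, _ = nnls(R * w[:, None], y * w)
    return c

# Toy example: 4 sensors, 2 odor components, known per-component response model R
R = np.array([[1.0, 0.2],
              [0.5, 0.9],
              [0.3, 0.4],
              [0.8, 0.1]])
true_c = np.array([2.0, 1.5])
y = R @ true_c + 0.01 * np.random.randn(4)
print(wcls_concentrations(R, y, sensor_weights=[1.0, 1.0, 0.5, 2.0]))
```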

  9. New Internet search volume-based weighting method for integrating various environmental impacts

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    2016-01-15

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.

  10. New Internet search volume-based weighting method for integrating various environmental impacts

    International Nuclear Information System (INIS)

    Ji, Changyoon; Hong, Taehoon

    2016-01-01

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.

  11. Improving Allergen Prediction in Main Crops Using a Weighted Integrative Method.

    Science.gov (United States)

    Li, Jing; Wang, Jing; Li, Jing

    2017-12-01

    As a public health problem, food allergy is frequently caused by food allergen proteins, which trigger a type-I hypersensitivity reaction in the immune system of atopic individuals. The food allergens in our daily lives come mainly from crops including rice, wheat, soybean and maize. However, the allergens in these main crops are far from fully uncovered. Although some bioinformatics tools or methods for predicting the potential allergenicity of proteins have been proposed, each method has its limitations. In this paper, we built a novel algorithm, PREAL_W, which integrates PREAL, the FAO/WHO criteria and a motif-based method by a weighted average score, to combine the advantages of the different methods. Our results illustrate that PREAL_W performs significantly better in allergen prediction for these crops. This integrative allergen prediction algorithm could be useful for critical food safety matters. PREAL_W can be accessed at http://lilab.life.sjtu.edu.cn:8080/prealw .

  12. GIS-Based Integration of Subjective and Objective Weighting Methods for Regional Landslides Susceptibility Mapping

    Directory of Open Access Journals (Sweden)

    Suhua Zhou

    2016-04-01

    Full Text Available The development of landslide susceptibility maps is of great importance due to rapid urbanization. The purpose of this study is to present a method to integrate the subjective weight with objective weight for regional landslide susceptibility mapping on the geographical information system (GIS) platform. The analytical hierarchy process (AHP), which is subjective, was employed to weight predictive factors' contribution to landslide occurrence. The frequency ratio (FR) method, which is objective, was used to derive subclasses' frequency ratios with respect to landslides, which indicate the relative importance of a subclass within each predictive factor. A case study was carried out at Tsushima Island, Japan, using a historical inventory of 534 landslides and seven predictive factors: elevation, slope, aspect, terrain roughness index (TRI), lithology, land cover and mean annual precipitation (MAP). The landslide susceptibility index (LSI) was calculated using the weighted linear combination of factors' weights and subclasses' weights. The study area was classified into five susceptibility zones according to the LSI. In addition, the produced susceptibility map was compared with maps generated using the conventional FR and AHP methods and validated using the relative landslide index (RLI). The validation result showed that the proposed method performed better than the conventional application of the FR method and AHP method. The obtained landslide susceptibility maps could serve as a scientific basis for urban planning and landslide hazard management.
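
    A minimal sketch of how the objective frequency ratios and the subjective AHP factor weights combine into a landslide susceptibility index for one map cell; all numbers below are hypothetical.

```python
import numpy as np

def frequency_ratio(landslides_in_subclass, area_of_subclass, total_landslides, total_area):
    """FR of a factor subclass = (share of landslides) / (share of area).
    FR > 1 means the subclass is more landslide-prone than average."""
    return (landslides_in_subclass / total_landslides) / (area_of_subclass / total_area)

def landslide_susceptibility_index(ahp_weights, fr_of_cell):
    """LSI of a map cell: weighted linear combination of the frequency ratios of
    the subclasses the cell falls into, weighted by the AHP factor weights."""
    return float(np.dot(ahp_weights, fr_of_cell))

# Hypothetical example with three predictive factors (slope, lithology, land cover)
ahp_weights = np.array([0.5, 0.3, 0.2])            # subjective factor weights from AHP
fr_slope = frequency_ratio(120, 8.0, 534, 70.0)    # steep-slope subclass (counts/areas hypothetical)
fr_litho = 0.6                                     # FR of the cell's lithology subclass
fr_cover = 1.2                                     # FR of the cell's land-cover subclass
print(landslide_susceptibility_index(ahp_weights, [fr_slope, fr_litho, fr_cover]))
```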

  13. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    International Nuclear Information System (INIS)

    Zhang Yufeng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, for which a few expanding integrable models including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy and the dispersive long wave (DLW) hierarchy as well as the TB hierarchy are obtained. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. Especially, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and t_n-constrained flows whose adjoint representations and the Lax pairs are given. (general)

  14. A constrained Hartree-Fock-Bogoliubov equation derived from the double variational method

    International Nuclear Information System (INIS)

    Onishi, Naoki; Horibata, Takatoshi.

    1980-01-01

    The double variational method is applied to the intrinsic state of the generalized BCS wave function. A constrained Hartree-Fock-Bogoliubov equation is derived explicitly in the form of an eigenvalue equation. A method of obtaining approximate overlap and energy overlap integrals is proposed. This will help development of numerical calculations of the angular momentum projection method, especially for general intrinsic wave functions without any symmetry restrictions. (author)

  15. Weight Constrained DEA Measurement of the Quality of Life in Spanish Municipalities in 2011.

    Science.gov (United States)

    González, Eduardo; Cárcaba, Ana; Ventura, Juan

    2018-01-01

    This paper measures quality of life (QoL) in the 393 largest Spanish municipalities in 2011. We follow recent descriptions of QoL dimensions to propose an integrated framework composed of eight dimensions: material living conditions, health, education, environment, economic and physical safety, governance and political voice, social interaction, and personal activities. Using different sources of information we construct 16 indicators, two per each of the QoL dimensions considered. Weight constrained data envelopment analysis (DEA) is then used to estimate a composite indicator of the QoL of each municipality. Robustness is checked by altering the weight ranges introduced within the DEA specification. Results show that the Northern and Central regions in Spain attain the highest levels of QoL, while the Southern and Mediterranean regions report lower scores. These figures are consistent with those obtained by González et al. ( Soc Ind Res 82:111-145 2011) for the Spanish municipalities in 2001, although both the sample and the indicators used are different. The analysis also shows that, while it is important to restrict weights in DEA, the specific restrictions used are less important, since all the composite indicators computed are highly correlated. The results also show important differences between per capita gross domestic product and QoL at the provincial level.

  16. A constrained regularization method for inverting data represented by linear algebraic or integral equations

    Science.gov (United States)

    Provencher, Stephen W.

    1982-09-01

    CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
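
    CONTIN itself is a full Fortran package; as a minimal sketch of the constrained-regularization idea it implements, the Python snippet below solves a smoothness-regularized, nonnegativity-constrained inversion of a Laplace-type kernel by stacking the regularizor into an augmented nonnegative least-squares problem. The kernel and regularization parameter are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def constrained_regularized_inversion(A, b, alpha):
    """Minimize ||A x - b||^2 + alpha * ||D x||^2  subject to  x >= 0,
    where D is a second-difference (smoothness) regularizor, by solving the
    stacked nonnegative least-squares problem [A; sqrt(alpha) D] x ~ [b; 0]."""
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)            # second-difference operator
    A_aug = np.vstack([A, np.sqrt(alpha) * D])
    b_aug = np.concatenate([b, np.zeros(D.shape[0])])
    x, _ = nnls(A_aug, b_aug)
    return x

# Toy Laplace-transform-like kernel: b(t_i) = sum_j exp(-t_i s_j) x(s_j)
t = np.linspace(0.01, 2.0, 40)
s = np.linspace(0.1, 10.0, 30)
A = np.exp(-np.outer(t, s))
x_true = np.exp(-0.5 * ((s - 3.0) / 0.8) ** 2)     # smooth nonnegative spectrum
b = A @ x_true + 1e-3 * np.random.randn(len(t))
print(constrained_regularized_inversion(A, b, alpha=1e-3).round(3))
```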

  17. Canonical Drude Weight for Non-integrable Quantum Spin Chains

    Science.gov (United States)

    Mastropietro, Vieri; Porta, Marcello

    2018-03-01

    The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of Drude weight is directly related to Kubo formula of conductivity. However, the difficulty in the evaluation of such expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in the past years several universality results have been proven for such quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.

  18. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

    We formulate the constrained KP hierarchy (denoted by cKP_{K+1,M}) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. copyright 1997 American Institute of Physics

  19. The integration of weighted gene association networks based on information entropy.

    Science.gov (United States)

    Yang, Fan; Wu, Duzhi; Lin, Limei; Yang, Jian; Yang, Tinghong; Zhao, Jing

    2017-01-01

    Constructing genome scale weighted gene association networks (WGAN) from multiple data sources is one of the research hot spots in systems biology. In this paper, we employ information entropy to describe the uncertain degree of gene-gene links and propose a strategy for data integration of weighted networks. We use this method to integrate four existing human weighted gene association networks and construct a much larger WGAN, which includes richer biological information while still keeping high functional relevance between linked gene pairs. The new WGAN shows satisfactory performance in disease gene prediction, which suggests the reliability of our integration strategy. Compared with existing integration methods, our method takes advantage of the inherent characteristics of the component networks and pays less attention to the biological background of the data. It can make full use of existing biological networks with low computational effort.

  20. OCT despeckling via weighted nuclear norm constrained non-local low-rank representation

    Science.gov (United States)

    Tang, Chang; Zheng, Xiao; Cao, Lijuan

    2017-10-01

    As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method by using non-local, low-rank representation with weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the non-local group patches selection quality, then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corrupted probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups, hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.
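
    The core low-rank operation in this class of methods is a weighted singular value thresholding step (the proximal operator of the weighted nuclear norm). The sketch below shows only that step on a toy patch group; the guidance image, patch grouping, corrupted-probability weights and aggregation used in the paper are omitted, and the reweighting rule shown is a common choice assumed here.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Proximal step for the weighted nuclear norm: soft-threshold each singular
    value by its own weight (small weights for large singular values preserve
    dominant structure; large weights suppress noise-dominated components)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Toy group of similar patches stacked as columns: low-rank signal + speckle-like noise
rng = np.random.default_rng(0)
signal = np.outer(rng.random(64), rng.random(20))             # rank-1 "clean" group
noisy = signal * (1.0 + 0.3 * rng.standard_normal(signal.shape))
sigma = np.linalg.svd(noisy, compute_uv=False)
weights = 1.0 / (sigma + 1e-8)                                # larger sigma -> smaller threshold
denoised = weighted_svt(noisy, weights)
print(np.linalg.matrix_rank(denoised, tol=1e-6))              # low-rank estimate of the group
```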

  1. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid; Barton, Michael

    2015-01-01

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  2. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid

    2015-12-31

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  3. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper presents the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevailing similarity assessment algorithms on the constrained surface deformation domain. Constrained surface deformation has become a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design, by measuring the difference level between the deformed new design and the initial sample model and indicating whether the difference level is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U-system moment based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments on various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our approach outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  4. Extended shadow test approach for constrained adaptive testing

    NARCIS (Netherlands)

    Veldkamp, Bernard P.; Ariel, A.

    2002-01-01

    Several methods have been developed for use in constrained adaptive testing. Item pool partitioning, multistage testing, and testlet-based adaptive testing are methods that perform well for specific cases of adaptive testing. The weighted deviation model and the Shadow Test approach can be more

  5. Mining method selection by integrated AHP and PROMETHEE method.

    Science.gov (United States)

    Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana

    2012-03-01

    Selecting the best mining method among many alternatives is a multicriteria decision making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The related problem includes five possible mining methods and eleven criteria to evaluate them. Criteria are accurately chosen in order to cover the most important parameters that impact the mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. The AHP is used to analyze the structure of the mining method selection problem and to determine weights of the criteria, and the PROMETHEE method is used to obtain the final ranking and to make a sensitivity analysis by changing the weights. The results have shown that the proposed integrated method can be successfully used in solving mining engineering problems.
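
    The AHP weighting step mentioned in this record is commonly computed as the normalized principal eigenvector of the pairwise comparison matrix, with Saaty's consistency ratio as a sanity check; the PROMETHEE ranking step is not shown. The comparison matrix below is hypothetical.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from an AHP pairwise comparison matrix:
    normalized principal right eigenvector, plus Saaty's consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    # Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI / RI
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}
    ci = (eigvals.real[k] - n) / (n - 1)
    cr = ci / RI[n] if RI.get(n, 0.0) > 0 else 0.0
    return w, cr

# Hypothetical 3-criterion comparison (e.g., geology vs. economics vs. geography)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(w.round(3), round(cr, 3))      # CR < 0.1 indicates acceptable consistency
```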

  6. Exact and heuristic solution approaches for the Integrated Job Scheduling and Constrained Network Routing Problem

    DEFF Research Database (Denmark)

    Gamst, M.

    2014-01-01

    This paper examines the problem of scheduling a number of jobs on a finite set of machines such that the overall profit of executed jobs is maximized. Each job has a certain demand, which must be sent to the executing machine via constrained paths. A job cannot start before all its demands have...... arrived at the machine. Furthermore, two resource demand transmissions cannot use the same edge in the same time period. The problem has application in grid computing, where a number of geographically distributed machines work together for solving large problems. The machines are connected through...... problem. The methods are computationally evaluated on test instances arising from telecommunications with up to 500 jobs and 500 machines. Results show that solving the integrated job scheduling and constrained network routing problem to optimality is very difficult. The exact solution approach performs......

  7. Antifungal susceptibility testing method for resource constrained laboratories

    Directory of Open Access Journals (Sweden)

    Khan S

    2006-01-01

    Full Text Available Purpose: In resource-constrained laboratories of developing countries, determination of antifungal susceptibility by the NCCLS/CLSI method is not always feasible. We describe herein a simple yet comparable method for antifungal susceptibility testing. Methods: Reference MICs of 72 fungal isolates, including two quality control strains, were determined by NCCLS/CLSI methods against fluconazole, itraconazole, voriconazole, amphotericin B and cancidas. Dermatophytes were also tested against terbinafine. Subsequently, on selection of optimum conditions, MICs were determined for all the fungal isolates by the semisolid agar antifungal susceptibility method in brain heart infusion broth supplemented with 0.5% agar (BHIA) without oil overlay, and the results were compared with those obtained by the reference NCCLS/CLSI methods. Results: Comparable results were obtained by the NCCLS/CLSI and semisolid agar antifungal susceptibility (SAAS) methods against the quality control strains. MICs for the 72 isolates did not differ by more than one dilution for all drugs by SAAS. Conclusions: SAAS using BHIA without oil overlay provides a simple and reproducible method for obtaining MICs against yeasts, filamentous fungi and dermatophytes in resource-constrained laboratories.

  8. Measurement of the Top Quark Mass in Dilepton Final States with the Neutrino Weighting Method

    Energy Technology Data Exchange (ETDEWEB)

    Ilchenko, Yuriy [Southern Methodist Univ., Dallas, TX (United States)

    2012-12-15

    The top quark is the heaviest fundamental particle observed to date. The mass of the top quark is a free parameter in the Standard Model (SM). A precise measurement of its mass is particularly important as it sets an indirect constraint on the mass of the Higgs boson. It is also a useful constraint on contributions from physics beyond the SM and may play a fundamental role in the electroweak symmetry breaking mechanism. I present a measurement of the top quark mass in the dilepton channel using the Neutrino Weighting Method. The data sample corresponds to an integrated luminosity of 4.3 fb⁻¹ of p$\bar{p}$ collisions at the Tevatron with √s = 1.96 TeV, collected with the DØ detector. Kinematically under-constrained dilepton events are analyzed by integrating over neutrino rapidity. Weight distributions of t$\bar{t}$ signal and background are produced as a function of the top quark mass for different top quark mass hypotheses. The measurement is performed by constructing templates from the moments of the weight distributions and the input top quark mass, followed by a subsequent likelihood fit to data. The dominant systematic uncertainty from jet energy calibration is reduced by using a correction from the ℓ+jets channel. To replicate the quark flavor dependence of the jet response in data, jets in the simulated events are additionally corrected. The result is combined with our preceding measurement on 1 fb⁻¹ and yields m_t = 174.0 ± 2.4 (stat.) ± 1.4 (syst.) GeV.

  9. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm with competitive performance and less computation time than state-of-the-art methods, which consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering using constraints as background knowledge, although easy to implement and quick, has insufficient performance compared with metric learning-based methods. Since it simply adds a function into the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results although the methods proposed so far are often slow depending on the amount of data or number of feature dimensions. We present a method that exploits the advantages of the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method has performance competitive to those of state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. Experimental evaluation demonstrated the effectiveness of controlling the constraint priorities by using the boosting principle and that our constrained k-means algorithm functions correctly as a weak learner of boosting.
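
    For orientation, the baseline that this boosted method builds on is constrained k-means in the COP-KMEANS spirit: a standard k-means loop whose assignment step rejects clusters that would violate must-link or cannot-link constraints. The sketch below shows only that baseline, not the constraint priorities, boosting, or kernel-matrix learning of the paper.

```python
import numpy as np

def violates(i, cluster, assign, must_link, cannot_link):
    """Check whether putting point i into `cluster` breaks any pairwise constraint."""
    for a, b in must_link:
        j = b if a == i else a if b == i else None
        if j is not None and assign[j] != -1 and assign[j] != cluster:
            return True
    for a, b in cannot_link:
        j = b if a == i else a if b == i else None
        if j is not None and assign[j] == cluster:
            return True
    return False

def constrained_kmeans(X, k, must_link, cannot_link, n_iter=50, seed=0):
    """Baseline constrained k-means: standard k-means, but each point is assigned
    to the nearest centroid that does not violate its constraints."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        assign = -np.ones(len(X), dtype=int)
        for i in np.argsort(rng.random(len(X))):           # random point order each pass
            order = np.argsort(np.linalg.norm(X[i] - centroids, axis=1))
            for c in order:
                if not violates(i, c, assign, must_link, cannot_link):
                    assign[i] = c
                    break
            if assign[i] == -1:                             # no feasible cluster found
                assign[i] = order[0]
        centroids = np.array([X[assign == c].mean(axis=0) if np.any(assign == c)
                              else centroids[c] for c in range(k)])
    return assign, centroids

# Usage on toy data with one must-link and one cannot-link pair
X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 4])
labels, cents = constrained_kmeans(X, 2, must_link=[(0, 1)], cannot_link=[(0, 39)])
```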

  10. Integrated Design Optimization of a 5-DOF Assistive Light-weight Anthropomorphic Arm

    DEFF Research Database (Denmark)

    Zhou, Lelai; Bai, Shaoping; Hansen, Michael Rygaard

    2011-01-01

    An integrated dimensional and drive train optimization method was developed for light-weight robotic arm design. The method deals with the determination of optimal link lengths and the optimal selection of motors and gearboxes from commercially available components. Constraints are formulated...... on the basis of kinematic performance and dynamic requirements, whereas the main objective is to minimize the weight. The design of a human-like arm, which is 10 kg in weight with a load capacity of 5 kg, is described....

  11. GPS-based ionospheric tomography with a constrained adaptive ...

    Indian Academy of Sciences (India)

    Gauss weighted function is introduced to constrain the tomography system in the new method. It can resolve the ... the research focus in the fields of space geodesy and ... ment of GNSS such as GPS, Glonass, Galileo and Compass, as these ...

  12. A penalty method for PDE-constrained optimization in inverse problems

    International Nuclear Information System (INIS)

    Leeuwen, T van; Herrmann, F J

    2016-01-01

    Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE for several right-hand sides. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method that aims to combine the advantages of both approaches. Our method is based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method indeed reduces some of the nonlinearity of the problem and is less sensitive to the initial iterate. (paper)
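
    A toy sketch of the penalty idea described here, assuming a tiny linear "PDE" A(m) u = q with a single scalar parameter m: the state u is eliminated by a least-squares solve, and the resulting penalty objective is scanned over m. This illustrates the formulation only; it is not the paper's algorithm.

```python
import numpy as np

# Quadratic-penalty formulation: for A(m) u = q with partial observations d = P u,
#   u(m)   = argmin_u ||P u - d||^2 + lam * ||A(m) u - q||^2   (least-squares solve),
#   phi(m) = ||P u(m) - d||^2 + lam * ||A(m) u(m) - q||^2      (reduced objective).

n = 50
D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)    # discrete Laplacian (negative definite)
q = np.ones(n)
P = np.eye(n)[::5]                                          # observe every 5th grid point
m_true = 0.8
u_true = np.linalg.solve(m_true * np.eye(n) - D, q)         # A(m) = m I - D is positive definite for m > 0
d = P @ u_true

def phi(m, lam=1e2):
    A = m * np.eye(n) - D
    top = np.vstack([P, np.sqrt(lam) * A])                  # stacked least-squares system for the state
    rhs = np.concatenate([d, np.sqrt(lam) * q])
    u, *_ = np.linalg.lstsq(top, rhs, rcond=None)
    return np.sum((P @ u - d) ** 2) + lam * np.sum((A @ u - q) ** 2)

m_grid = np.linspace(0.1, 2.0, 191)
print(m_grid[np.argmin([phi(m) for m in m_grid])])          # recovers m close to 0.8
```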

  13. Use of stratigraphic models as soft information to constrain stochastic modeling of rock properties: Development of the GSLIB-Lynx integration module

    International Nuclear Information System (INIS)

    Cromer, M.V.; Rautman, C.A.

    1995-10-01

    Rock properties in volcanic units at Yucca Mountain are controlled largely by relatively deterministic geologic processes related to the emplacement, cooling, and alteration history of the tuffaceous lithologic sequence. Differences in the lithologic character of the rocks have been used to subdivide the rock sequence into stratigraphic units, and the deterministic nature of the processes responsible for the character of the different units can be used to infer the rock material properties likely to exist in unsampled regions. This report proposes a quantitative, theoretically justified method of integrating interpretive geometric models, showing the three-dimensional distribution of different stratigraphic units, with numerical stochastic simulation techniques drawn from geostatistics. This integration of soft, constraining geologic information with hard, quantitative measurements of various material properties can produce geologically reasonable, spatially correlated models of rock properties that are free from stochastic artifacts for use in subsequent physical-process modeling, such as the numerical representation of ground-water flow and radionuclide transport. Prototype modeling conducted using the GSLIB-Lynx Integration Module computer program, known as GLINTMOD, has successfully demonstrated the proposed integration technique. The method involves the selection of stratigraphic-unit-specific material-property expected values that are then used to constrain the probability function from which a material property of interest at an unsampled location is simulated

  14. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    Science.gov (United States)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography needs neither a priori information nor large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data were taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, and this characteristic is also present in their probability tomography results. We therefore use some rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result from which a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert the final magnetic susceptibility. Synthetic magnetic examples inverted with and without the a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges better. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M

  15. Proven Weight Loss Methods

    Science.gov (United States)

    Fact Sheet: Proven Weight Loss Methods. What can weight loss do for you? Losing weight can improve your health in a number of ways. It can lower ... at www.hormone.org/Spanish .

  16. Subspace Barzilai-Borwein Gradient Method for Large-Scale Bound Constrained Optimization

    International Nuclear Information System (INIS)

    Xiao Yunhai; Hu Qingjie

    2008-01-01

    An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising, and competitive with the well-known method SPG on a subset of bound constrained problems from CUTEr collection
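
    A stripped-down sketch of a projected Barzilai-Borwein gradient iteration for bound constraints: the BB1 step size is computed from successive iterates and gradients, and each step is clipped to the box. The active-set identification and nonmonotone line search of the paper are omitted; the example problem is hypothetical.

```python
import numpy as np

def projected_bb(grad, x0, lower, upper, n_iter=200):
    """Projected Barzilai-Borwein gradient method for min f(x) s.t. l <= x <= u.
    Sketch only: fixed iteration count, no nonmonotone line search or
    active-set identification."""
    x_old = np.clip(x0, lower, upper)
    g_old = grad(x_old)
    x = np.clip(x_old - 0.1 * g_old, lower, upper)               # initial small step
    for _ in range(n_iter):
        g = grad(x)
        s, y = x - x_old, g - g_old
        alpha = (s @ s) / (s @ y) if abs(s @ y) > 1e-12 else 1.0  # BB1 step size
        x_old, g_old = x, g
        x = np.clip(x - alpha * g, lower, upper)                  # gradient step + projection
    return x

# Example: bound-constrained convex quadratic, unconstrained minimum outside the box
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([-4.0, 2.0])
grad = lambda x: A @ x - b                                        # f(x) = 0.5 x'Ax - b'x
print(projected_bb(grad, np.zeros(2), lower=np.zeros(2), upper=np.ones(2)))  # -> about (0, 1)
```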

  17. GPS-based ionospheric tomography with a constrained adaptive ...

    Indian Academy of Sciences (India)

    According to the continuous smoothness of the variations of ionospheric electron density (IED) among neighbouring voxels, Gauss weighted function is introduced to constrain the tomography system in the new method. It can resolve the dependence on the initial values for those voxels without any GPS rays traversing them ...

  18. Performance Analysis of Constrained Loosely Coupled GPS/INS Integration Solutions

    Directory of Open Access Journals (Sweden)

    Fabio Dovis

    2012-11-01

    Full Text Available The paper investigates approaches for loosely coupled GPS/INS integration. Error performance is calculated using a reference trajectory. A performance improvement can be obtained by exploiting additional map information (for example, a road boundary). A constrained solution has been developed and its performance compared with an unconstrained one. The case of GPS outages is also investigated, showing how a Kalman filter that operates on the last received GPS position and velocity measurements provides a performance benefit. Results are obtained by means of simulation studies and real data.

  19. Composite Differential Evolution with Modified Oracle Penalty Method for Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Minggang Dong

    2014-01-01

    Full Text Available Motivated by recent advancements in differential evolution and constraint handling methods, this paper presents a novel modified oracle penalty function-based composite differential evolution (MOCoDE) for constrained optimization problems (COPs). More specifically, the original oracle penalty function approach is modified so as to satisfy the optimization criterion of COPs; then the modified oracle penalty function is incorporated into composite DE. Furthermore, in order to solve more complex COPs with discrete, integer, or binary variables, a discrete variable handling technique is introduced into MOCoDE to solve complex COPs with mixed variables. The method is assessed on eleven constrained optimization benchmark functions and seven well-studied engineering problems from real life. Experimental results demonstrate that MOCoDE achieves competitive performance with respect to some other state-of-the-art approaches in constrained optimization evolutionary algorithms. Moreover, the strengths of the proposed method include its few parameters and its ease of implementation, rendering it applicable to real-life problems. Therefore, MOCoDE can be an efficient alternative for solving constrained optimization problems.

  20. Environmental conflict analysis using an integrated grey clustering and entropy-weight method: A case study of a mining project in Peru.

    OpenAIRE

    Delgado-Villanueva, Kiko Alexi; Romero Gil, Inmaculada

    2016-01-01

    [EN] Environmental conflict analysis (henceforth ECA) has become a key factor for the viability of projects and welfare of affected populations. In this study, we propose an approach for ECA using an integrated grey clustering and entropy-weight method (The IGCEW method). The case study considered a mining project in northern Peru. Three stakeholder groups and seven criteria were identified. The data were gathered by conducting field interviews. The results revealed that for the groups urban ...
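
    Only the entropy-weight half of the IGCEW method is sketched here, in its standard form: criteria with lower Shannon entropy across alternatives (i.e., more variation) receive larger objective weights. The grey clustering step is not shown, and the evaluation matrix is hypothetical.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights via the entropy-weight method.
    X: (alternatives x criteria) matrix of non-negative evaluations.
    Criteria whose values vary more across alternatives (lower entropy)
    receive larger weights."""
    P = X / X.sum(axis=0, keepdims=True)              # column-normalized proportions
    n = X.shape[0]
    k = 1.0 / np.log(n)
    plogp = np.where(P > 0, P * np.log(P, where=(P > 0)), 0.0)
    e = -k * plogp.sum(axis=0)                        # entropy of each criterion
    d = 1.0 - e                                       # degree of diversification
    return d / d.sum()

# Hypothetical evaluation of 4 alternatives on 3 criteria
X = np.array([[0.8, 0.5, 0.2],
              [0.6, 0.5, 0.9],
              [0.9, 0.5, 0.4],
              [0.7, 0.5, 0.7]])
print(entropy_weights(X).round(3))   # the constant criterion (column 2) gets ~0 weight
```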

  1. A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

    Directory of Open Access Journals (Sweden)

    Min Sun

    2014-01-01

    Full Text Available A matrix-free method for constrained equations is proposed, which is a combination of the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method and the famous hyperplane projection method. The new method is not only derivative-free, but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with the existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is attached to the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
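
    The projection half of the method can be sketched as the standard hyperplane-projection update for constrained monotone equations: a trial point found by a line search defines a separating hyperplane, the current iterate is projected onto it, and the result is projected back onto the feasible set. For simplicity the search direction below is plain -F(x) rather than the PRP conjugate-gradient direction, and no relaxation factor is used; the test system is hypothetical.

```python
import numpy as np

def hyperplane_projection_solve(F, x0, max_iter=500, sigma=1e-4, tol=1e-8):
    """Derivative-free hyperplane-projection method for constrained monotone
    equations F(x) = 0, x >= 0.  Sketch: steepest direction -F(x) instead of the
    PRP conjugate-gradient direction used in the paper."""
    proj = lambda x: np.maximum(x, 0.0)        # projection onto the feasible set
    x = proj(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        t = 1.0                                 # backtracking line search for the trial point
        while F(x + t * d) @ d > -sigma * t * (d @ d):
            t *= 0.5
        z = x + t * d
        Fz = F(z)
        beta = (Fz @ (x - z)) / (Fz @ Fz)       # step onto the separating hyperplane
        x = proj(x - beta * Fz)                 # project back onto the feasible set
    return x

# Example: monotone system F(x) = A x + x^3 - b restricted to x >= 0
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
F = lambda x: A @ x + x**3 - b
print(hyperplane_projection_solve(F, np.array([1.0, 1.0])))
```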

  2. Superalloy design - A Monte Carlo constrained optimization method

    CSIR Research Space (South Africa)

    Stander, CM

    1996-01-01

    Full Text Available A method, based on Monte Carlo constrained... successful hit, i.e. when L_low < LMP < L_high and, for all the properties, P_j,low < P_j < P_j,high. If successful, this hit falls within the ROA. Repeat steps 4 and 5 to find at least ten (or more) successful hits with values...

  3. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve the control system performance. PID tuning methods are divided into two categories, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms in the PID parameter tuning process. This research aims to improve the PSO-PID tuning algorithm by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.

  4. A New Method for Improving the Discrimination Power and Weights Dispersion in the Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    S. Kordrostami

    2013-06-01

    Full Text Available The appropriate choice of input-output weights is necessary to have a successful DEA model. Generally, if the number of DMUs, n, is less than the number of inputs and outputs, m+s, then many DMUs are identified as efficient and discrimination between DMUs is not possible. Besides, DEA models are free to choose the best weights. To resolve the problems that result from this freedom of weights, some constraints are set on the input-output weights. Symmetric weight constraints are one kind of weight constraint. In this paper, we present a new model based on multi-criterion data envelopment analysis (MCDEA), developed to moderate the homogeneity of the weight distribution by using symmetric weight constraints. Consequently, we show the improvement in the dispersion of unrealistic input-output weights and the increased discrimination power of our suggested models. Finally, as an application of the new model, we use it to evaluate and rank selected hospitals in Guilan.

  5. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    Full Text Available The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined to use the strengths of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors' original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  6. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    Science.gov (United States)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulted mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  7. The general 2-D moments via integral transform method for acoustic radiation and scattering

    Science.gov (United States)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

    The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.

  8. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Full Text Available Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  9. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
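
    To make the combination of an ODE model with a mixture likelihood concrete, here is a minimal sketch under strongly simplifying assumptions (a single toy kinetic equation, two subpopulations, Gaussian measurement noise; none of this is taken from the paper's Erk1/2 pathway model):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.stats import norm

    def simulate(k, t):
        """Toy single-species kinetics dx/dt = k*(1 - x), x(0) = 0."""
        sol = solve_ivp(lambda _, x: k * (1.0 - x), (0.0, t[-1]), [0.0], t_eval=t)
        return sol.y[0]

    def mixture_neg_log_lik(theta, t, data, sigma=0.1):
        """Negative log-likelihood of single-cell measurements under a
        two-subpopulation ODE-constrained mixture: each subpopulation has its
        own rate constant, mixed with weight w. data[j] holds the cell
        measurements taken at time t[j]."""
        k1, k2, w = theta
        m1, m2 = simulate(k1, t), simulate(k2, t)   # subpopulation mean responses
        ll = 0.0
        for j in range(len(t)):
            p = (w * norm.pdf(data[j], m1[j], sigma)
                 + (1 - w) * norm.pdf(data[j], m2[j], sigma))
            ll += np.sum(np.log(p + 1e-300))
        return -ll
    ```

    Fitting (k1, k2, w) could then be done with any general-purpose optimizer, e.g. scipy.optimize.minimize applied to this negative log-likelihood.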

  10. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    Science.gov (United States)

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task (a celestial-cue-based visual compass and an optic-flow-based visual odometer), but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings, suggesting a more basic function for central complex connectivity, from which path integration may have evolved.

  11. A Study of Interactions between Mixing and Chemical Reaction Using the Rate-Controlled Constrained-Equilibrium Method

    Science.gov (United States)

    Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed

    2016-10-01

    The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for constraint potentials, density and temperature, which allows mixing to be accounted for alongside chemical reaction without operator splitting. RCCE is a dimension reduction method for chemical kinetics based on the laws of thermodynamics. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by the RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. The RCCE is applied to a spatially homogeneous constant pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprising 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed the detailed kinetics model (DKM). The RCCE shows accurate prediction of combustion in PaSR with different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
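
    The constrained-equilibrium idea of maximizing entropy subject to the instantaneous constraint values can be sketched with a toy composition problem (the function names and the simplified mixing entropy are assumptions for illustration; a real RCCE code maximizes the full thermodynamic entropy and works with the constraint potentials):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def constrained_equilibrium(A, c, n_species):
        """Toy RCCE-style state: maximize the mixing entropy -sum x ln x subject
        to linear constraints A @ x = c (e.g. elemental balances and other RCCE
        constraints) and sum(x) = 1. A production implementation would use the
        full thermodynamic entropy (or minimize Gibbs energy) instead."""
        def neg_entropy(x):
            x = np.clip(x, 1e-12, None)
            return np.sum(x * np.log(x))
        cons = [{"type": "eq", "fun": lambda x, A=A, c=c: A @ x - c},
                {"type": "eq", "fun": lambda x: np.sum(x) - 1.0}]
        x0 = np.full(n_species, 1.0 / n_species)
        res = minimize(neg_entropy, x0, constraints=cons,
                       bounds=[(0.0, 1.0)] * n_species, method="SLSQP")
        return res.x
    ```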

  12. Integrative analysis of many weighted co-expression networks using tensor computation.

    Directory of Open Access Journals (Sweden)

    Wenyuan Li

    2011-06-01

    The rapid accumulation of biological networks poses new challenges and calls for powerful integrative analysis tools. Most existing methods capable of simultaneously analyzing a large number of networks were primarily designed for unweighted networks, and cannot easily be extended to weighted networks. However, it is known that transforming weighted into unweighted networks by dichotomizing the edges of weighted networks with a threshold generally leads to information loss. We have developed a novel, tensor-based computational framework for mining recurrent heavy subgraphs in a large set of massive weighted networks. Specifically, we formulate the recurrent heavy subgraph identification problem as a heavy 3D subtensor discovery problem with sparse constraints. We describe an effective approach to solving this problem by designing a multi-stage, convex relaxation protocol, and a non-uniform edge sampling technique. We applied our method to 130 co-expression networks, and identified 11,394 recurrent heavy subgraphs, grouped into 2,810 families. We demonstrated that the identified subgraphs represent meaningful biological modules by validating against a large set of compiled biological knowledge bases. We also showed that the likelihood for a heavy subgraph to be meaningful increases significantly with its recurrence in multiple networks, highlighting the importance of the integrative approach to biological network analysis. Moreover, our approach based on weighted graphs detects many patterns that would be overlooked using unweighted graphs. In addition, we identified a large number of modules that occur predominately under specific phenotypes. This analysis resulted in a genome-wide mapping of gene network modules onto the phenome. Finally, by comparing module activities across many datasets, we discovered high-order dynamic cooperativeness in protein complex networks and transcriptional regulatory networks.
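
    A minimal sketch of the quantity such a search tries to make large, assuming a dense weight tensor W of shape (genes, genes, networks) and hypothetical index sets (the paper's actual formulation adds sparsity constraints and a convex relaxation):

    ```python
    import numpy as np

    def heaviness(W, genes, networks):
        """Average edge weight of the subtensor induced by a gene subset across
        a subset of networks; recurrent heavy subgraph mining looks for index
        sets that make this score large."""
        sub = W[np.ix_(genes, genes, networks)]
        k = len(genes)
        mask = ~np.eye(k, dtype=bool)      # exclude self-edges from the average
        return float(sub[mask].mean())
    ```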

  13. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  14. Computation of conditional Wiener integrals by the composite approximation formulae with weight

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.; Sidorova, O.V.; Zhidkov, E.P.

    1988-01-01

    New approximation formulae with weight for the functional integrals with conditional Wiener measure are derived. The formulae are exact on a class of polynomial functionals of a given degree. The convergence of the approximations to the exact value of the integral is proved, and an estimate of the remainder is obtained. The results are illustrated with numerical examples. The advantages of the formulae over the lattice Monte Carlo method are demonstrated in the computation of some quantities in Euclidean quantum mechanics

  15. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well known capacity constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set of customers. In the VRPTW customers must be serviced within a given time period - a so called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization... of Jørnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time and capacity constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed...

  16. Comparative analysis of methods for integrating various environmental impacts as a single index in life cycle assessment

    International Nuclear Information System (INIS)

    Ji, Changyoon; Hong, Taehoon

    2016-01-01

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods and, finally, help them select the method most appropriate for the goal at hand.

  17. Comparative analysis of methods for integrating various environmental impacts as a single index in life cycle assessment

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    2016-02-15

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods and, finally, help them select the method most appropriate for the goal at hand.
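
    A minimal numeric sketch of the normalization-plus-weighting step being compared, with made-up impact scores, reference values, and panel weights (the actual categories, references, and weights used in the paper differ):

    ```python
    import numpy as np

    def single_index(impacts, reference, weights):
        """Combine characterized impact scores into one index by external
        normalization (divide by reference values, e.g. per-capita annual
        impacts) followed by a weighted sum. Internal normalization would use
        the alternatives themselves (e.g. their maximum) as the reference."""
        normalized = impacts / reference
        return float(np.dot(weights, normalized))

    # hypothetical scores for one building: [GWP, acidification, eutrophication]
    impacts   = np.array([3.2e5, 1.1e3, 2.4e2])
    reference = np.array([8.0e3, 5.5e1, 1.9e1])   # normalization references
    weights   = np.array([0.5, 0.3, 0.2])         # panel-derived weights
    print(single_index(impacts, reference, weights))
    ```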

  18. An Integrative Review of Multicomponent Weight Management Interventions for Adults with Intellectual Disabilities

    Science.gov (United States)

    Doherty, Alison J.; Jones, Stephanie P.; Chauhan, Umesh; Gibson, Josephine M. E.

    2018-01-01

    Background: Obesity is more prevalent in people with intellectual disabilities and increases the risk of developing serious medical conditions. UK guidance recommends multicomponent weight management interventions (MCIs), tailored for different population groups. Methods: An integrative review utilizing systematic review methodology was conducted…

  19. Probability-Weighted LMP and RCP for Day-Ahead Energy Markets using Stochastic Security-Constrained Unit Commitment: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ela, E.; O'Malley, M.

    2012-06-01

    Variable renewable generation resources are increasing their penetration on electric power grids. These resources have weather-driven fuel sources that vary on different time scales and are difficult to predict in advance. These characteristics create challenges for system operators managing the load balance on different timescales. Research is looking into new operational techniques and strategies that show great promise for facilitating greater integration of variable resources. Stochastic Security-Constrained Unit Commitment models are one strategy that has been discussed in the literature and shows great benefit. However, they are rarely used outside the research community due to their computational limits and the difficulty of integrating them with electricity markets. This paper discusses how such models can be integrated into day-ahead energy markets and, especially, what pricing schemes should be used to ensure an efficient and fair market.
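
    A probability-weighted price of the kind discussed here reduces, per bus and hour, to an expectation over scenarios; a toy sketch with made-up scenario probabilities and prices:

    ```python
    import numpy as np

    # Hypothetical scenarios from a stochastic SCUC solve, for one bus and hour.
    prob = np.array([0.5, 0.3, 0.2])      # scenario probabilities (sum to 1)
    lmp  = np.array([28.0, 35.0, 61.0])   # scenario locational marginal prices, $/MWh

    expected_lmp = float(prob @ lmp)       # probability-weighted LMP
    print(expected_lmp)                    # 36.7 $/MWh
    ```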

  20. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    Science.gov (United States)

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
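
    The abstract does not spell out how the preemptive-constraining processor works, so the bound-clipping rule, names, and covariance adjustment below are assumptions made purely for illustration of constraining an EKF state estimate:

    ```python
    import numpy as np

    def constrain_estimate(x, P, lower, upper):
        """Very simplified stand-in for a preemptive-constraining step: clip the
        EKF state estimate to physical bounds and deflate the variance of any
        clipped component so the filter does not keep pushing it out of bounds."""
        x_c = np.clip(x, lower, upper)
        P_c = P.copy()
        for i in np.where(x_c != x)[0]:
            P_c[i, :] = 0.0
            P_c[:, i] = 0.0
            P_c[i, i] = 1e-6          # small residual uncertainty for the clipped state
        return x_c, P_c
    ```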

  1. Weighted Anisotropic Integral Representations of Holomorphic Functions in the Unit Ball of

    Directory of Open Access Journals (Sweden)

    Arman Karapetyan

    2010-01-01

    We obtain weighted integral representations for spaces of functions holomorphic in the unit ball and belonging to area-integrable weighted $L^p$-classes with “anisotropic” weight functions of the type $\prod_{k=1}^{n}\left(1-|w_1|^2-|w_2|^2-\cdots-|w_k|^2\right)^{\alpha_k}$, $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_n)\in\mathbb{R}^n$. The corresponding kernels of these representations are estimated, written in an integral form, and even written out in an explicit form (for $n=2$).

  2. On Tree-Constrained Matchings and Generalizations

    NARCIS (Netherlands)

    S. Canzar (Stefan); K. Elbassioni; G.W. Klau (Gunnar); J. Mestre

    2011-01-01

    We consider the following \\textsc{Tree-Constrained Bipartite Matching} problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\\times V_2 \\mapsto \\mathbb{R}_+$, find a maximum weight matching $\\mathcal{M}$ between nodes of the two trees, such that

  3. A Concept Lattice for Semantic Integration of Geo-Ontologies Based on Weight of Inclusion Degree Importance and Information Entropy

    Directory of Open Access Journals (Sweden)

    Jia Xiao

    2016-11-01

    Constructing a merged concept lattice with formal concept analysis (FCA) is an important research direction in the field of integrating multi-source geo-ontologies. Extracting essential geographical properties and reducing the concept lattice are two key points of previous research. A formal integration method is proposed to address the challenges in these two areas. We first extract essential properties from multi-source geo-ontologies and use FCA to build a merged formal context. Second, the combined importance weight of each single attribute of the formal context is calculated by introducing the inclusion degree importance from rough set theory and information entropy; then a weighted formal context is built from the merged formal context. Third, a combined weighted concept lattice is established from the weighted formal context with FCA, and the importance weight of every concept is defined as the sum of the weights of the attributes belonging to the concept's intent. Finally, the semantic granularity of a concept is defined by its importance weight; we then gradually reduce the weighted concept lattice by setting a diminishing threshold of semantic granularity. Additionally, all of the reduced lattices are organized into a regular hierarchy structure based on the threshold of semantic granularity. A workflow is designed to demonstrate this procedure. A case study is conducted to show the feasibility and validity of this method and of the procedure to integrate multi-source geo-ontologies.

  4. Multi-example feature-constrained back-projection method for image super-resolution

    Institute of Scientific and Technical Information of China (English)

    Junlei Zhang; Dianguang Gai; Xin Zhang; Xuemei Li

    2017-01-01

    Example-based super-resolution algorithms, which predict unknown high-resolution image information using a relationship model learnt from known high- and low-resolution image pairs, have attracted considerable interest in the field of image processing. In this paper, we propose a multi-example feature-constrained back-projection method for image super-resolution. Firstly, we take advantage of a feature-constrained polynomial interpolation method to enlarge the low-resolution image. Next, we consider low-frequency images of different resolutions to provide an example pair. Then, we use adaptive kNN search to find similar patches in the low-resolution image for every image patch in the high-resolution low-frequency image, leading to a regression model between similar patches being learnt. The learnt model is applied to the low-resolution high-frequency image to produce high-resolution high-frequency information. An iterative back-projection algorithm is used as the final step to determine the final high-resolution image. Experimental results demonstrate that our method improves the visual quality of the high-resolution image.
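
    The final back-projection stage can be sketched as follows (assuming a plain block-average downsampling operator and a nearest-neighbour error up-projection; the paper's feature-constrained interpolation and learnt regression stages are not reproduced here):

    ```python
    import numpy as np

    def downsample(img, s):
        """Simple block-average downsampling by an integer factor s."""
        h, w = img.shape
        return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

    def back_project(lr, hr0, s, iters=20, step=1.0):
        """Iterative back-projection: repeatedly compute the residual between the
        observed low-resolution image and the downsampled current estimate, and
        project it back onto the high-resolution estimate."""
        hr = hr0.copy()                                     # hr0: initial HR estimate
        for _ in range(iters):
            residual = lr - downsample(hr, s)               # error in the LR domain
            hr += step * np.kron(residual, np.ones((s, s))) # spread the error back
        return hr
    ```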

  5. Basis set approach in the constrained interpolation profile method

    International Nuclear Information System (INIS)

    Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

    2003-07-01

    We propose a simple polynomial basis set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method, and the profile is chosen so that the subgrid scale solution approaches the real solution by means of the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are shown to give solutions a few orders of magnitude more accurate than conventional methods for lower-lying eigenstates. (author)

  6. Sufficient Descent Conjugate Gradient Methods for Solving Convex Constrained Nonlinear Monotone Equations

    Directory of Open Access Journals (Sweden)

    San-Yang Liu

    2014-01-01

    Two unified frameworks of some sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proven under some mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
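
    For readers unfamiliar with the Solodov-Svaiter projection step these methods build on, a sketch of that single step is given below (the conjugate gradient descent direction and line search that produce the trial point z are omitted, and the box feasible set is an assumption for illustration):

    ```python
    import numpy as np

    def project_box(x, lower, upper):
        """Projection onto the convex feasible set (here a simple box)."""
        return np.clip(x, lower, upper)

    def hyperplane_projection_step(x, z, F, lower, upper):
        """One Solodov-Svaiter style step: z is a trial point with F(z) != 0.
        The current iterate x is first projected onto the hyperplane
        {y : F(z)^T (y - z) = 0}, which separates x from the solution set of
        the monotone equation F(y) = 0, and then onto the feasible set."""
        Fz = F(z)
        beta = Fz @ (x - z) / (Fz @ Fz)
        return project_box(x - beta * Fz, lower, upper)
    ```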

  7. GTX Reference Vehicle Structural Verification Methods and Weight Summary

    Science.gov (United States)

    Hunter, J. E.; McCurdy, D. R.; Dunn, P. W.

    2002-01-01

    The design of a single-stage-to-orbit air breathing propulsion system requires the simultaneous development of a reference launch vehicle in order to achieve the optimal mission performance. Accordingly, for the GTX study a 300-lb payload reference vehicle was preliminarily sized to a gross liftoff weight (GLOW) of 238,000 lb. A finite element model of the integrated vehicle/propulsion system was subjected to the trajectory environment and subsequently optimized for structural efficiency. This study involved the development of aerodynamic loads mapped to finite element models of the integrated system in order to assess vehicle margins of safety. Commercially available analysis codes were used in the process along with some internally developed spreadsheets and FORTRAN codes specific to the GTX geometry for mapping of thermal and pressure loads. A mass fraction of 0.20 for the integrated system dry weight has been the driver for a vehicle design consisting of state-of-the-art composite materials in order to meet the rigid weight requirements. This paper summarizes the methodology used for preliminary analyses and presents the current status of the weight optimization for the structural components of the integrated system.

  8. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Andrea Trucco

    2015-06-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.

  9. Constrained Balancing of Two Industrial Rotor Systems: Least Squares and Min-Max Approaches

    Directory of Open Access Journals (Sweden)

    Bin Huang

    2009-01-01

    Rotor vibrations caused by rotor mass unbalance distributions are a major source of maintenance problems in high-speed rotating machinery. Minimizing this vibration by balancing under practical constraints is quite important to industry. This paper considers balancing of two large industrial rotor systems by constrained least squares and min-max balancing methods. In current industrial practice, the weighted least squares method has been utilized to minimize rotor vibrations for many years. One of its disadvantages is that it cannot guarantee that the maximum value of vibration is below a specified value. To achieve better balancing performance, the min-max balancing method utilizing Second Order Cone Programming (SOCP) with the maximum correction weight constraint, the maximum residual response constraint, as well as the weight splitting constraint has been utilized for effective balancing. The min-max balancing method can guarantee a maximum residual vibration value below an optimum value and is shown by simulation to significantly outperform the weighted least squares method.
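
    The weighted least-squares part of the comparison can be sketched in a few lines (the influence-coefficient matrix A, measured vibrations r, and plane weights gamma are hypothetical inputs; the min-max variant with correction-weight and residual constraints needs an SOCP solver, as used in the paper):

    ```python
    import numpy as np

    def least_squares_weights(A, r, gamma):
        """Weighted least-squares balancing: find correction weights w that
        minimize || R^(1/2) (r + A w) ||_2, where r are the measured vibrations,
        A is the influence-coefficient matrix, and R = diag(gamma) weights the
        measurement planes."""
        rsqrt = np.sqrt(np.asarray(gamma))
        w, *_ = np.linalg.lstsq(rsqrt[:, None] * A, -rsqrt * r, rcond=None)
        return w
    ```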

  10. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

    Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, and optimal irrigation water allocation solutions from the GFCCFP model are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is a main factor compared with the credibility level for system efficiency. These results can effectively support reasonable irrigation water resources management and agricultural production.

  11. A first-order multigrid method for bound-constrained convex optimization

    Czech Academy of Sciences Publication Activity Database

    Kočvara, Michal; Mohammed, S.

    2016-01-01

    Roč. 31, č. 3 (2016), s. 622-644 ISSN 1055-6788 R&D Projects: GA ČR(CZ) GAP201/12/0671 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : bound-constrained optimization * multigrid methods * linear complementarity problems Subject RIV: BA - General Mathematics Impact factor: 1.023, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0460326.pdf

  12. The integration of weighted human gene association networks based on link prediction.

    Science.gov (United States)

    Yang, Jian; Yang, Tinghong; Wu, Duzhi; Lin, Limei; Yang, Fan; Zhao, Jing

    2017-01-31

    Physical and functional interplays between genes or proteins have important biological meaning for cellular functions. Some efforts have been made to construct weighted gene association meta-networks by integrating multiple biological resources, where the weight indicates the confidence of the interaction. However, these existing human gene association networks share only a rather limited number of overlapping interactions, suggesting their incompleteness and noise. Here we proposed a workflow to construct a weighted human gene association network using information from six existing networks, including two weighted specific PPI networks and four gene association meta-networks. We applied a link prediction algorithm to predict possible missing links of the networks, a cross-validation approach to refine each network, and finally integrated the refined networks to obtain the final integrated network. The common information among the refined networks increases notably, suggesting their higher reliability. Our final integrated network owns many more links than most of the original networks, while its links still retain high functional relevance. Used as the background network in a case study of disease gene prediction, the final integrated network presents good performance, implying its reliability and application significance. Our workflow could be insightful for integrating and refining existing gene association data.

  13. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number...

  14. Methods for enhancing numerical integration

    International Nuclear Information System (INIS)

    Doncker, Elise de

    2003-01-01

    We give a survey of common strategies for numerical integration (adaptive, Monte-Carlo, Quasi-Monte Carlo), and attempt to delineate their realm of applicability. The inherent accuracy and error bounds for basic integration methods are given via such measures as the degree of precision of cubature rules, the index of a family of lattice rules, and the discrepancy of uniformly distributed point sets. Strategies incorporating these basic methods often use paradigms to reduce the error by, e.g., increasing the number of points in the domain or decreasing the mesh size, locally or uniformly. For these processes the order of convergence of the strategy is determined by the asymptotic behavior of the error, and may be too slow in practice for the type of problem at hand. For certain problem classes we may be able to improve the effectiveness of the method or strategy by such techniques as transformations, absorbing a difficult part of the integrand into a weight function, suitable partitioning of the domain, transformations and extrapolation or convergence acceleration. Situations warranting the use of these techniques (possibly in an 'automated' way) are described and illustrated by sample applications
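
    One of the enhancement techniques named above, a transformation that tames a difficult (singular) part of the integrand, can be illustrated with a small experiment (the integrand and the approximate reference value are chosen purely for illustration):

    ```python
    import numpy as np

    def trap(y, x):
        """Composite trapezoidal rule."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    # I = integral_0^1 cos(x)/sqrt(x) dx; substituting x = t**2 gives
    # 2 * integral_0^1 cos(t**2) dt, which is smooth, so a plain equal-weight
    # rule handles it far better than the original singular integrand.
    exact = 1.8090484758        # approximate reference value of I

    t = np.linspace(0.0, 1.0, 101)
    transformed = trap(2.0 * np.cos(t**2), t)

    x = np.linspace(1e-8, 1.0, 101)      # naive rule struggles near the singularity
    naive = trap(np.cos(x) / np.sqrt(x), x)

    print(abs(transformed - exact), abs(naive - exact))   # transformed error is far smaller
    ```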

  15. The Mastery Matrix for Integration Praxis: The development of a rubric for integration practice in addressing weight-related public health problems.

    Science.gov (United States)

    Berge, Jerica M; Adamek, Margaret; Caspi, Caitlin; Grannon, Katherine Y; Loth, Katie A; Trofholz, Amanda; Nanney, Marilyn S

    2018-06-01

    In response to the limitations of siloed weight-related intervention approaches, scholars have called for greater integration that is intentional, strategic, and thoughtful between researchers, health care clinicians, community members, and policy makers as a way to more effectively address weight and weight-related (e.g., obesity, diabetes, cardiovascular disease, cancer) public health problems. The Mastery Matrix for Integration Praxis was developed by the Healthy Eating and Activity across the Lifespan (HEAL) team in 2017 to advance the science and praxis of integration across the domains of research, clinical practice, community, and policy to address weight-related public health problems. Integrator functions were identified and developmental stages were created to generate a rubric for measuring mastery of integration. Creating a means to systematically define and evaluate integration praxis and expertise will allow for more individuals and teams to master integration in order to work towards promoting a culture of health.

  16. Simulation electromagnetic scattering on bodies through integral equation and neural networks methods

    Science.gov (United States)

    Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.

    2018-05-01

    The paper deals with electromagnetic scattering by a perfectly conducting diffractive body of complex shape. The scattering characteristics of the body are calculated with the integral equation method. A Fredholm equation of the second kind is used to compute the electric current density. In solving the integral equation by the method of moments, the kernel singularity is treated properly, and piecewise constant functions are chosen as basis functions. Within the Kirchhoff integral approach, the scattered electromagnetic field can then be obtained from the computed electric currents. The observation angles cover the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network. All neurons use a log-sigmoid activation function with weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the optimized dimensions of the diffractive body. It also outlines the basic steps of the calculation technique for diffractive bodies, based on the combination of the integral equation and neural network methods.

  17. Application of Numerical Integration and Data Fusion in Unit Vector Method

    Science.gov (United States)

    Zhang, J.

    2012-01-01

    The Unit Vector Method (UVM) is a family of orbit determination methods designed at Purple Mountain Observatory (PMO) that has been applied extensively. It obtains conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it lends itself to weighting different kinds of data. High-precision data can therefore play a major role in orbit determination, and the accuracy of orbit determination improves noticeably. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement and unified the two dynamically, further improving precision and efficiency. In this thesis, further research work has been done based on the UVM. Firstly, as observation methods and techniques improve, the types and precision of observational data improve substantially, and the precision of orbit determination is expected to improve accordingly. Analytical perturbation models cannot meet this requirement, so numerical integration of the perturbations has been introduced into the UVM. The accuracy of the dynamical model then matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly; the accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the shortcomings of the weighting strategy of the original UVM have been clarified. These problems are solved in the present method: the calculation of the approximate state transition matrix is simplified and the weighting strategy is improved for data of different dimension and different precision. Orbit determination results with simulated and real data show that the work of this thesis is effective: (1) after numerical integration is introduced into the UVM, the accuracy of orbit determination improves markedly, and the method suits the high-accuracy data of

  18. Critical Analysis of Methods for Integrating Economic and Environmental Indicators

    NARCIS (Netherlands)

    Huguet Ferran, Pau; Heijungs, Reinout; Vogtländer, Joost G.

    2018-01-01

    The application of environmental strategies requires scoring and evaluation methods that provide an integrated vision of the economic and environmental performance of systems. The vector optimisation, ratio and weighted addition of indicators are the three most prevalent techniques for addressing

  19. Quantization of soluble classical constrained systems

    International Nuclear Information System (INIS)

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-01-01

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way

  20. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  1. Integral type operators from normal weighted Bloch spaces to QT,S spaces

    Directory of Open Access Journals (Sweden)

    Yongyi GU

    2016-08-01

    Operator theory is an important part of the theory of analytic function spaces, and studying operators together with the function spaces they act on is an effective way to investigate both. Assume that φ is an analytic self-map of the unit disk Δ and that the normal weighted Bloch space μ-B is a Banach space on Δ. Defining the composition operator Cφ by Cφ(f) = f∘φ for all f ∈ μ-B, the integral type operators JhCφ and CφJh are obtained by combining an integral operator Jh with the composition operator. The boundedness and compactness of the integral type operator JhCφ acting from normal weighted Bloch spaces to QT,S spaces are discussed, as well as the boundedness of the integral type operator CφJh acting from normal weighted Bloch spaces to QT,S spaces. The related sufficient and necessary conditions are given.

  2. Neural substrates of reliability-weighted visual-tactile multisensory integration

    Directory of Open Access Journals (Sweden)

    Michael S Beauchamp

    2010-06-01

    As sensory systems deteriorate in aging or disease, the brain must relearn the appropriate weights to assign each modality during multisensory integration. Using blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) of human subjects, we tested a model for the neural mechanisms of sensory weighting, termed “weighted connections”. This model holds that the connection weights between early and late areas vary depending on the reliability of the modality, independent of the level of early sensory cortex activity. When subjects detected viewed and felt touches to the hand, a network of brain areas was active, including visual areas in lateral occipital cortex, somatosensory areas in inferior parietal lobe, and multisensory areas in the intraparietal sulcus (IPS). In agreement with the weighted connection model, the connection weight measured with structural equation modeling between somatosensory cortex and IPS increased for somatosensory-reliable stimuli, and the connection weight between visual cortex and IPS increased for visual-reliable stimuli. This double dissociation of connection strengths was similar to the pattern of behavioral responses during incongruent multisensory stimulation, suggesting that weighted connections may be a neural mechanism for behavioral reliability weighting.

  3. The evaluation of interblock mobility using a modified midpoint weighting scheme

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Y

    1981-01-01

    A modified midpoint weighting scheme is a technique which can be used for increasing the accuracy and stability of finite difference numerical simulations. Generally, if midpoint weighting is used to evaluate the transmissibility at an interface between adjacent blocks in an oil reservoir, explicit methods may not produce the correct solution and implicit methods may lead to an oscillatory behavior. Reasons for this behavior have been investigated and it has been found that these problems occur because of numerical limitations during accumulation of the displacing fluid within the upstream block. A proposed modified version of midpoint weighting appears to eliminate this problem, and several linear displacement test runs have indicated that the local truncation errors are comparable to those of the two-point upstream scheme, the use of which is constrained due to its asymmetric character. The results were also compared to the single-point upstream weighting method, and it was found that the modified midpoint weighting scheme allowed the use of a coarser grid while maintaining similar accuracy. An additional advantage of this new technique is that it can also be used in an implicit formulation. 8 refs., 11 figs.
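
    A bare-bones sketch of the weighting choices being compared is given below; the modified rule's switching logic is only paraphrased in a comment, since the abstract does not spell it out:

    ```python
    def interblock_mobility(m_up, m_down, scheme="midpoint"):
        """Mobility used in the interface transmissibility between an upstream
        and a downstream grid block. Single-point upstream takes the upstream
        value; midpoint weighting averages the two. A 'modified' midpoint
        scheme would limit or switch the average while the displacing fluid is
        still accumulating in the upstream block (details are in the paper)."""
        if scheme == "upstream":
            return m_up
        if scheme == "midpoint":
            return 0.5 * (m_up + m_down)
        raise ValueError(scheme)
    ```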

  4. Application of pattern search method to power system security constrained economic dispatch with non-smooth cost function

    International Nuclear Information System (INIS)

    Al-Othman, A.K.; El-Naggar, K.M.

    2008-01-01

    Direct search (DS) methods are evolutionary algorithms used to solve optimization problems. DS methods do not require any information about the gradient of the objective function at hand while searching for an optimum solution. One such method is the Pattern Search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve a security constrained power system economic dispatch problem (SCED) with a non-smooth cost function. Operation of power systems demands a high degree of security to keep the system operating satisfactorily when subjected to disturbances, while at the same time attention must be paid to the economic aspects. A pattern recognition technique is used first to assess dynamic security. Linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. The pattern search method is then applied to solve the constrained optimization formulation. In particular, the method is tested using three different test systems. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and proves that pattern search (PS) is very applicable for solving the security constrained power system economic dispatch problem (SCED). In addition, valve-point effect loading and total system losses are considered to further investigate the potential of the PS technique. Based on the results, it can be concluded that the PS has demonstrated ability in handling the highly nonlinear, discontinuous, non-smooth cost function of the SCED. (author)
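
    A compass-style pattern search of the general kind referred to above can be sketched briefly (this is a generic textbook variant, not the authors' constrained implementation; security constraints could be handled by returning a large penalty for infeasible points):

    ```python
    import numpy as np

    def pattern_search(cost, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
        """Basic compass search: poll the 2n axis directions around the
        incumbent, move on improvement, otherwise shrink the mesh. No gradient
        of the (possibly non-smooth) cost function is needed."""
        x = np.asarray(x0, dtype=float)
        fx = cost(x)
        for _ in range(max_iter):
            improved = False
            for i in range(x.size):
                for sign in (+1.0, -1.0):
                    trial = x.copy()
                    trial[i] += sign * step
                    ft = cost(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
            if not improved:
                step *= shrink
                if step < tol:
                    break
        return x, fx
    ```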

  5. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

    Science.gov (United States)

    Huang, Laura X.; Isaac, George A.; Sheng, Grant

    2014-01-01

    This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid-spacing) were selected as background gridded data for generating the integrated nowcasts. The seven forecast variables (temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate) are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models across 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.

  6. Constrained Optimization Based on Hybrid Evolutionary Algorithm and Adaptive Constraint-Handling Technique

    DEFF Research Database (Denmark)

    Wang, Yong; Cai, Zixing; Zhou, Yuren

    2009-01-01

    A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two...... mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on current population state. Experiments on 13 benchmark test functions...... and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...

  7. The equivalence of multi-criteria methods for radiotherapy plan optimization

    International Nuclear Information System (INIS)

    Breedveld, Sebastiaan; Storchi, Pascal R M; Heijmen, Ben J M

    2009-01-01

    Several methods can be used to achieve multi-criteria optimization of radiation therapy treatment planning, all of which strive for Pareto-optimality. The property of the solution being Pareto optimal is desired because it guarantees that no criterion can be improved without deteriorating another criterion. The most widely used methods are the weighted-sum method, in which the different treatment objectives are weighted, and constrained optimization methods, in which treatment goals are set and the algorithm has to find the best plan fulfilling these goals. The constrained method used in this paper, the 2pεc (2-phase ε-constraint) method, is based on the ε-constraint method, which generates Pareto-optimal solutions. The two approaches are uniquely related to each other. In this paper, we show that it is possible to switch from the constrained method to the weighted-sum method by using the Lagrange multipliers from the constrained optimization problem, and vice versa by setting the appropriate constraints. In general, the theory presented in this paper can be useful in cases where a new situation is slightly different from the original situation, e.g. in online treatment planning, with deformations of the volumes of interest, or in automated treatment planning, where changes to the automated plan have to be made. An example of the latter is given where the planner is not satisfied with the result from the constrained method and wishes to decrease the dose in a structure. By using the Lagrange multipliers, a weighted-sum optimization problem is constructed, which generates a Pareto-optimal solution in the neighbourhood of the original plan but fulfills the new treatment objectives.
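
    The relation between the two formulations can be written compactly (notation assumed here, not copied from the paper: f_1 is the objective kept in the ε-constraint problem, f_i ≤ ε_i are the constrained criteria, and λ_i* are their Lagrange multipliers at the ε-constraint optimum):

    ```latex
    % Under the usual convexity and regularity assumptions, the epsilon-constraint
    % problem and the weighted-sum problem built from its multipliers share a solution:
    \min_x \; f_1(x) \quad \text{s.t.} \quad f_i(x) \le \varepsilon_i, \; i = 2,\dots,m
    \qquad\Longleftrightarrow\qquad
    \min_x \; f_1(x) + \sum_{i=2}^{m} \lambda_i^{*}\, f_i(x)
    ```

    Conversely, the criterion values achieved by a weighted-sum plan supply the ε_i that reproduce it as a constrained solve, which is the switch described in the abstract.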

  8. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  9. National Options for a Sustainable Nuclear Energy System: MCDM Evaluation Using an Improved Integrated Weighting Approach

    Directory of Open Access Journals (Sweden)

    Ruxing Gao

    2017-12-01

    While the prospects look bright for nuclear energy development in China, no consensus about an optimum transitional path towards sustainability of the nuclear fuel cycle has been achieved. Herein, we present a preliminary study of decision making for China's future nuclear energy systems, combined with a dynamic analysis model. In terms of sustainability assessment based on environmental, economic, and social considerations, we compared and ranked the four candidate nuclear fuel cycle options using an integrated evaluation analysis based on the Multi-Criteria Decision Making (MCDM) method. An improved integrated weighting method was applied for the first time in a nuclear fuel cycle evaluation study. This method synthesizes diverse subjective/objective weighting methods to evaluate conflicting criteria among competing decision makers at different levels of expertise and experience. The results suggest that the fuel cycle option of direct recycling of spent fuel through fast reactors is the most competitive candidate, while the fuel cycle option of direct disposal of all spent fuel without recycling is the least attractive for China from a sustainability perspective. In summary, this study provides a well-informed decision-making tool to support the development of national nuclear energy strategies.

  10. Using mixed methods to develop and evaluate an online weight management intervention.

    Science.gov (United States)

    Bradbury, Katherine; Dennison, Laura; Little, Paul; Yardley, Lucy

    2015-02-01

    This article illustrates the use of mixed methods in the development and evaluation of the Positive Online Weight Reduction (POWeR) programme, an e-health intervention designed to support sustainable weight loss. The studies outlined also explore how human support might enhance intervention usage and weight loss. Mixed methods were used to develop and evaluate POWeR. In the development phase, we drew on both quantitative and qualitative findings to plan and gain feedback on the intervention. Next, a feasibility trial, with nested qualitative study, explored what level of human support might lead to the most sustainable weight loss. Finally, a large community-based trial of POWeR, with nested qualitative study, explored whether the addition of brief telephone coaching enhances usage. Findings suggest that POWeR is acceptable and potentially effective. Providing human support enhanced usage in our trials, but was not unproblematic. Interestingly, there were some indications that more basic (brief) human support may produce more sustainable weight loss outcomes than more regular support. Qualitative interviews suggested that more regular support might foster reliance, meaning patients cannot sustain their weight losses when support ends. Qualitative findings in the community trial also suggested explanations for why many people may not take up the opportunity for human support. Integrating findings from both our qualitative and quantitative studies provided far richer insights than would have been gained using only a single method of inquiry. Further research should investigate the optimum delivery of human support needed to maximize sustainable weight loss in online interventions. Statement of contribution What is already known on this subject? There is evidence that human support may increase the effectiveness of e-health interventions. It is unclear what level of human support might be optimal or how human support improves effectiveness. Triangulation of

  11. Comparison of preconditioned Krylov subspace iteration methods for PDE-constrained optimization problems

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Farouq, S.; Neytcheva, M.

    2017-01-01

    Roč. 74, č. 1 (2017), s. 19-37 ISSN 1017-1398 Institutional support: RVO:68145535 Keywords : PDE-constrained optimization problems * finite elements * iterative solution methods * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.241, year: 2016 https://link.springer.com/article/10.1007%2Fs11075-016-0136-5

  12. A method of estimating log weights.

    Science.gov (United States)

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...

  13. Evaluation of three paediatric weight estimation methods in Singapore.

    Science.gov (United States)

    Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong

    2013-04-01

    Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: Broselow-Luten (BL) tape, Advanced Paediatric Life Support (APLS) (estimated weight (kg) = 2 (age + 4)) and Luscombe (estimated weight (kg) = 3 (age) + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while patient's round-down age (in years) was used to derive estimated weights using APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
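
    The two age-based formulae quoted above are simple enough to state directly in code. The sketch below implements them together with a signed percentage difference of the kind used in the Bland-Altman comparison; the sign convention and the example numbers are illustrative assumptions.

      def apls_weight(age_years: int) -> float:
          """APLS estimate: weight (kg) = 2 * (age + 4)."""
          return 2 * (age_years + 4)

      def luscombe_weight(age_years: int) -> float:
          """Luscombe estimate: weight (kg) = 3 * age + 7."""
          return 3 * age_years + 7

      def percentage_difference(true_kg: float, estimated_kg: float) -> float:
          # Signed percentage difference; a positive value means the formula
          # underestimates the true weight (one plausible sign convention).
          return 100.0 * (true_kg - estimated_kg) / true_kg

      # Example for a hypothetical 5-year-old weighing 21 kg:
      print(apls_weight(5), luscombe_weight(5))
      print(percentage_difference(21, apls_weight(5)))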

  14. Comparison of preconditioned Krylov subspace iteration methods for PDE-constrained optimization problems

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Farouq, S.; Neytcheva, M.

    2017-01-01

    Roč. 74, č. 1 (2017), s. 19-37 ISSN 1017-1398 Institutional support: RVO:68145535 Keywords : PDE-constrained optimization problems * finite elements * iterative solution methods * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.241, year: 2016 https://link.springer.com/article/10.1007%2Fs11075-016-0136-5

  15. Integrated Multidisciplinary Constrained Optimization of Offshore Support Structures

    International Nuclear Information System (INIS)

    Haghi, Rad; Molenaar, David P; Ashuri, Turaj; Van der Valk, Paul L C

    2014-01-01

    In the current offshore wind turbine support structure design method, the tower and foundation, which form the support structure, are designed separately by the turbine and foundation designers. This method yields a suboptimal design and results in a heavy, overdesigned and expensive support structure. This paper presents an integrated multidisciplinary approach to design the tower and foundation simultaneously. Aerodynamics, hydrodynamics, structure and soil mechanics are the modeled disciplines to capture the full dynamic behavior of the foundation and tower under different environmental conditions. The objective function to be minimized is the mass of the support structure. The model includes various design constraints: local and global buckling, modal frequencies, and fatigue damage along different stations of the structure. To show the usefulness of the method, an existing SWT-3.6-107 offshore wind turbine, whose tower and foundation were designed separately, is used as a case study. The result of the integrated multidisciplinary design optimization shows a 12.1% reduction in the mass of the support structure, while satisfying all the design constraints.

  16. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
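
    For reference, the standard serial recursion that the paper's row-parallel algorithms decompose can be written in a few lines. The sketch below computes an integral image and a rectangle sum with four lookups; it shows only the baseline recursion, not the proposed hardware decomposition.

      import numpy as np

      def integral_image(img: np.ndarray) -> np.ndarray:
          """Integral image via the standard serial recursion:
          s(x, y)  = s(x, y-1)  + i(x, y)   (running column sum)
          ii(x, y) = ii(x-1, y) + s(x, y)
          """
          h, w = img.shape
          s = np.zeros((h, w), dtype=np.int64)
          ii = np.zeros((h, w), dtype=np.int64)
          for y in range(h):
              for x in range(w):
                  s[y, x] = img[y, x] + (s[y - 1, x] if y > 0 else 0)
                  ii[y, x] = s[y, x] + (ii[y, x - 1] if x > 0 else 0)
          return ii

      def box_sum(ii, top, left, bottom, right):
          """Sum over an inclusive rectangle using four integral-image lookups."""
          total = ii[bottom, right]
          if top > 0:
              total -= ii[top - 1, right]
          if left > 0:
              total -= ii[bottom, left - 1]
          if top > 0 and left > 0:
              total += ii[top - 1, left - 1]
          return total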

  17. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  18. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    Science.gov (United States)

    Noppeney, Uta

    2018-01-01

    Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
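
    The maximum likelihood estimation rule referred to in the abstract has a simple closed form for two Gaussian cues: each signal is weighted in proportion to its reliability (inverse variance). A minimal sketch, with illustrative numbers:

      import numpy as np

      def mle_fusion(mu_a, sigma_a, mu_v, sigma_v):
          """Maximum-likelihood (reliability-weighted) fusion of two location estimates.
          Reliability r = 1 / sigma**2; the weights are proportional to reliability."""
          r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
          w_a, w_v = r_a / (r_a + r_v), r_v / (r_a + r_v)
          mu_av = w_a * mu_a + w_v * mu_v
          sigma_av = np.sqrt(1.0 / (r_a + r_v))   # the fused estimate is more reliable than either cue
          return mu_av, sigma_av

      # e.g. a blurry (unreliable) visual cue at +8 deg and a sharper auditory cue at 0 deg
      print(mle_fusion(mu_a=0.0, sigma_a=2.0, mu_v=8.0, sigma_v=6.0))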

  19. A numerical method for solving singular DEs

    Energy Technology Data Exchange (ETDEWEB)

    Mahaver, W.T.

    1996-12-31

    A numerical method is developed for solving singular differential equations using steepest descent based on weighted Sobolev gradients. The method is demonstrated on a variety of first and second order problems, including linear constrained, unconstrained, and partially constrained first order problems, a nonlinear first order problem with irregular singularity, and two second order variational problems.

  20. Practicable group testing method to evaluate weight/weight GMO content in maize grains.

    Science.gov (United States)

    Mano, Junichi; Yanaka, Yuka; Ikezu, Yoko; Onishi, Mari; Futo, Satoshi; Minegishi, Yasutaka; Ninomiya, Kenji; Yotsuyanagi, Yuichi; Spiegelhalter, Frank; Akiyama, Hiroshi; Teshima, Reiko; Hino, Akihiro; Naito, Shigehiro; Koiwa, Tomohiro; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi

    2011-07-13

    Because of the increasing use of maize hybrids with genetically modified (GM) stacked events, the established and commonly used bulk sample methods for PCR quantification of GM maize in non-GM maize are prone to overestimate the GM organism (GMO) content, compared to the actual weight/weight percentage of GM maize in the grain sample. As an alternative method, we designed and assessed a group testing strategy in which the GMO content is statistically evaluated based on qualitative analyses of multiple small pools, consisting of 20 maize kernels each. This approach enables the GMO content evaluation on a weight/weight basis, irrespective of the presence of stacked-event kernels. To enhance the method's user-friendliness in routine application, we devised an easy-to-use PCR-based qualitative analytical method comprising a sample preparation step in which 20 maize kernels are ground in a lysis buffer and a subsequent PCR assay in which the lysate is directly used as a DNA template. This method was validated in a multilaboratory collaborative trial.
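
    The statistical core of such a group testing evaluation can be illustrated with a short calculation: if each pool of 20 kernels tests positive whenever it contains at least one GM kernel, the per-kernel GM proportion can be estimated from the fraction of negative pools. The sketch below shows this standard estimator; it assumes kernels of roughly equal weight and perfect qualitative PCR, which is a simplification of the paper's full procedure.

      def estimate_gmo_proportion(n_pools: int, n_positive: int, kernels_per_pool: int = 20) -> float:
          """Maximum-likelihood estimate of the per-kernel GM proportion from
          qualitative (positive/negative) results on pools of equal size.
          A pool is negative only if all of its kernels are non-GM."""
          frac_negative = (n_pools - n_positive) / n_pools
          if frac_negative == 0.0:
              raise ValueError("all pools positive; the proportion cannot be bounded from these data")
          return 1.0 - frac_negative ** (1.0 / kernels_per_pool)

      # e.g. 3 positive pools out of 24 pools of 20 kernels each
      print(estimate_gmo_proportion(24, 3))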

  1. Dynamic Optimization of Constrained Layer Damping Structure for the Headstock of Machine Tools with Modal Strain Energy Method

    Directory of Open Access Journals (Sweden)

    Yakai Xu

    2017-01-01

    Full Text Available Dynamic stiffness and damping of the headstock, which is a critical component of precision horizontal machining center, are two main factors that influence machining accuracy and surface finish quality. Constrained Layer Damping (CLD) structure is proved to be effective in raising damping capacity for the thin plate and shell structures. In this paper, one kind of high damping material is utilized on the headstock to improve damping capacity. The dynamic characteristic of the hybrid headstock is investigated analytically and experimentally. The results demonstrate that the resonant response amplitudes of the headstock with damping material can decrease significantly compared to original cast structure. To obtain the optimal configuration of damping material, a topology optimization method based on the Evolutionary Structural Optimization (ESO) is implemented. Modal Strain Energy (MSE) method is employed to analyze the damping and to derive the sensitivity of the modal loss factor. The optimization results indicate that the added weight of damping material decreases by 50%; meanwhile the first two orders of modal loss factor decrease by less than 23.5% compared to the original structure.

  2. The discrete null space method for the energy-consistent integration of constrained mechanical systems. Part III: Flexible multibody dynamics

    International Nuclear Information System (INIS)

    Leyendecker, Sigrid; Betsch, Peter; Steinmann, Paul

    2008-01-01

    In the present work, the unified framework for the computational treatment of rigid bodies and nonlinear beams developed by Betsch and Steinmann (Multibody Syst. Dyn. 8, 367-391, 2002) is extended to the realm of nonlinear shells. In particular, a specific constrained formulation of shells is proposed which leads to the semi-discrete equations of motion characterized by a set of differential-algebraic equations (DAEs). The DAEs provide a uniform description for rigid bodies, semi-discrete beams and shells and, consequently, flexible multibody systems. The constraints may be divided into two classes: (i) internal constraints which are intimately connected with the assumption of rigidity of the bodies, and (ii) external constraints related to the presence of joints in a multibody framework. The present approach thus circumvents the use of rotational variables throughout the whole time discretization, facilitating the design of energy-momentum methods for flexible multibody dynamics. After the discretization has been completed a size-reduction of the discrete system is performed by eliminating the constraint forces. Numerical examples dealing with a spatial slider-crank mechanism and with intersecting shells illustrate the performance of the proposed method

  3. Analysis of neutron and x-ray reflectivity data by constrained least-squares methods

    DEFF Research Database (Denmark)

    Pedersen, J.S.; Hamley, I.W.

    1994-01-01

    . The coefficients in the series are determined by constrained nonlinear least-squares methods, in which the smoothest solution that agrees with the data is chosen. In the second approach the profile is expressed as a series of sine and cosine terms. A smoothness constraint is used which reduces the coefficients...

  4. New methods for solving a vertex p-center problem with uncertain demand-weighted distance: A real case study

    Directory of Open Access Journals (Sweden)

    Javad Nematian

    2015-04-01

    Full Text Available Vertex and p-center problems are two well-known types of the center problem. In this paper, a p-center problem with uncertain demand-weighted distance will be introduced in which the demands are considered as fuzzy random variables (FRVs) and the objective of the problem is to minimize the maximum distance between a node and its nearest facility. Then, by introducing new methods, the proposed problem is converted to deterministic integer programming (IP) problems where these methods will be obtained through the implementation of the possibility theory and fuzzy random chance-constrained programming (FRCCP). Finally, the proposed methods are applied for locating bicycle stations in the city of Tabriz in Iran as a real case study. The computational results of our study show that these methods can be implemented for the center problem with uncertain frameworks.

  5. Medical weight loss versus bariatric surgery: does method affect body composition and weight maintenance after 15% reduction in body weight?

    Science.gov (United States)

    Kulovitz, Michelle G; Kolkmeyer, Deborah; Conn, Carole A; Cohen, Deborah A; Ferraro, Robert T

    2014-01-01

    The aim of this study was to investigate body composition changes in fat mass (FM) to lean body mass (LBM) ratios following 15% body weight loss (WL) in both integrated medical treatment and bariatric surgery groups. Obese patients (body mass index [BMI] 46.6 ± 6.5 kg/m(2)) who underwent laparoscopic gastric bypass surgery (BS), were matched with 24 patients undergoing integrated medical and behavioral treatment (MT). The BS and MT groups were evaluated for body weight, BMI, body composition, and waist circumference (WC) at baseline and after 15% WL. Following 15% body WL, there were significant decreases in %FM and increased %LBM (P maintenance of WL at 1 y were found. For both groups, baseline FM was found to be negatively correlated with percentage of weight regained (%WR) at 1 y post-WL (r = -0.457; P = 0.007). Baseline WC and rate of WL to 15% were significant predictors of %WR only in the BS group (r = 0.713; P = 0.020). If followed closely by professionals during the first 15% body WL, patients losing 15% weight by either medical or surgical treatments can attain similar FM:LBM loss ratios and can maintain WL for 1 y. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. About One Approach to Determine the Weights of the State Space Method

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2015-01-01

    Full Text Available The article studies methods of determining weight coefficients, also called coefficients of criteria importance, in multiobjective optimization (MOO). It is assumed that these coefficients indicate the degree of influence of the individual criteria on the final selection (final or summary assessment): the larger the coefficient, the greater the contribution of the corresponding criterion. Within the framework of modern information systems for decision support, a number of methods for determining the relative importance of criteria have been developed for various purposes. Among them we can distinguish the utility method, the method of the weighted power average, the weighted median, the method of matching clustered rankings, the method of paired comparison of importance, etc. However, it should be noted that the different techniques available for calculating weights do not eliminate the main problem of multicriteria optimization, namely the inconsistency of individual criteria. The basis for solving multicriteria problems is a fundamental principle of multi-criteria selection, i.e. the Edgeworth-Pareto principle. Despite the large number of methods to determine the weights, the task remains relevant not only because of the subjectivity of the evaluations, but also because of the mathematical aspects. It is now recognized that, for example, such a popular method as the linear convolution of individual criteria is essentially a heuristic approach, and applying it one may not obtain the best final choice. Karlin's lemma reflects the limits of applicability of the method. The aim of this work is to offer a method of calculating the weights applied to the problem of dynamic system optimization, the quality of which is determined by a criterion of a special type, namely an integral quadratic quality criterion. The main challenge relates to the state-space method, which in the literature is also called the method of analytical design of optimal controllers. Despite the

  7. Weighted mining of massive collections of p-values by convex optimization.

    Science.gov (United States)

    Dobriban, Edgar

    2018-06-01

    Researchers in data-rich disciplines (think of computational genomics and observational cosmology) often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
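
    To make the idea of p-value weighting concrete, the sketch below implements the classical weighted Bonferroni rule, in which hypotheses with larger weights face a more lenient threshold while the family-wise error rate is still controlled. This is a textbook baseline, not the Princessp optimization itself; the weights in the example are hypothetical.

      import numpy as np

      def weighted_bonferroni(pvals, weights, alpha=0.05):
          """Weighted Bonferroni procedure: with non-negative weights averaging to 1,
          reject H_i when p_i <= alpha * w_i / m."""
          pvals = np.asarray(pvals, dtype=float)
          w = np.asarray(weights, dtype=float)
          w = w * len(w) / w.sum()           # normalise so the weights average to 1
          return pvals <= alpha * w / len(pvals)

      pvals = [1e-5, 0.004, 0.03, 0.20, 0.51]
      weights = [2.0, 2.0, 0.5, 0.25, 0.25]  # hypothetical prior importance
      print(weighted_bonferroni(pvals, weights))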

  8. Evaluation of the filtered leapfrog-trapezoidal time integration method

    International Nuclear Information System (INIS)

    Roache, P.J.; Dietrich, D.E.

    1988-01-01

    An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed.
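
    One plausible reading of a leapfrog-trapezoidal step for the one-dimensional constant-coefficient advection equation is a leapfrog predictor followed by an explicit trapezoidal corrector, which damps the leapfrog computational mode. The sketch below implements that generic predictor-corrector with centred differences and periodic boundaries; it is not necessarily Dietrich's exact FLT filter, and the grid, Courant number and start-up step are assumptions.

      import numpy as np

      def rhs(u, c, dx):
          # centred difference for -c * du/dx with periodic boundaries
          return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

      def flt_like_step(u_prev, u_now, c, dx, dt):
          """One leapfrog predictor + explicit trapezoidal corrector step."""
          u_star = u_prev + 2.0 * dt * rhs(u_now, c, dx)                         # leapfrog predictor
          u_next = u_now + 0.5 * dt * (rhs(u_now, c, dx) + rhs(u_star, c, dx))   # trapezoidal corrector
          return u_next

      # advect a Gaussian bump once around a periodic domain
      n, c = 200, 1.0
      x = np.linspace(0.0, 1.0, n, endpoint=False)
      dx = x[1] - x[0]
      dt = 0.4 * dx / c
      u_prev = np.exp(-200.0 * (x - 0.5) ** 2)
      u_now = u_prev + dt * rhs(u_prev, c, dx)        # simple Euler start-up step
      for _ in range(int(1.0 / (c * dt))):
          u_prev, u_now = u_now, flt_like_step(u_prev, u_now, c, dx, dt)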

  9. An improved partial bundle method for linearly constrained minimax problems

    Directory of Open Access Journals (Sweden)

    Chunming Tang

    2016-02-01

    Full Text Available In this paper, we propose an improved partial bundle method for solving linearly constrained minimax problems. In order to reduce the number of component function evaluations, we utilize a partial cutting-planes model to substitute for the traditional one. At each iteration, only one quadratic programming subproblem needs to be solved to obtain a new trial point. An improved descent test criterion is introduced to simplify the algorithm. The method produces a sequence of feasible trial points, and ensures that the objective function is monotonically decreasing on the sequence of stability centers. Global convergence of the algorithm is established. Moreover, we utilize the subgradient aggregation strategy to control the size of the bundle and therefore overcome the difficulty of computation and storage. Finally, some preliminary numerical results show that the proposed method is effective.

  10. Weighted particle method for solving the Boltzmann equation

    International Nuclear Information System (INIS)

    Tohyama, M.; Suraud, E.

    1990-01-01

    We propose a new, deterministic method of solution of the nuclear Boltzmann equation. In this Weighted Particle Method, two-body collisions are treated by a master equation for the occupation probability of each numerical particle. We apply the method to the quadrupole motion of 12C. A comparison with the usual stochastic methods is made. Advantages and disadvantages of the Weighted Particle Method are discussed.

  11. Augmenting Ordinal Methods of Attribute Weight Approximation

    DEFF Research Database (Denmark)

    Daneilson, Mats; Ekenberg, Love; He, Ying

    2014-01-01

    of the obstacles and methods for introducing so-called surrogate weights have proliferated in the form of ordinal ranking methods for criteria weights. Considering the decision quality, one main problem is that the input information allowed in ordinal methods is sometimes too restricted. At the same time, decision...... makers often possess more background information, for example, regarding the relative strengths of the criteria, and might want to use that. We propose combined methods for facilitating the elicitation process and show how this provides a way to use partial information from the strength of preference...
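
    A common example of the ordinal (surrogate) weighting the abstract refers to is the rank order centroid scheme, which turns a pure ranking of criteria into numerical weights. The sketch below shows that baseline; the paper's combined methods augment such rankings with strength-of-preference information, which is not reproduced here.

      def roc_weights(n: int) -> list:
          """Rank Order Centroid weights for n criteria ranked from most to least
          important: w_i = (1/n) * sum_{k=i}^{n} 1/k."""
          return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

      print(roc_weights(4))   # [0.5208..., 0.2708..., 0.1458..., 0.0625]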

  12. Lowest-order constrained variational method for simple many-fermion systems

    International Nuclear Information System (INIS)

    Alexandrov, I.; Moszkowski, S.A.; Wong, C.W.

    1975-01-01

    The authors study the potential energy of many-fermion systems calculated by the lowest-order constrained variational (LOCV) method of Pandharipande. Two simple two-body interactions are used. For a simple hard-core potential in a dilute Fermi gas, they find that the Huang-Yang exclusion correction can be used to determine a healing distance. The result is close to the older Pandharipande prescription for the healing distance. For a hard core plus attractive exponential potential, the LOCV result agrees closely with the lowest-order separation method of Moszkowski and Scott. They find that the LOCV result has a shallow minimum as a function of the healing distance at the Moszkowski-Scott separation distance. The significance of the absence of a Brueckner dispersion correction in the LOCV result is discussed. (Auth.)

  13. A Sequential Quadratically Constrained Quadratic Programming Method of Feasible Directions

    International Nuclear Information System (INIS)

    Jian Jinbao; Hu Qingjie; Tang Chunming; Zheng Haiyan

    2007-01-01

    In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints and does not require the second derivatives of the functions of the discussed problems; such a subproblem can be formulated as a second-order cone program which can be solved by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by a single explicit computation formula. The algorithm is proved to be globally and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported.

  14. Direct integral linear least square regression method for kinetic evaluation of hepatobiliary scintigraphy

    International Nuclear Information System (INIS)

    Shuke, Noriyuki

    1991-01-01

    In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters such as the hepatic extraction or excretion rate, has been used for quantitative evaluation of liver function. In this analysis, the unknown model parameters are usually determined using the nonlinear least-squares regression method (NLS method), which requires iterative calculation and initial estimates for the unknown parameters. As a simple alternative to the NLS method, the direct integral linear least-squares regression method (DILS method), which can determine the model parameters by a simple calculation without an initial estimate, is proposed, and its applicability to the analysis of hepatobiliary scintigraphy is tested. In order to see whether the DILS method could determine the model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a one-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the weight that minimizes the error. With this weight, the DILS method gave parameter values close to those obtained by the NLS method, and both sets of parameter values were very close to the prefixed values. With appropriate weighting, the DILS method provides reliable parameter estimates that are relatively insensitive to data noise. In conclusion, the DILS method can be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
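
    The direct-integral idea can be sketched for a generic one-compartment model: integrating the model equation from 0 to t turns the unknown rate constants into coefficients of a linear relation, which can be solved by weighted linear least squares with the 1/t weight reported above. The model form, variable names and use of a trapezoidal cumulative integral below are illustrative assumptions, not the author's exact formulation.

      import numpy as np
      from scipy.integrate import cumulative_trapezoid

      def dils_fit(t, c, u):
          """Direct-integral linear least-squares fit of a one-compartment model
          dC/dt = k_in * u(t) - k_out * C(t), C(0) = 0   (illustrative model form).
          Integrating from 0 to t gives  C(t) = k_in * I_u(t) - k_out * I_c(t),
          which is linear in (k_in, k_out); t, c, u are 1-D arrays with t[0] = 0."""
          I_u = cumulative_trapezoid(u, t, initial=0.0)
          I_c = cumulative_trapezoid(c, t, initial=0.0)
          w = np.zeros_like(t)
          w[1:] = 1.0 / t[1:]                   # 1/t weighting; skip t = 0
          A = np.column_stack([I_u, -I_c]) * np.sqrt(w)[:, None]
          b = c * np.sqrt(w)
          (k_in, k_out), *_ = np.linalg.lstsq(A, b, rcond=None)
          return k_in, k_out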

  15. Improving method for calculating integral index of personnel security of company

    Directory of Open Access Journals (Sweden)

    Chjan Khao Yui

    2016-06-01

    Full Text Available The paper improves the method of calculating the integral index of personnel security of a company. The author has identified four components of personnel security (social and motivational safety, occupational safety, non-conflict security, and life safety), which are characterized by certain indicators. The integral index of personnel security is designed for enterprises of the machine-building sector in the Kharkov region, taking into account the weight coefficients b_j of the j-th component and the weighting factors a_ij that determine the degree of contribution of the i-th parameter to the integral index, as defined by experts.
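
    Read this way, the integral index is a two-level weighted sum: indicator values are aggregated within each component with weights a_ij, and the component scores are aggregated with weights b_j. A small numerical sketch with hypothetical weights and indicator values:

      import numpy as np

      # Hypothetical data: 4 personnel-security components, each with its own indicators.
      # a[j] holds the indicator weights of component j, x[j] the normalised indicator values.
      a = [np.array([0.6, 0.4]), np.array([0.5, 0.3, 0.2]),
           np.array([1.0]),       np.array([0.7, 0.3])]
      x = [np.array([0.8, 0.6]), np.array([0.7, 0.9, 0.5]),
           np.array([0.65]),      np.array([0.9, 0.4])]
      b = np.array([0.3, 0.3, 0.2, 0.2])   # component weights b_j from experts

      component_scores = np.array([float(ai @ xi) for ai, xi in zip(a, x)])
      integral_index = float(b @ component_scores)   # I = sum_j b_j * sum_i a_ij * x_ij
      print(component_scores, integral_index)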

  16. New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Bingzhuang Liu

    2014-01-01

    Full Text Available For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.

  17. Comprehensive Evaluation of the Sustainable Development of Power Grid Enterprises Based on the Model of Fuzzy Group Ideal Point Method and Combination Weighting Method with Improved Group Order Relation Method and Entropy Weight Method

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2017-10-01

    Full Text Available As an important implementing body of the national energy strategy, grid enterprises bear the important responsibility of optimizing the allocation of energy resources and serving the economic and social development, and their levels of sustainable development have a direct impact on the national economy and social life. In this paper, the model of fuzzy group ideal point method and combination weighting method with improved group order relation method and entropy weight method is proposed to evaluate the sustainable development of power grid enterprises. Firstly, on the basis of consulting a large amount of literature, the important criteria of the comprehensive evaluation of the sustainable development of power grid enterprises are preliminarily selected. The opinions of the industry experts are consulted and fed back for many rounds through the Delphi method and the evaluation criteria system for sustainable development of power grid enterprises is determined, then doing the consistent and non dimensional processing of the evaluation criteria. After that, based on the basic order relation method, the weights of each expert judgment matrix are synthesized to construct the compound matter elements. By using matter element analysis, the subjective weights of the criteria are obtained. And entropy weight method is used to determine the objective weights of the preprocessed criteria. Then, combining the subjective and objective information with the combination weighting method based on the subjective and objective weighted attribute value consistency, a more comprehensive, reasonable and accurate combination weight is calculated. Finally, based on the traditional TOPSIS method, the triangular fuzzy numbers are introduced to better realize the scientific processing of the data information which is difficult to quantify, and the queuing indication value of each object and the ranking result are obtained. A numerical example is taken to prove that the
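
    Of the two weighting ingredients mentioned above, the entropy weight method is the objective one and is straightforward to state: criteria whose values vary more across alternatives carry more information and receive larger weights. A minimal sketch on a hypothetical, already non-dimensionalized decision matrix (the order-relation and fuzzy TOPSIS parts are not reproduced here):

      import numpy as np

      def entropy_weights(X):
          """Entropy weight method for an m x n decision matrix (m alternatives,
          n benefit-type criteria already made non-negative and non-dimensional)."""
          P = X / X.sum(axis=0)                        # column-wise proportions
          m = X.shape[0]
          with np.errstate(divide="ignore", invalid="ignore"):
              plogp = np.where(P > 0, P * np.log(P), 0.0)
          e = -plogp.sum(axis=0) / np.log(m)           # entropy of each criterion
          d = 1.0 - e                                  # degree of divergence
          return d / d.sum()

      X = np.array([[0.8, 0.6, 0.9],
                    [0.7, 0.9, 0.4],
                    [0.6, 0.7, 0.8]])
      print(entropy_weights(X))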

  18. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

    Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  19. Integrating Iris and Signature Traits for Personal Authentication Using User-SpecificWeighting

    Directory of Open Access Journals (Sweden)

    Serestina Viriri

    2012-03-01

    Full Text Available Biometric systems based on uni-modal traits are characterized by noisy sensor data, restricted degrees of freedom, non-universality and are susceptible to spoof attacks. Multi-modal biometric systems seek to alleviate some of these drawbacks by providing multiple evidences of the same identity. In this paper, a user-score-based weighting technique for integrating the iris and signature traits is presented. This user-specific weighting technique has proved to be an efficient and effective fusion scheme which increases the authentication accuracy rate of multi-modal biometric systems. The weights are used to indicate the importance of matching scores output by each biometrics trait. The experimental results show that our biometric system based on the integration of iris and signature traits achieve a false rejection rate (FRR of 0.08% and a false acceptance rate (FAR of 0.01%.

  20. The weighted-sum-of-gray-gases model for arbitrary solution methods in radiative transfer

    International Nuclear Information System (INIS)

    Modest, M.F.

    1991-01-01

    In this paper the weighted-sum-of-gray-gases approach for radiative transfer in non-gray participating media, first developed by Hottel in the context of the zonal method, has been shown to be applicable to the general radiative equation of transfer. Within the limits of the weighted-sum-of-gray-gases model (non-scattering media within a black-walled enclosure) any non-gray radiation problem can be solved by any desired solution method after replacing the medium by an equivalent small number of gray media with constant absorption coefficients. Some examples are presented for isothermal media and media at radiative equilibrium, using the exact integral equations as well as the popular P-1 approximation of the equivalent gray media solution. The results demonstrate the equivalency of the method with the quadrature of spectral results, as well as the tremendous computer time savings (by a minimum of 95%) which are achieved.
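
    In its usual form the weighted-sum-of-gray-gases model expresses the total emissivity of the medium as a weighted sum of gray-gas contributions, eps = sum_i a_i * (1 - exp(-k_i * pL)). The sketch below evaluates that expression; the coefficients shown are purely illustrative and do not correspond to any published correlation.

      import numpy as np

      def wsgg_emissivity(pL, a, k):
          """Total emissivity from a weighted-sum-of-gray-gases fit:
          eps = sum_i a_i * (1 - exp(-k_i * pL)).
          a_i are the (temperature-dependent) gray-gas weights, k_i the pressure
          absorption coefficients; the clear gas (k = 0) contributes nothing."""
          a, k = np.asarray(a), np.asarray(k)
          return float(np.sum(a * (1.0 - np.exp(-k * pL))))

      a = [0.35, 0.25, 0.15]   # weights of the gray gases (the clear gas takes the rest) -- hypothetical
      k = [0.4, 7.0, 80.0]     # absorption coefficients, 1/(atm m) -- hypothetical
      print(wsgg_emissivity(pL=1.0, a=a, k=k))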

  1. Coordinated trajectory planning of dual-arm space robot using constrained particle swarm optimization

    Science.gov (United States)

    Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich

    2018-05-01

    Application of the multi-arm space robot will be more effective than single arm especially when the target is tumbling. This paper investigates the application of particle swarm optimization (PSO) strategy to coordinated trajectory planning of the dual-arm space robot in free-floating mode. In order to overcome the dynamics singularities issue, the direct kinematics equations in conjunction with constrained PSO are employed for coordinated trajectory planning of dual-arm space robot. The joint trajectories are parametrized with Bézier curve to simplify the calculation. Constrained PSO scheme with adaptive inertia weight is implemented to find the optimal solution of joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue due to the application of forward kinematic equations. Simulation results are presented for coordinated trajectory planning of two kinematically redundant manipulators mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
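
    A generic constrained PSO of the kind described, with a decreasing inertia weight and constraints handled by a penalty term, can be sketched as follows. In the paper the decision variables would be the Bezier control points of the joint trajectories; here a toy objective and constraint are used, and the inertia schedule, penalty factor and PSO coefficients are assumptions.

      import numpy as np

      def constrained_pso(f, g, lb, ub, n_particles=30, iters=200, penalty=1e6, seed=0):
          """Minimal PSO with a linearly decreasing inertia weight and a quadratic
          penalty for constraint violation; g(x) returns constraint values, feasible if <= 0."""
          rng = np.random.default_rng(seed)
          lb, ub = np.asarray(lb, float), np.asarray(ub, float)
          dim = len(lb)
          x = rng.uniform(lb, ub, size=(n_particles, dim))
          v = np.zeros_like(x)

          def cost(xi):
              return f(xi) + penalty * np.sum(np.maximum(0.0, g(xi)) ** 2)

          pbest, pbest_c = x.copy(), np.array([cost(xi) for xi in x])
          gbest = pbest[np.argmin(pbest_c)].copy()
          for it in range(iters):
              w = 0.9 - 0.5 * it / (iters - 1)          # adaptive (decreasing) inertia weight
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
              x = np.clip(x + v, lb, ub)
              c = np.array([cost(xi) for xi in x])
              improved = c < pbest_c
              pbest[improved], pbest_c[improved] = x[improved], c[improved]
              gbest = pbest[np.argmin(pbest_c)].copy()
          return gbest, pbest_c.min()

      # toy usage: minimise a sphere function subject to x0 + x1 >= 1
      sol, val = constrained_pso(lambda x: float(np.sum(x**2)),
                                 lambda x: np.array([1.0 - x[0] - x[1]]),
                                 lb=[-2, -2], ub=[2, 2])
      print(sol, val)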

  2. Numerical evaluation of integrals containing a spherical Bessel function by product integration

    International Nuclear Information System (INIS)

    Lehman, D.R.; Parke, W.C.; Maximon, L.C.

    1981-01-01

    A method is developed for numerical evaluation of integrals with k-integration range from 0 to infinity that contain a spherical Bessel function j_l(kr) explicitly. The required quadrature weights are easily calculated and the rate of convergence is rapid: only a relatively small number of quadrature points is needed for an accurate evaluation, even when r is large. The quadrature rule is obtained by the method of product integration. With the abscissas chosen to be those of Clenshaw-Curtis and the Chebyshev polynomials as the interpolating polynomials, quadrature weights are obtained that depend on the spherical Bessel function. An inhomogeneous recurrence relation is derived from which the weights can be calculated without accumulation of roundoff error. The procedure is summarized as an easily implementable algorithm. Questions of convergence are discussed and the rate of convergence is demonstrated for several test integrals. Alternative procedures are given for generating the integration weights and an error analysis of the method is presented.

  3. A Variant of the Topkis-Veinott Method for Solving Inequality Constrained Optimization Problems

    International Nuclear Information System (INIS)

    Birge, J. R.; Qi, L.; Wei, Z.

    2000-01-01

    In this paper we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC), and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that, for any FJ point y ∈ N(z)\{z}, f_0(y) ≠ f_0(z), where f_0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The result (i) implies that the entire iteration point sequence generated by the method converges to an FJ point. We also show that if the parameters are chosen large enough, a unit step length can be accepted by the proposed algorithm.

  4. Dynamic re-weighted total variation technique and statistic Iterative reconstruction method for x-ray CT metal artifact reduction

    Science.gov (United States)

    Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming

    2017-07-01

    Over the years, the X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the image reconstructed would be polluted by severe metal artifacts, which affect the doctor's diagnosis of disease. In this work, we proposed a dynamic re-weighted total variation (DRWTV) technique combined with the statistic iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects the tissue details better than RWTV. Besides, the DRWTV can suppress the artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both simulated phantom dataset and clinical dataset, which are the teeth phantom with two metal implants and the skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, which are SIR and SIR constrained by RWTV regulation (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.
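
    The re-weighted TV ingredient can be illustrated in isolation: weights inversely proportional to the current gradient magnitude penalize smooth regions more than edges, and re-computing them from the latest reconstruction at each outer iteration gives the re-weighting. The sketch below shows that generic step only, not the paper's dynamic variant or the SIR data term; the epsilon value is an assumption.

      import numpy as np

      def rwtv_weights(u, eps=1e-3):
          """Re-weighted TV weights for a 2-D image: w = 1 / (|grad u| + eps),
          computed with forward differences (replicated at the last row/column)."""
          gx = np.diff(u, axis=1, append=u[:, -1:])
          gy = np.diff(u, axis=0, append=u[-1:, :])
          grad_mag = np.sqrt(gx**2 + gy**2)
          return 1.0 / (grad_mag + eps)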

  5. The Weighted Burgers Vector: a new quantity for constraining dislocation densities and types using electron backscatter diffraction on 2D sections through crystalline materials.

    Science.gov (United States)

    Wheeler, J; Mariani, E; Piazolo, S; Prior, D J; Trimby, P; Drury, M R

    2009-03-01

    The Weighted Burgers Vector (WBV) is defined here as the sum, over all types of dislocations, of [(density of intersections of dislocation lines with a map) x (Burgers vector)]. Here we show that it can be calculated, for any crystal system, solely from orientation gradients in a map view, unlike the full dislocation density tensor, which requires gradients in the third dimension. No assumption is made about gradients in the third dimension and they may be non-zero. The only assumption involved is that elastic strains are small so the lattice distortion is entirely due to dislocations. Orientation gradients can be estimated from gridded orientation measurements obtained by EBSD mapping, so the WBV can be calculated as a vector field on an EBSD map. The magnitude of the WBV gives a lower bound on the magnitude of the dislocation density tensor when that magnitude is defined in a coordinate invariant way. The direction of the WBV can constrain the types of Burgers vectors of geometrically necessary dislocations present in the microstructure, most clearly when it is broken down in terms of lattice vectors. The WBV has three advantages over other measures of local lattice distortion: it is a vector and hence carries more information than a scalar quantity, it has an explicit mathematical link to the individual Burgers vectors of dislocations and, since it is derived via tensor calculus, it is not dependent on the map coordinate system. If a sub-grain wall is included in the WBV calculation, the magnitude of the WBV becomes dependent on the step size but its direction still carries information on the Burgers vectors in the wall. The net Burgers vector content of dislocations intersecting an area of a map can be simply calculated by an integration round the edge of that area, a method which is fast and complements point-by-point WBV calculations.

  6. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    Science.gov (United States)

    Kot, V. A.

    2017-11-01

    The modern state of approximate integral methods used in applications, where the processes of heat conduction and heat and mass transfer are of first importance, is considered. Integral methods have found a wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal definition of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman integral method of heat balance, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than the accuracy of numerical solutions.
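
    As a concrete example of the heat-balance-integral family surveyed here, the classical Goodman treatment of a semi-infinite solid with a step surface temperature and an assumed quadratic profile gives a penetration depth delta(t) = sqrt(12*alpha*t) and a surface heat flux within a few percent of the exact similarity solution. The numbers below are illustrative; the derivation assumes the quadratic profile, which is only one of the choices discussed in such methods.

      import numpy as np

      alpha, Ts, k, t = 1e-5, 1.0, 1.0, 10.0          # illustrative values (SI units)
      delta = np.sqrt(12.0 * alpha * t)               # HBIM penetration depth
      q_hbim = 2.0 * k * Ts / delta                   # surface flux from the quadratic profile
      q_exact = k * Ts / np.sqrt(np.pi * alpha * t)   # exact similarity-solution flux
      print(delta, q_hbim / q_exact)                  # flux ratio = sqrt(pi/3), about 1.02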

  7. Coaching and barriers to weight loss: an integrative review

    Directory of Open Access Journals (Sweden)

    Muñoz Obino KF

    2016-12-01

    Full Text Available Karen Fernanda Muñoz Obino,1 Caroline Aguiar Pereira,1 Rafaela Siviero Caron-Lienert2 1Nutrology/Clinical Nutrition Unit, Ernesto Dornelles Hospital, 2Nutrition of the Educational and Research Institute of Moinhos de Vento Hospital, Porto Alegre, Brazil Introduction: Coaching is proposed to raise a patient’s awareness and responsibility for their health behaviour change by transforming the professional–patient relationship. Objective: To review the scientific literature on how coaching can assist in weight loss and improve a patient’s state of health. Methodology: An integrative literature search was performed using PubMed, Latin American and Caribbean Literature in Health Sciences, and Scientific Electronic Library Online. We selected articles that were published in Portuguese, English, and Spanish over the last 10 years. Data analysis was performed using a validated data collection instrument. Results: Among the 289 articles identified in the search, 276 were excluded because they did not address the leading research question, their full texts were not available on the Internet, or they were duplicate publications. Therefore, for the analysis, we selected 13 articles that we classified as randomized clinical studies (46.15%; n=6), cohort studies (30.76%; n=4), cross-sectional studies (7.69%; n=1), case studies (7.69%; n=1), and review articles (7.69%; n=1). Joint intervention (combined in-person and telecoaching sessions) constituted the majority of session types. The use of technical coaching was superior in reducing anthropometric measurements and increasing the levels of motivation and personal satisfaction compared with formal health education alone. Conclusion: Coaching is an efficient, cost-effective method for combining formal education and treatment of health in the weight-loss process. Additional randomized studies are needed to demonstrate its effectiveness with respect to chronic disease indicators. Keywords: coaching, weight loss

  8. On a path integral description of the dynamics of an inextensible chain and its connection to constrained stochastic dynamics

    International Nuclear Information System (INIS)

    Ferrari, Franco; Paturej, Jaroslaw

    2009-01-01

    The dynamics of a freely jointed chain in the continuous limit is described by a field theory which closely resembles the nonlinear sigma model. The generating functional Ψ[J] of this field theory contains nonholonomic constraints, which are imposed by inserting in the path integral expressing Ψ[J] a suitable product of delta functions. The same procedure is commonly applied in statistical mechanics in order to enforce topological conditions on a system of linked polymers. The disadvantage of this method is that the contact with the stochastic process governing the diffusion of the chain is apparently lost. The main goal of this work is to re-establish this contact. For this purpose, it is shown here that the generating functional Ψ[J] coincides with the generating functional of the correlation functions of the solutions of a constrained Langevin equation. In the discrete case, this Langevin equation describes as expected the Brownian motion of beads connected together by links of fixed length

  9. Constrained systems described by Nambu mechanics

    International Nuclear Information System (INIS)

    Lassig, C.C.; Joshi, G.C.

    1996-01-01

    Using the framework of Nambu's generalised mechanics, we obtain a new description of constrained Hamiltonian dynamics, involving the introduction of another degree of freedom in phase space, and the necessity of defining the action integral on a world sheet. We also discuss the problem of quantizing Nambu mechanics. (authors). 5 refs

  10. A nomograph method for assessing body weight.

    Science.gov (United States)

    Thomas, A E; McKay, D A; Cutlip, M B

    1976-03-01

    The ratio of weight/height emerges from varied epidemiological studies as the most generally useful index of relative body mass in adults. The authors present a nomograph to facilitate use of this relationship in clinical situations. While showing the range of weight given as desirable in life insurance studies, the scale expresses relative weight as a continuous variable. This method encourages use of clinical judgment in interpreting "overweight" and "underweight" and in accounting for muscular and skeletal contributions to measured mass.

  11. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Science.gov (United States)

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

    Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.

  12. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
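
    A minimal illustration of a reflected stochastic process is one-dimensional Brownian motion with drift reflected at a barrier, which can be simulated by folding any excursion past the barrier back into the allowed region. The sketch below is only this textbook special case, not the latent-path augmentation or inference machinery of the paper; the drift, volatility and barrier location are assumptions.

      import numpy as np

      def simulate_reflected_path(x0=1.0, drift=-0.1, sigma=0.5, barrier=0.0,
                                  dt=0.01, n_steps=1000, seed=0):
          """Euler scheme for a 1-D SDE reflected at a lower barrier:
          any step that crosses the barrier is folded back symmetrically."""
          rng = np.random.default_rng(seed)
          x = np.empty(n_steps + 1)
          x[0] = x0
          for k in range(n_steps):
              step = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              proposal = x[k] + step
              x[k + 1] = barrier + abs(proposal - barrier)   # reflection at the barrier
          return x

      path = simulate_reflected_path()
      print(path.min(), path[-1])   # the path never goes below the barrier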

  13. Coaching and barriers to weight loss: an integrative review.

    Science.gov (United States)

    Muñoz Obino, Karen Fernanda; Aguiar Pereira, Caroline; Caron-Lienert, Rafaela Siviero

    2017-01-01

    Coaching is proposed to raise a patient's awareness and responsibility for their health behaviour change by transforming the professional-patient relationship. To review the scientific literature on how coaching can assist in weight loss and improve a patient's state of health. An integrative literature search was performed using PubMed, Latin American and Caribbean Literature in Health Sciences, and Scientific Electronic Library Online. We selected articles that were published in Portuguese, English, and Spanish over the last 10 years. Data analysis was performed using a validated data collection instrument. Among the 289 articles identified in the search, 276 were excluded because they did not address the leading research question, their full texts were not available on the Internet, or they were duplicate publications. Therefore, for the analysis, we selected 13 articles that we classified as randomized clinical studies (46.15%; n=6), cohort studies (30.76%; n=4), cross-sectional studies (7.69%; n=1), case studies (7.69%; n=1), and review articles (7.69%; n=1). Joint intervention (combined in-person and telecoaching sessions) constituted the majority of session types. The use of technical coaching was superior in reducing anthropometric measurements and increasing the levels of motivation and personal satisfaction compared with formal health education alone. Coaching is an efficient, cost-effective method for combining formal education and treatment of health in the weight-loss process. Additional randomized studies are needed to demonstrate its effectiveness with respect to chronic disease indicators.

  14. How recalibration method, pricing, and coding affect DRG weights

    Science.gov (United States)

    Carter, Grace M.; Rogowski, Jeannette A.

    1992-01-01

    We compared diagnosis-related group (DRG) weights calculated using the hospital-specific relative-value (HSRV) methodology with those calculated using the standard methodology for each year from 1985 through 1989 and analyzed differences between the two methods in detail for 1989. We provide evidence suggesting that classification error and subsidies of higher weighted cases by lower weighted cases caused compression in the weights used for payment as late as the fifth year of the prospective payment system. However, later weights calculated by the standard method are not compressed because a statistical correlation between high markups and high case-mix indexes offsets the cross-subsidization. HSRV weights from the same files are compressed because this methodology is more sensitive to cross-subsidies. However, both sets of weights produce equally good estimates of hospital-level costs net of those expenses that are paid by outlier payments. The greater compression of the HSRV weights is counterbalanced by the fact that more high-weight cases qualify as outliers. PMID:10127456

  15. Hybrid real-code ant colony optimisation for constrained mechanical design

    Science.gov (United States)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.

  16. Constraining the cross section of 82Se(n, γ)83Se to validate the β-Oslo method

    Science.gov (United States)

    Childers, K.; Liddick, S. N.; Crider, B. P.; Dombos, A. C.; Lewis, R.; Spyrou, A.; Couture, A.; Mosby, S.; Prokop, C. J.; Naqvi, F.; Larsen, A. C.; Guttormsen, M.; Campo, L. C.; Renstrom, T.; Siem, S.; Bleuel, D. L.; Perdikakis, G.; Quinn, S.

    2017-09-01

    Neutron capture cross sections of short-lived nuclei are important for a variety of basic and applied nuclear science problems. However, because of the short half-lives of the nuclei involved and the nonexistence of a neutron target, indirect measurement methods are required. One such method is the β-Oslo method. The nuclear level density and γ strength function of a nucleus are extracted after β-decay and used in a statistical reaction model to constrain the neutron capture cross section. This method has been used previously, but must be validated against a directly measured neutron capture cross section. The neutron capture cross section of 82Se has been measured previously, and 83Se can be accessed by the β-decay of 83As. The β-decay of 83As to 83Se was studied using the SuN detector at the NSCL and the β-Oslo method was utilized to constrain the neutron capture cross section of 82Se, which is compared to the directly measured value.

  17. Minimum weight protection - Gradient method; Protection de poids minimum - Methode du gradient

    Energy Technology Data Exchange (ETDEWEB)

    Danon, R.

    1958-12-15

    After having recalled that, when considering a mobile installation, total weight is of crucial importance, and that, in the case of a nuclear reactor, a non-negligible part of the weight is that of the protection, this note presents an iterative method which results, for a given protection, in a configuration with minimum weight. After a description of the problem, the author presents the theoretical formulation of the gradient method as applied to the case concerned. This application is then discussed, as well as its validity in terms of convergence and uniqueness. Its actual application is then reported, and possibilities of practical applications are evoked.

  18. Topology Optimization of Constrained Layer Damping on Plates Using Method of Moving Asymptote (MMA) Approach

    Directory of Open Access Journals (Sweden)

    Zheng Ling

    2011-01-01

    Damping treatments have been extensively used as a powerful means to damp out structural resonant vibrations. Usually, damping materials fully cover the surface of plates. The drawbacks of this conventional treatment are also obvious due to added mass and excess material consumption; therefore, it is not always economical and effective from an optimization design view. In this paper, a topology optimization approach is presented to maximize the modal damping ratio of a plate with constrained layer damping treatment. The governing equation of motion of the plate is derived on the basis of an energy approach. A finite element model describing the dynamic performance of the plate is developed and used along with an optimization algorithm in order to determine the optimal topologies of the constrained layer damping layout on the plate. The damping of the visco-elastic layer is modeled by the complex modulus formula. Considering the vibration and energy dissipation mode of the plate with constrained layer damping treatment, the damping material density and volume factor are taken as design variable and constraint, respectively. Meanwhile, the modal damping ratio of the plate is assigned as the objective function in the topology optimization approach. The sensitivity of the modal damping ratio to the design variable is further derived, and the Method of Moving Asymptote (MMA) is adopted to search for the optimized topologies of the constrained layer damping layout on the plate. Numerical examples are used to demonstrate the effectiveness of the proposed topology optimization approach. The results show that vibration energy dissipation of the plates can be enhanced by the optimal constrained layer damping layout. This optimization technology can be further extended to vibration attenuation of sandwich cylindrical shells, which constitute the major building block of many critical structures such as cabins of aircraft, hulls of submarines, and bodies of rockets and missiles.

  19. A novel weight determination method for time series data aggregation

    Science.gov (United States)

    Xu, Paiheng; Zhang, Rong; Deng, Yong

    2017-09-01

    Aggregation in time series is of great importance in time series smoothing, prediction and other time series analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced into the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator is able to generate weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of the vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
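
    As a rough illustration of the idea (not the authors' exact operators), the sketch below combines exponentially decaying ordered weights, standing in for the IOWA part, with degree-based weights from a natural visibility graph, standing in for the VGA part; the decay factor, mixing coefficient and sample series are hypothetical.

    ```python
    import numpy as np

    def visibility_degrees(y):
        """Degree of each point in the natural visibility graph of the series y."""
        n = len(y)
        deg = np.zeros(n, dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                # i and j "see" each other if every point between them lies below the line i-j
                visible = all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                              for k in range(i + 1, j))
                if visible:
                    deg[i] += 1
                    deg[j] += 1
        return deg

    def combined_weights(y, decay=0.9, mix=0.5):
        """Linearly combine time-decay (IOWA-style) weights with visibility-graph degree weights."""
        n = len(y)
        w_time = decay ** np.arange(n - 1, -1, -1)      # newer observations weigh more
        w_time = w_time / w_time.sum()
        deg = visibility_degrees(y).astype(float)
        w_graph = deg / deg.sum()
        return mix * w_time + (1 - mix) * w_graph

    y = np.array([4.1, 4.3, 4.0, 4.6, 4.8, 4.5])
    w = combined_weights(y)
    print("weights:", np.round(w, 3), "aggregated value:", round(float(np.dot(w, y)), 3))
    ```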

  20. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  1. An integrating factor matrix method to find first integrals

    International Nuclear Information System (INIS)

    Saputra, K V I; Quispel, G R W; Van Veen, L

    2010-01-01

    In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.

  2. Locating new uranium occurrence by integrated weighted analysis in Kaladgi basin, Karnataka

    International Nuclear Information System (INIS)

    Sridhar, M.; Chaturvedi, A.K.; Rai, A.K.

    2014-01-01

    This study aims at identifying uranium potential zones by integrated analysis of thematic layers interpreted and derived from airborne radiometric and magnetic data and satellite data, along with available ground geochemical data, in the western part of the Kaladgi basin. Integrated weighted analysis of spatial datasets, which included airborne radiometric data (eU, eTh and % K conc.), a litho-structural map, hydrogeochemical U conc., and geomorphological data pertaining to the study area, was attempted. The weighted analysis was done in a GIS environment, where the different spatial datasets were brought onto a single platform and analyzed by integration

  3. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods

    Science.gov (United States)

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E.; Re, Matteo

    2014-01-01

    Objective In the context of “network medicine”, gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works proposed to integrate multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic studies focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim at providing an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. Materials and methods We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the considered 708 medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, and the recently proposed kernelized score functions. Results The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different “informativeness” embedded in different functional networks, outperforms unweighted integration at 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further bio-medical investigation. Conclusions Network integration is necessary to boost the performances of gene prioritization methods. Moreover the methods based on kernelized score functions can further

  4. Weight-training injuries. Common injuries and preventative methods.

    Science.gov (United States)

    Mazur, L J; Yetman, R J; Risser, W L

    1993-07-01

    The use of weights is an increasingly popular conditioning technique, competitive sport and recreational activity among children, adolescents and young adults. Weight-training can cause significant musculoskeletal injuries such as fractures, dislocations, spondylolysis, spondylolisthesis, intervertebral disk herniation, and meniscal injuries of the knee. Although injuries can occur during the use of weight machines, most apparently happen during the aggressive use of free weights. Prepubescent and older athletes who are well trained and supervised appear to have low injury rates in strength training programmes. Good coaching and proper weightlifting techniques and other injury prevention methods are likely to minimise the number of musculoskeletal problems caused by weight-training.

  5. A comparison of the weights-of-evidence method and probabilistic neural networks

    Science.gov (United States)

    Singer, Donald A.; Kouda, Ryoichi

    1999-01-01

    The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation rather than comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake-Andeson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes where the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Despite these data containing too few deposits, these tests of this set of data demonstrate the neural network's ability at making unbiased probability estimates and lower error rates when measured by number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias where most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake–Andeson Lake data. However, the expected number of deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits
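
    For reference, a minimal sketch (not the authors' code) of how the positive and negative weights of evidence are computed for a single binary evidence layer from unit-cell counts; the counts below are hypothetical.

    ```python
    import numpy as np

    def weights_of_evidence(b_and_d, b_no_d, nb_and_d, nb_no_d):
        """W+ and W- for one binary evidence layer, from unit-cell counts.

        b_and_d:  cells where the evidence pattern B is present and a deposit occurs
        b_no_d:   B present, no deposit;  nb_and_d / nb_no_d: the B-absent counterparts."""
        p_b_given_d = b_and_d / (b_and_d + nb_and_d)        # P(B | deposit)
        p_b_given_nd = b_no_d / (b_no_d + nb_no_d)          # P(B | no deposit)
        w_plus = np.log(p_b_given_d / p_b_given_nd)
        w_minus = np.log((1.0 - p_b_given_d) / (1.0 - p_b_given_nd))
        return w_plus, w_minus

    # hypothetical counts: 8 of 10 deposits fall on the pattern, which also covers 200 of 1000 barren cells
    wp, wm = weights_of_evidence(8, 200, 2, 800)
    print(f"W+ = {wp:.2f}, W- = {wm:.2f}, contrast C = {wp - wm:.2f}")
    ```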

  6. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes...

  7. Weighting Function Integrated in Grid-interfacing Converters for Unbalanced Voltage Correction

    NARCIS (Netherlands)

    Wang, F.; Duarte, J.L.; Hendrix, M.A.M.

    2008-01-01

    In this paper a weighting function for voltage unbalance correction is proposed to be integrated into the control of distributed grid-interfacing systems. The correction action can help decrease the negative-sequence voltage at the point of connection with the grid. Based on the voltage unbalance

  8. Novel methods for Solving Economic Dispatch of Security-Constrained Unit Commitment Based on Linear Programming

    Science.gov (United States)

    Guo, Sangang

    2017-09-01

    There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC), the other is the power economic dispatch (ED) for each unit. An accurate solution of the ED is the more important of the two for enhancing the efficiency of the SCUC solution once the feasible unit states are fixed. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method respectively, both based on linear programming, are proposed for solving the ED by piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
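
    Neither of the two proposed formulations is reproduced here; the sketch below only illustrates the shared underlying idea, replacing a convex quadratic fuel cost by linear segments and dispatching the segment increments with an LP (scipy.optimize.linprog). The unit data and demand are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # hypothetical units: quadratic fuel cost a*P^2 + b*P, output limits [Pmin, Pmax]
    units = [dict(a=0.004, b=8.0, Pmin=50.0, Pmax=300.0),
             dict(a=0.006, b=9.5, Pmin=30.0, Pmax=200.0)]
    demand = 380.0
    n_seg = 4                                        # linear segments per unit

    c, bounds = [], []
    base = sum(u["Pmin"] for u in units)
    for u in units:
        width = (u["Pmax"] - u["Pmin"]) / n_seg
        fuel = lambda P, u=u: u["a"] * P ** 2 + u["b"] * P
        for s in range(n_seg):
            p_lo = u["Pmin"] + s * width
            # slope of each segment; convexity makes the slopes increasing, so the LP
            # fills cheap segments first and the piecewise model stays consistent
            c.append((fuel(p_lo + width) - fuel(p_lo)) / width)
            bounds.append((0.0, width))

    # power balance: all segment increments together cover the demand above the minimum outputs
    res = linprog(c, A_eq=np.ones((1, len(c))), b_eq=[demand - base],
                  bounds=bounds, method="highs")
    inc = res.x.reshape(len(units), n_seg).sum(axis=1)
    print("unit outputs:", [round(u["Pmin"] + p, 1) for u, p in zip(units, inc)])
    ```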

  9. On the Integrated Job Scheduling and Constrained Network Routing Problem

    DEFF Research Database (Denmark)

    Gamst, Mette

    This paper examines the NP-hard problem of scheduling a number of jobs on a finite set of machines such that the overall profit of executed jobs is maximized. Each job demands a number of resources, which must be sent to the executing machine via constrained paths. Furthermore, two resource demand...

  10. Statistical Methods in Integrative Genomics

    Science.gov (United States)

    Richardson, Sylvia; Tseng, George C.; Sun, Wei

    2016-01-01

    Statistical methods in integrative genomics aim to answer important biology questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531

  11. Integral methods in low-frequency electromagnetics

    CERN Document Server

    Solin, Pavel; Karban, Pavel; Ulrych, Bohus

    2009-01-01

    A modern presentation of integral methods in low-frequency electromagnetics This book provides state-of-the-art knowledge on integral methods in low-frequency electromagnetics. Blending theory with numerous examples, it introduces key aspects of the integral methods used in engineering as a powerful alternative to PDE-based models. Readers will get complete coverage of: The electromagnetic field and its basic characteristics An overview of solution methods Solutions of electromagnetic fields by integral expressions Integral and integrodifferential methods

  12. Constraining surface emissions of air pollutants using inverse modelling: method intercomparison and a new two-step two-scale regularization approach

    Energy Technology Data Exchange (ETDEWEB)

    Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))

    2011-07-15

    When constraining surface emissions of air pollutants using inverse modelling one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives into the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. Intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, detailed information can greatly change according to the method used ranging from smooth, isotropic and short range modifications to not so smooth, non-isotropic and long range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but for the Poisson case the emissions are naturally restricted to be positive and changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution

  13. Assessment of the Sustainable Development Capacity with the Entropy Weight Coefficient Method

    Directory of Open Access Journals (Sweden)

    Qingsong Wang

    2015-10-01

    Sustainable development is widely accepted in the world. How to reflect the sustainable development capacity of a region is an important issue for enacting policies and plans. An index system for capacity assessment is established by employing the Entropy Weight Coefficient method. The results indicate that the sustainable development capacity of Shandong Province is improving in terms of its economy, resource, and society subsystems whilst degrading in its environment subsystem. Shandong Province has shown a general trend towards sustainable development. However, its sustainable development capacity can be constrained by resources such as energy, land and water, as well as by environmental protection. These issues are induced by the economic development model, the security of energy supply, the level of new energy development, the end-of-pipe control of pollution, and the level of science and technology commercialization. Efforts are required to accelerate the development of the tertiary industry, the commercialization of high technology, the development of new energy and renewable energy, and the structural optimization of the energy mix. Long-term measures need to be established for ecosystem and environment protection.
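
    For illustration, a minimal sketch of the standard entropy weight coefficient calculation (indicator entropies turned into weights, then into a composite score); the indicator matrix is hypothetical and benefit-type indicators are assumed.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Entropy weight of each column (indicator) of a benefit-type indicator matrix X (rows = years)."""
        m, _ = X.shape
        Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)   # min-max normalisation
        P = (Xn + 1e-12) / (Xn + 1e-12).sum(axis=0)                          # proportions per indicator
        e = -(P * np.log(P)).sum(axis=0) / np.log(m)                         # entropy of each indicator
        d = 1.0 - e                                                          # degree of diversification
        return d / d.sum()

    # hypothetical matrix: 5 years x 4 indicators (economy, resources, society, environment)
    X = np.array([[1.2, 0.8, 3.1, 0.9],
                  [1.5, 0.7, 3.3, 0.8],
                  [1.9, 0.7, 3.6, 0.8],
                  [2.4, 0.6, 3.8, 0.7],
                  [2.9, 0.6, 4.1, 0.6]])
    w = entropy_weights(X)
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    print("indicator weights:", np.round(w, 3))
    print("composite capacity score per year:", np.round(Xn @ w, 3))
    ```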

  14. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    Directory of Open Access Journals (Sweden)

    Dayong Zhou

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.

  15. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP......) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration...... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
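
    The Riccati-based, structure-exploiting implementation described above is not reproduced here; as a generic illustration of the alternating direction method of multipliers itself, the sketch below applies ADMM splitting to a small dense box-constrained quadratic program of the kind the input-constrained problem condenses to (primal update by a linear solve, copy update by projection onto the box, then a dual update). Matrices, bounds and the penalty parameter are hypothetical.

    ```python
    import numpy as np

    def admm_box_qp(H, g, lo, hi, rho=1.0, iters=200):
        """Minimise 0.5*u'Hu + g'u subject to lo <= u <= hi with ADMM (u: primal, z: copy, y: scaled dual)."""
        n = len(g)
        z = np.zeros(n)
        y = np.zeros(n)
        L = np.linalg.cholesky(H + rho * np.eye(n))            # factor once, reuse every iteration
        for _ in range(iters):
            rhs = -g + rho * (z - y)
            u = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # u-update: unconstrained quadratic step
            z = np.clip(u + y, lo, hi)                         # z-update: projection onto the input box
            y = y + u - z                                      # dual update
        return z

    # hypothetical 3-input example
    H = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 2.0]])
    g = np.array([-8.0, -3.0, -3.0])
    print("constrained optimum:", np.round(admm_box_qp(H, g, lo=-1.0, hi=1.0), 3))
    ```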

  16. Stochastic risk-averse coordinated scheduling of grid integrated energy storage units in transmission constrained wind-thermal systems within a conditional value-at-risk framework

    International Nuclear Information System (INIS)

    Hemmati, Reza; Saboori, Hedayat; Saboori, Saeid

    2016-01-01

    In recent decades, wind power resources have been integrated into power systems increasingly. Besides confirmed benefits, utilization of a large share of this volatile source in the power generation portfolio has faced system operators with new challenges in terms of uncertainty management. It has been proved that energy storage systems are capable of handling the projected uncertainty concerns. Risk-neutral methods have been proposed in the previous literature to schedule storage units considering wind resource uncertainty. Ignoring the risk of cost distributions with undesirable properties may result in experiencing high costs in some unfavorable scenarios with high probability. In order to control the risk of the operator's decisions, this paper proposes a new risk-constrained two-stage stochastic programming model to make optimal decisions on energy storage and thermal units in a transmission-constrained hybrid wind-thermal power system. The risk-aversion procedure is explicitly formulated using the conditional value-at-risk measure, because it possesses distinguished features compared to other risk measures. The proposed model is a mixed integer linear program considering transmission network, thermal unit dynamics, and storage device constraints. The simulation results demonstrate that taking the risk of the problem into account affects scheduling decisions considerably, depending on the level of risk-aversion. - Highlights: • Risk of the operation decisions is handled by using risk-averse programming. • Conditional value-at-risk is used as risk measure. • Optimal risk level is obtained based on the cost/benefit analysis. • The proposed model is a two-stage stochastic mixed integer linear programming. • The unit commitment is integrated with ESSs and wind power penetration.
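
    For reference, conditional value-at-risk usually enters such a two-stage objective through the Rockafellar-Uryasev linearization shown below; the scenario costs C_s, probabilities π_s and the risk-weighting parameter β are assumed notation for illustration rather than the paper's.

    ```latex
    \mathrm{CVaR}_{\alpha}
      = \min_{\zeta}\Big\{\, \zeta + \frac{1}{1-\alpha}\sum_{s}\pi_{s}\max\big(C_{s}-\zeta,\,0\big) \Big\},
    \qquad
    \min_{x}\; (1-\beta)\sum_{s}\pi_{s}C_{s} \;+\; \beta\,\mathrm{CVaR}_{\alpha}.
    ```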

  17. Enhanced reconstruction of weighted networks from strengths and degrees

    International Nuclear Information System (INIS)

    Mastrandrea, Rossana; Fagiolo, Giorgio; Squartini, Tiziano; Garlaschelli, Diego

    2014-01-01

    Network topology plays a key role in many phenomena, from the spreading of diseases to that of financial crises. Whenever the whole structure of a network is unknown, one must resort to reconstruction methods that identify the least biased ensemble of networks consistent with the partial information available. A challenging case, frequently encountered due to privacy issues in the analysis of interbank flows and Big Data, is when there is only local (node-specific) aggregate information available. For binary networks, the relevant ensemble is one where the degree (number of links) of each node is constrained to its observed value. However, for weighted networks the problem is much more complicated. While the naïve approach prescribes to constrain the strengths (total link weights) of all nodes, recent counter-intuitive results suggest that in weighted networks the degrees are often more informative than the strengths. This implies that the reconstruction of weighted networks would be significantly enhanced by the specification of both strengths and degrees, a computationally hard and bias-prone procedure. Here we solve this problem by introducing an analytical and unbiased maximum-entropy method that works in the shortest possible time and does not require the explicit generation of reconstructed samples. We consider several real-world examples and show that, while the strengths alone give poor results, the additional knowledge of the degrees yields accurately reconstructed networks. Information-theoretic criteria rigorously confirm that the degree sequence, as soon as it is non-trivial, is irreducible to the strength sequence. Our results have strong implications for the analysis of motifs and communities and whenever the reconstructed ensemble is required as a null model to detect higher-order patterns

  18. Weight-Control Methods, 3-Year Weight Change, and Eating Behaviors: A Prospective Nationwide Study of Middle-Aged New Zealand Women.

    Science.gov (United States)

    Leong, Sook Ling; Gray, Andrew; Haszard, Jillian; Horwath, Caroline

    2016-08-01

    The effectiveness of women's weight-control methods and the influences of dieting on eating behaviors remain unclear. Our aim was to determine the association of various weight-control methods at baseline with weight change to 3 years, and examine the association between baseline weight-control status (trying to lose weight, trying to prevent weight gain or no weight-control attempts) and changes in intuitive eating and binge eating at 3 years. A nationally representative sample of 1,601 New Zealand women (40 to 50 years) was recruited and completed a self-administered questionnaire at baseline regarding the use of a variety of weight-control methods. Information on demographic characteristics, weight, height, food habits, binge eating, and intuitive eating was collected at baseline and 3 years. Linear and logistic regression models examined associations between both weight status and weight-control methods at baseline and weight change to 3 years; and baseline weight-control status and change in intuitive eating from baseline to 3 years and binge eating at 3 years. χ(2) tests were used to cross-sectionally compare food habits across the weight status categories at both baseline and 3 years. Trying to lose weight and the use of weight-control methods at baseline were not associated with change in body weight to 3 years. There were a few differences in the frequency of consumption of high-energy-density foods between those trying to lose or maintain weight and those not attempting weight control. Trying to lose weight at baseline was associated with a 2.0-unit (95% CI 0.7 to 3.4, P=0.003) reduction in intuitive eating scores by 3 years (potential range=21 to 105), and 224% higher odds of binge eating at 3 years (odds ratio=3.24; 95% CI 1.69 to 6.20). Dieting may reduce women's ability to recognize hunger and satiety cues and place women at increased risk of binge eating. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  19. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    Science.gov (United States)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and to exhibit good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
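
    In generic form, the comprehensive objective described above weights the two criteria as follows; the symbols are assumed for illustration and are not taken from the paper.

    ```latex
    F(\mathbf{x}) = w_{1}\,\Delta h_{\mathrm{loss}}(\mathbf{x}) + w_{2}\,\sigma_{c}(\mathbf{x}),
    \qquad w_{1}+w_{2}=1,\quad w_{1},\,w_{2}\ge 0,
    ```

    where Δh_loss is the total hydraulic loss and σ_c the cavitation coefficient of the candidate design x, so that changing w1 and w2 trades hydraulic efficiency against cavitation performance.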

  20. Reduction theorems for weighted integral inequalities on the cone of monotone functions

    International Nuclear Information System (INIS)

    Gogatishvili, A; Stepanov, V D

    2013-01-01

    This paper surveys results related to the reduction of integral inequalities involving positive operators in weighted Lebesgue spaces on the real semi-axis and valid on the cone of monotone functions, to certain more easily manageable inequalities valid on the cone of non-negative functions. The case of monotone operators is new. As an application, a complete characterization for all possible integrability parameters is obtained for a number of Volterra operators. Bibliography: 118 titles
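
    A representative example of the kind of inequality surveyed is the weighted Hardy-type inequality below, restricted to the cone of monotone functions; the display is a generic illustration with assumed notation, not a statement taken from the paper.

    ```latex
    \Biggl( \int_{0}^{\infty} \Bigl( \int_{0}^{x} f(t)\,dt \Bigr)^{q} u(x)\,dx \Biggr)^{1/q}
    \;\le\; C \Biggl( \int_{0}^{\infty} f(x)^{p}\,v(x)\,dx \Biggr)^{1/p}
    \qquad \text{for all non-negative, non-increasing } f,
    ```

    with the reduction theorems replacing the restriction to monotone f by equivalent inequalities over all non-negative functions with suitably modified weights.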

  1. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin

    2012-12-01

    Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommendation, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit the problem exactly, and the tests justify that our constraints improve the performance for both weighted and unweighted NMF algorithms under several different metrics. In particular, on the Movielens data with 94% of items, the Constrained NMF improves recall rate by 3% compared to SVD50 and 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.
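
    The constrained update rules of the paper are not reproduced here; the sketch below shows only the standard multiplicative updates (Frobenius loss) that such a constrained multiplicative framework builds on, with a hypothetical input matrix.

    ```python
    import numpy as np

    def nmf_multiplicative(V, rank, iters=500, eps=1e-9):
        """Standard multiplicative updates minimising ||V - W H||_F^2 with W, H >= 0."""
        rng = np.random.default_rng(0)
        m, n = V.shape
        W = rng.random((m, rank))
        H = rng.random((rank, n))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative steps preserve non-negativity,
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # which is where additional constraints would hook in
        return W, H

    V = np.abs(np.random.default_rng(1).random((6, 5)))   # hypothetical small rating-like matrix
    W, H = nmf_multiplicative(V, rank=2)
    print("reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 4))
    ```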

  2. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.

  3. Operator approach to solutions of the constrained BKP hierarchy

    International Nuclear Information System (INIS)

    Shen, Hsin-Fu; Lee, Niann-Chern; Tu, Ming-Hsien

    2011-01-01

    The operator formalism to the vector k-constrained BKP hierarchy is presented. We solve the Hirota bilinear equations of the vector k-constrained BKP hierarchy via the method of neutral free fermion. In particular, by choosing suitable group element of O(∞), we construct rational and soliton solutions of the vector k-constrained BKP hierarchy.

  4. Weighted inequalities for fractional integral operators and linear commutators in the Morrey-type spaces

    Directory of Open Access Journals (Sweden)

    Hua Wang

    2017-01-01

    In this paper, we first introduce some new Morrey-type spaces containing the generalized Morrey space and the weighted Morrey space with two weights as special cases. Then we give the weighted strong type and weak type estimates for fractional integral operators $I_{\alpha}$ in these new Morrey-type spaces. Furthermore, the weighted strong type estimate and endpoint estimate of the linear commutators $[b, I_{\alpha}]$ formed by $b$ and $I_{\alpha}$ are established. Also we study related problems about two-weight, weak type inequalities for $I_{\alpha}$ and $[b, I_{\alpha}]$ in the Morrey-type spaces and give partial results.
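
    For reference, the operators in question are the classical fractional integral and its commutator with a function b, in the standard form below (the paper's normalisation may differ).

    ```latex
    I_{\alpha}f(x) = \int_{\mathbb{R}^{n}} \frac{f(y)}{|x-y|^{\,n-\alpha}}\,dy, \qquad 0<\alpha<n,
    \qquad
    [b, I_{\alpha}]f(x) = \int_{\mathbb{R}^{n}} \frac{\bigl(b(x)-b(y)\bigr)\,f(y)}{|x-y|^{\,n-\alpha}}\,dy .
    ```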

  5. Modeling the solute transport by particle-tracing method with variable weights

    Science.gov (United States)

    Jiang, J.

    2016-12-01

    The particle-tracing method is usually used to simulate solute transport in fracture media. In this method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillations or gives a zero value of concentration. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The weight factors are adjusted during the simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies and each copy is simulated independently with the weight W/Int(W/C). If the weight W of a tracked particle is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weights scheme, we can eliminate the violent oscillations and increase the accuracy by orders of magnitude.
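
    As an illustration of the weight-adjustment rule described above, the sketch below implements the split-or-keep decision for a single tracked particle; the relative concentration value and the surrounding transport loop are hypothetical.

    ```python
    import random

    def adjust_weight(weight, c_site, rng=random):
        """Split or down-select a tracked particle at a site with relative concentration c_site.

        Returns the list of weights of the copies that continue; an empty list drops the particle."""
        if weight > c_site:
            n_copies = int(weight / c_site)           # Int(W/C) copies ...
            return [weight / n_copies] * n_copies     # ... each carrying weight W/Int(W/C)
        if rng.random() < weight / c_site:            # W <= C: keep with probability W/C ...
            return [c_site]                           # ... and raise the weight to C
        return []

    # toy usage: a particle of weight 1.0 reaching a low-concentration site (C = 0.3)
    print(adjust_weight(1.0, 0.3))    # three copies of weight ~0.333
    print(adjust_weight(0.1, 0.3))    # [] or [0.3]; kept with probability 1/3
    ```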

  6. On the origin of constrained superfields

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-05-06

    In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.

  7. CONSTRAINING MASS RATIO AND EXTINCTION IN THE FU ORIONIS BINARY SYSTEM WITH INFRARED INTEGRAL FIELD SPECTROSCOPY

    International Nuclear Information System (INIS)

    Pueyo, Laurent; Hillenbrand, Lynne; Hinkley, Sasha; Dekany, Richard; Roberts, Jenny; Vasisht, Gautam; Roberts, Lewis C. Jr.; Shao, Mike; Burruss, Rick; Cady, Eric; Oppenheimer, Ben R.; Brenner, Douglas; Zimmerman, Neil; Monnier, John D.; Crepp, Justin; Parry, Ian; Beichman, Charles; Soummer, Rémi

    2012-01-01

    We report low-resolution near-infrared spectroscopic observations of the eruptive star FU Orionis using the Integral Field Spectrograph (IFS) Project 1640 installed at the Palomar Hale telescope. This work focuses on elucidating the nature of the faint source, located 0.5″ south of FU Ori, and identified in 2003 as FU Ori S. We first use our observations in conjunction with published data to demonstrate that the two stars are indeed physically associated and form a true binary pair. We then proceed to extract J- and H-band spectro-photometry using the damped LOCI algorithm, a reduction method tailored for high contrast science with IFS. This is the first communication reporting the high accuracy of this technique, pioneered by the Project 1640 team, on a faint astronomical source. We use our low-resolution near-infrared spectrum in conjunction with 10.2 μm interferometric data to constrain the infrared excess of FU Ori S. We then focus on estimating the bulk physical properties of FU Ori S. Our models lead to estimates of a heavily reddened object, A_V = 8-12, with an effective temperature of ∼4000-6500 K. Finally, we put these results in the context of the FU Ori N-S system and argue that our analysis provides evidence that FU Ori S might be the more massive component of this binary system.

  8. CONSTRAINING MASS RATIO AND EXTINCTION IN THE FU ORIONIS BINARY SYSTEM WITH INFRARED INTEGRAL FIELD SPECTROSCOPY

    Energy Technology Data Exchange (ETDEWEB)

    Pueyo, Laurent [Johns Hopkins University, Department of Physics and Astronomy, 366 Bloomberg Center 3400 N. Charles Street, Baltimore, MD 21218 (United States); Hillenbrand, Lynne; Hinkley, Sasha; Dekany, Richard; Roberts, Jenny [Department of Astronomy, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Vasisht, Gautam; Roberts, Lewis C. Jr.; Shao, Mike; Burruss, Rick; Cady, Eric [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Oppenheimer, Ben R.; Brenner, Douglas; Zimmerman, Neil [American Museum of Natural History, Central Park West at 79th Street, New York, NY 10024 (United States); Monnier, John D. [Department of Astronomy, University of Michigan, 941 Dennison Building, 500 Church Street, Ann Arbor, MI 48109-1090 (United States); Crepp, Justin [Department of Physics, 225 Nieuwland Science Hall, University of Notre Dame, Notre Dame, IN 46556 (United States); Parry, Ian [University of Cambridge, Institute of Astronomy, Madingley Road, Cambridge, CB3, OHA (United Kingdom); Beichman, Charles [NASA Exoplanet Science Institute, 770 South Wilson Avenue, Pasadena, CA 91225 (United States); Soummer, Remi [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2012-09-20

    We report low-resolution near-infrared spectroscopic observations of the eruptive star FU Orionis using the Integral Field Spectrograph (IFS) Project 1640 installed at the Palomar Hale telescope. This work focuses on elucidating the nature of the faint source, located 0.5″ south of FU Ori, and identified in 2003 as FU Ori S. We first use our observations in conjunction with published data to demonstrate that the two stars are indeed physically associated and form a true binary pair. We then proceed to extract J- and H-band spectro-photometry using the damped LOCI algorithm, a reduction method tailored for high contrast science with IFS. This is the first communication reporting the high accuracy of this technique, pioneered by the Project 1640 team, on a faint astronomical source. We use our low-resolution near-infrared spectrum in conjunction with 10.2 μm interferometric data to constrain the infrared excess of FU Ori S. We then focus on estimating the bulk physical properties of FU Ori S. Our models lead to estimates of a heavily reddened object, A_V = 8-12, with an effective temperature of ∼4000-6500 K. Finally, we put these results in the context of the FU Ori N-S system and argue that our analysis provides evidence that FU Ori S might be the more massive component of this binary system.

  9. Conditions for the Solvability of the Linear Programming Formulation for Constrained Discounted Markov Decision Processes

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)

    2016-08-15

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.

  10. A Review on Methods of Risk Adjustment and their Use in Integrated Healthcare Systems

    Science.gov (United States)

    Juhnke, Christin; Bethge, Susanne

    2016-01-01

    Introduction: Effective risk adjustment is an aspect that is given more and more weight against the background of competitive health insurance systems and vital healthcare systems. The objective of this review was to obtain an overview of existing models of risk adjustment as well as of the crucial weights used in risk adjustment. Moreover, the predictive performance of selected methods in international healthcare systems should be analysed. Theory and methods: A comprehensive, systematic literature review on methods of risk adjustment was conducted in terms of an encompassing, interdisciplinary examination of the related disciplines. Results: In general, several distinctions can be made: in terms of risk horizons, in terms of risk factors, or in terms of the combination of indicators included. Within these, a further differentiation into three levels seems reasonable: methods based on mortality risks, methods based on morbidity risks, and those based on information on (self-reported) health status. Conclusions and discussion: The final examination of the different methods of risk adjustment showed that the methodology used to adjust risks varies. The models differ greatly in terms of their included morbidity indicators. The findings of this review can be used in the evaluation of integrated healthcare delivery systems and can be integrated into quality- and patient-oriented reimbursement of care providers in the design of healthcare contracts. PMID:28316544

  11. A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System

    Science.gov (United States)

    Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang

    2018-01-01

    This paper proposes a new decomposition method for the security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is to use Ward equivalencing-based network reduction to reduce the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is to develop a price response function to exchange signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in less iterations and less computational time. The case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.

  12. Constraining Unsaturated Hydraulic Parameters Using the Latin Hypercube Sampling Method and Coupled Hydrogeophysical Approach

    Science.gov (United States)

    Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.

    2017-12-01

    The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. The van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters and (3) correct the influence of subsurface temperature fluctuations during the infiltration experiment on the electrical resistivity data. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and with water mass recovery.
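
    As an illustration of the sampling step only, the sketch below draws van Genuchten-Mualem parameter sets with SciPy's Latin hypercube sampler; the parameter ranges are hypothetical and the coupled geoelectrical forward model is not included.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # hypothetical prior ranges for the van Genuchten-Mualem parameters
    # (Ks is sampled in log10 space and back-transformed afterwards)
    lo = np.array([np.log10(1e-6), 1.1, 0.01, 0.5])   # log10 Ks [m/s], n [-], theta_r [-], alpha [1/m]
    hi = np.array([np.log10(1e-4), 3.0, 0.10, 5.0])

    sampler = qmc.LatinHypercube(d=4, seed=42)
    unit = sampler.random(n=100)                      # 100 samples covering each marginal evenly
    samples = qmc.scale(unit, lo, hi)
    samples[:, 0] = 10.0 ** samples[:, 0]             # Ks back to linear scale

    # each row is one candidate parameter set to be fed to the coupled
    # hydrogeophysical forward model and scored against the observed VES data
    print(samples[:3])
    ```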

  13. A Spatially Constrained Multi-autoencoder Approach for Multivariate Geochemical Anomaly Recognition

    Science.gov (United States)

    Lirong, C.; Qingfeng, G.; Renguang, Z.; Yihui, X.

    2017-12-01

    Separating and recognizing geochemical anomalies from the geochemical background is one of the key tasks in geochemical exploration. Many methods have been developed, such as the mean ± 2 standard deviations criterion and fractal/multifractal models. In recent years, the deep autoencoder, a deep learning approach, has been used for multivariate geochemical anomaly recognition. While able to deal with the non-normal distributions of geochemical concentrations and the non-linear relationships among them, this self-supervised learning method does not take into account the spatial heterogeneity of the geochemical background or the uncertainty induced by the randomly initialized weights of neurons, leading to ineffective recognition of weak anomalies. In this paper, we introduce a spatially constrained multi-autoencoder (SCMA) approach for multivariate geochemical anomaly recognition, which includes two steps: spatial partitioning and anomaly score computation. The first step divides the study area into multiple sub-regions to segregate the geochemical background, by grouping the geochemical samples through K-means clustering, spatial filtering, and spatial constraining rules. In the second step, for each sub-region, a group of autoencoder neural networks is constructed with an identical structure but different initial weights on the neurons. Each autoencoder is trained using the geochemical samples within the corresponding sub-region to learn the sub-regional geochemical background. The best autoencoder of a group is chosen as the final model for the corresponding sub-region. The anomaly score at each location can then be calculated as the Euclidean distance between the observed concentrations and the reconstructed concentrations of the geochemical elements. The experiments using the geochemical data and Fe deposits in the southwestern Fujian province of China showed that our SCMA approach greatly improved the recognition of weak anomalies, achieving the AUC of 0.89, compared
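
    As a rough illustration of the anomaly score only, the sketch below trains a single stand-in autoencoder (scikit-learn's MLPRegressor fitted to reproduce its input) and scores samples by the Euclidean distance between observed and reconstructed values; the spatial partitioning and multi-autoencoder selection of the SCMA approach are not reproduced, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=0.0, sigma=0.4, size=(500, 8))    # synthetic 8-element geochemical samples

    Xs = StandardScaler().fit_transform(np.log(X))           # log-transform + standardise concentrations

    # an MLP trained to reproduce its own input acts as a simple autoencoder (bottleneck of 3 units)
    ae = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh", max_iter=2000, random_state=1)
    ae.fit(Xs, Xs)

    recon = ae.predict(Xs)
    score = np.linalg.norm(Xs - recon, axis=1)               # anomaly score = Euclidean reconstruction error
    print("indices of the 5 most anomalous samples:", np.argsort(score)[-5:])
    ```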

  14. An integrated video- and weight-monitoring system for the surveillance of highly enriched uranium blend down operations

    International Nuclear Information System (INIS)

    Lenarduzzi, R.; Castleberry, K.; Whitaker, M.; Martinez, R.

    1998-01-01

    An integrated video-surveillance and weight-monitoring system has been designed and constructed for tracking the blending down of weapons-grade uranium by the US Department of Energy. The instrumentation is being used by the International Atomic Energy Agency in its task of tracking and verifying the blended material at the Portsmouth Gaseous Diffusion Plant, Portsmouth, Ohio. The weight instrumentation developed at the Oak Ridge National Laboratory monitors and records the weight of cylinders of the highly enriched uranium as their contents are fed into the blending facility, while the video equipment provided by Sandia National Laboratory records periodic and event-triggered images of the blending area. A secure data network between the scales, cameras, and computers ensures data integrity and eliminates the possibility of tampering. The details of the weight-monitoring instrumentation, the video- and weight-system interaction, and the secure data network are discussed

  15. Integral equation methods for electromagnetics

    CERN Document Server

    Volakis, John

    2012-01-01

    This text/reference is a detailed look at the development and use of integral equation methods for electromagnetic analysis, specifically for antennas and radar scattering. Developers and practitioners will appreciate the broad-based approach to understanding and utilizing integral equation methods and the unique coverage of historical developments that led to the current state-of-the-art. In contrast to existing books, Integral Equation Methods for Electromagnetics lays the groundwork in the initial chapters so students and basic users can solve simple problems and work their way up to the mo

  16. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field as well as the FE calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number
  17. On a New Family of Kalman Filter Algorithms for Integrated Navigation

    Science.gov (United States)

    Mahboub, V.; Saadatseresht, M.; Ardalan, A. A.

    2017-09-01

    Here we present a review of a new family of Kalman filter algorithms which were recently developed for integrated navigation. They are particularly useful for vision-based navigation due to the type of data involved. We mainly focus on three algorithms, namely the weighted Total Kalman filter (WTKF), the integrated Kalman filter (IKF) and the constrained integrated Kalman filter (CIKF). The common characteristic of these algorithms is that they can account for the neglected random observed quantities which may appear in the dynamic model. Moreover, our approach makes use of condition equations and straightforward variance propagation rules. The WTKF algorithm can deal with problems with arbitrary weight matrices. Both the observation equations and the system equations can be dynamic errors-in-variables (DEIV) models in the IKF algorithm. In some problems a quadratic constraint may exist; such problems can be solved by the CIKF algorithm. Finally, we compare the four algorithms WTKF, IKF, CIKF and EKF in numerical examples.

  18. Increased power to weight ratio of piezoelectric energy harvesters through integration of cellular honeycomb structures

    International Nuclear Information System (INIS)

    Chandrasekharan, N; Thompson, L L

    2016-01-01

    The limitations posed by batteries have compelled the need to investigate energy harvesting methods to power small electronic devices that require very low operational power. Vibration based energy harvesting methods with piezoelectric transduction in particular has been shown to possess potential towards energy harvesters replacing batteries. Current piezoelectric energy harvesters exhibit considerably lower power to weight ratio or specific power when compared to batteries the harvesters seek to replace. To attain the goal of battery-less self-sustainable device operation the power to weight ratio gap between piezoelectric energy harvesters and batteries need to be bridged. In this paper the potential of integrating lightweight honeycomb structures with existing piezoelectric device configurations (bimorph) towards achieving higher specific power is investigated. It is shown in this study that at low excitation frequency ranges, replacing the solid continuous substrate of conventional bimorph with honeycomb structures of the same material results in a significant increase in power to weight ratio of the piezoelectric harvester. At higher driving frequency ranges it is shown that unlike the traditional piezoelectric bimorph with solid continuous substrate, the honeycomb substrate bimorph can preserve optimum global design parameters through manipulation of honeycomb unit cell parameters. Increased operating lifetime and design flexibility of the honeycomb core piezoelectric bimorph is demonstrated as unit cell parameters of the honeycomb structures can be manipulated to alter mass and stiffness properties of the substrate, resulting in unit cell parameter significantly influencing power generation. (paper)

  19. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    Directory of Open Access Journals (Sweden)

    Marcin Piotr Pawlowski

    2015-10-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. A random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for the temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.
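
    As an illustration of the least-significant-bits concatenation idea, the sketch below collects the lowest-order bits of successive sensor samples into raw bytes; the sensor read function is a hypothetical stand-in for the on-board ADC, and the statistical fine tuning described above is omitted.

    ```python
    import random

    def read_sensor():
        """Hypothetical stand-in for an on-board ADC read (temperature, humidity or light)."""
        return random.getrandbits(12)                 # 12-bit raw sample

    def harvest_bytes(n_bytes, lsb_per_sample=2):
        """Concatenate the least-significant bits of successive sensor samples into raw bytes."""
        bits = []
        while len(bits) < n_bytes * 8:
            sample = read_sensor()
            for i in range(lsb_per_sample):           # keep only the noisiest, lowest-order bits
                bits.append((sample >> i) & 1)
        out = bytearray()
        for i in range(n_bytes):
            byte = 0
            for b in bits[i * 8:(i + 1) * 8]:
                byte = (byte << 1) | b
            out.append(byte)
        return bytes(out)

    print(harvest_bytes(16).hex())                    # e.g. 128 bits of seed material for a DRBG
    ```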

  20. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    Science.gov (United States)

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357

  1. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    Science.gov (United States)

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.

  2. Diverse methods for integrable models

    NARCIS (Netherlands)

    Fehér, G.

    2017-01-01

    This thesis is centered around three topics, sharing integrability as a common theme. This thesis explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics. The last chapter describes an integrable quantum chain.

  3. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods.

    Science.gov (United States)

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E; Re, Matteo

    2014-06-01

    In the context of "network medicine", gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works proposed to integrate multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic studies focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim at providing an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the considered 708 medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, and the recently proposed kernelized score functions. The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different "informativeness" embedded in different functional networks, outperforms unweighted integration at 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further bio-medical investigation. Network integration is necessary to boost the performances of gene prioritization methods. Moreover the methods based on kernelized score functions can further enhance disease gene ranking results, by adopting both

  4. Optimal dispatch in dynamic security constrained open power market

    International Nuclear Information System (INIS)

    Singh, S.N.; David, A.K.

    2002-01-01

    Power system security is a new concern in competitive power market operation, because the integration between the system controller and the generation owners has been broken. This paper presents an approach for dynamic-security-constrained optimal dispatch in a restructured power market environment. The transient energy margin based on the transient energy function (TEF) approach is used to calculate the stability margin of the system, and a hybrid method is applied to calculate the approximate unstable equilibrium point (UEP), which is then used to calculate the exact UEP and thus the energy margin using the TEF. The case study results, illustrated on two systems, show that the operating mechanisms are compatible with the new business environment. (author)

  5. A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System

    Directory of Open Access Journals (Sweden)

    Yanzhe Hu

    2017-03-01

    Full Text Available As a type of renewable energy, wind energy is integrated into the power system with more and more penetration levels. It is challenging for the power system operators (PSOs) to cope with the uncertainty and variation of the wind power and its forecasts. A chance-constrained economic dispatch (ED) model for the wind-thermal-energy storage system (WTESS) is developed in this paper. An optimization model with the wind power and the energy storage system (ESS) is first established with the consideration of both the economic benefits of the system and less wind curtailments. The original wind power generation is processed by the ESS to obtain the final wind power output generation (FWPG). A Gaussian mixture model (GMM) distribution is adopted to characterize the probabilistic and cumulative distribution functions with an analytical expression. Then, a chance-constrained ED model integrated by the wind-energy storage system (W-ESS) is developed by considering both the overestimation costs and the underestimation costs of the system and solved by the sequential linear programming method. Numerical simulation results using the wind power data in four wind farms are performed on the developed ED model with the IEEE 30-bus system. It is verified that the developed ED model is effective to integrate the uncertain and variable wind power. The GMM distribution could accurately fit the actual distribution of the final wind power output, and the ESS could help effectively decrease the operation costs.
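
    The Gaussian mixture model step can be illustrated as follows: a GMM is fitted to samples of the final wind power output and its analytical CDF is then available for evaluating chance constraints. The data, component count and parameters below are placeholders, not the data or settings of the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Hypothetical final wind power output samples (MW); placeholder data only.
fwpo = np.random.rand(5000, 1) * 100.0

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(fwpo)

def gmm_cdf(x, gmm):
    """Analytical CDF of a 1-D Gaussian mixture, usable in chance constraints."""
    weights = gmm.weights_
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_).ravel()
    return float(sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, means, stds)))

print(gmm_cdf(50.0, gmm))  # probability that output does not exceed 50 MW
```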

  6. Polymer quantum mechanics some examples using path integrals

    International Nuclear Information System (INIS)

    Parra, Lorena; Vergara, J. David

    2014-01-01

    In this work we analyze several physical systems in the context of polymer quantum mechanics using path integrals. First we introduce the group averaging method to quantize constrained systems with path integrals and later we use this procedure to compute the effective actions for the polymer non-relativistic particle and the polymer harmonic oscillator. We analyze the measure of the path integral and we describe the semiclassical dynamics of the systems

  7. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since they have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. The stroke width of the connected components was calculated from the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.
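
    The stroke-width estimate from triangle altitudes can be illustrated with plain geometry: for thin triangles lying inside a stroke, the altitude over the longest edge approximates the local stroke width. This is an illustrative reading of the idea, not the authors' implementation.

```python
import numpy as np

def triangle_altitude(p1, p2, p3):
    """Altitude over the longest edge of a triangle (approximate local stroke width)."""
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    s = 0.5 * (a + b + c)
    area = max(s * (s - a) * (s - b) * (s - c), 0.0) ** 0.5   # Heron's formula
    longest = max(a, b, c)
    return 2.0 * area / longest if longest > 0 else 0.0

print(triangle_altitude(np.array([0.0, 0.0]),
                        np.array([4.0, 0.0]),
                        np.array([2.0, 1.0])))  # -> 1.0
```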

  8. Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation.

    Science.gov (United States)

    Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal

    2017-04-20

    This work deals with robot-sensor network cooperation where sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques consider beacons as passive devices disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It is capable of exploiting robot-beacon cooperation in order to improve SLAM accuracy while meeting a given resource consumption bound expressed as the maximum number of measurements that are integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, and a distributed information-driven measurement allocation tool that dynamically selects the measurements that are integrated in SLAM, balancing uncertainty improvement and resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, resulting in significant estimation accuracies, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results obtained show its performance and robustness and evidence its advantages over existing methods.

  9. Constrained bidirectional propagation and stroke segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Mori, S; Gillespie, W; Suen, C Y

    1983-03-01

    A new method for decomposing a complex figure into its constituent strokes is described. This method, based on constrained bidirectional propagation, is suitable for parallel processing. Examples of its application to the segmentation of Chinese characters are presented. 9 references.

  10. The Extraction of Road Boundary from Crowdsourcing Trajectory Using Constrained Delaunay Triangulation

    Directory of Open Access Journals (Sweden)

    YANG Wei

    2017-02-01

    Full Text Available Accurately extracting road boundaries from crowdsourced trajectory lines is still difficult. Therefore, this study presents a new approach that uses vehicle trajectory lines to extract road boundaries. First, a constrained Delaunay triangulation is constructed within the interpolated track lines to calculate road boundary descriptors from the triangle edge lengths and Voronoi cells. A road boundary recognition model is established by integrating the two boundary descriptors. Then, based on seed polygons, a region-growing method is proposed to extract the road boundary. Finally, taxi GPS traces in Beijing were used to verify the validity of the novel method, and the results showed that the method is suitable for GPS traces with varying density, complex road structures and different time intervals.

  11. Constraining Basin Depth and Fault Displacement in the Malombe Basin Using Potential Field Methods

    Science.gov (United States)

    Beresh, S. C. M.; Elifritz, E. A.; Méndez, K.; Johnson, S.; Mynatt, W. G.; Mayle, M.; Atekwana, E. A.; Laó-Dávila, D. A.; Chindandali, P. R. N.; Chisenga, C.; Gondwe, S.; Mkumbwa, M.; Kalaguluka, D.; Kalindekafe, L.; Salima, J.

    2017-12-01

    The Malombe Basin is part of the Malawi Rift, which forms the southern part of the Western Branch of the East African Rift System. At its southern end, the Malawi Rift bifurcates into the Bilila-Mtakataka and Chirobwe-Ntcheu fault systems and the Lake Malombe Rift Basin around the Shire Horst, a competent block under the Nankumba Peninsula. The Malombe Basin is approximately 70 km from north to south and 35 km at its widest point from east to west, bounded by reversing-polarity border faults. We aim to constrain the depth of the basin to better understand the displacement of each border fault. Our work utilizes two east-west gravity profiles across the basin coupled with Source Parameter Imaging (SPI) derived from a high-resolution aeromagnetic survey. The first gravity profile was done across the northern portion of the basin and the second across the southern portion. Gravity and magnetic data will be used to constrain basement depths and the thickness of the sedimentary cover. Additionally, Shuttle Radar Topography Mission (SRTM) data is used to understand the topographic expression of the fault scarps. Estimates for the minimum displacement of the border faults on either side of the basin were made by adding the elevation of the scarps to the deepest SPI basement estimates at the basin borders. Our preliminary results using SPI and SRTM data show a minimum displacement of approximately 1.3 km for the western border fault; the minimum displacement for the eastern border fault is 740 m. However, SPI merely shows the depth to the first significantly magnetic layer in the subsurface, which may or may not be the actual basement layer. Gravimetric readings are based on subsurface density and thus circumvent issues arising from magnetic layers located above the basement; we therefore expect to constrain the basin depth more accurately by integrating the gravity profiles. Through more accurate basement depth estimates we also gain more accurate displacement

  12. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  13. Overweight and weight dissatisfaction related to socio-economic position, integration and dietary indicators among south Asian immigrants in Oslo.

    Science.gov (United States)

    Råberg, Marte; Kumar, Bernadette; Holmboe-Ottesen, Gerd; Wandel, Margareta

    2010-05-01

    To investigate how socio-economic position, demographic factors, degree of integration and dietary indicators are related to BMI/waist:hip ratio (WHR) and to weight dissatisfaction and slimming among South Asians in Oslo, Norway. Cross-sectional study consisting of a health check including anthropometric measures and two self-administered questionnaires. Oslo, Norway. Pakistanis and Sri Lankans (n 629), aged 30-60 years, residing in Oslo. BMI was positively associated with female gender (P = 0.004) and Pakistani origin (P integration (measured by a composite index, independent of duration of residence; P = 0.017). One-third of those with normal weight and most of those obese were dissatisfied with their weight. Among these, about 40 % had attempted to slim during the past year. Dissatisfaction with weight was positively associated with education in women (P = 0.006) and with integration in men (P = 0.026), and inversely associated with physical activity (P = 0.044) in men. Women who had made slimming attempts had breakfast and other meals less frequently than others (P < 0.05). Weight dissatisfaction exists among South Asian immigrants. More research is needed regarding bodily dissatisfaction and the relationship between perception of weight and weight-change attempts among immigrants in Norway, in order to prevent and treat both obesity and eating disorders.

  14. IW-Scoring: an Integrative Weighted Scoring framework for annotating and prioritizing genetic variations in the noncoding genome.

    Science.gov (United States)

    Wang, Jun; Dayem Ullah, Abu Z; Chelala, Claude

    2018-01-30

    The vast majority of germline and somatic variations occur in the noncoding part of the genome, only a small fraction of which are believed to be functional. From the tens of thousands of noncoding variations detectable in each genome, identifying and prioritizing driver candidates with putative functional significance is challenging. To address this, we implemented IW-Scoring, a new Integrative Weighted Scoring model to annotate and prioritise functionally relevant noncoding variations. We evaluate 11 scoring methods, and apply an unsupervised spectral approach for subsequent selective integration into two linear weighted functional scoring schemas for known and novel variations. IW-Scoring produces stable high-quality performance as the best predictors for three independent data sets. We demonstrate the robustness of IW-Scoring in identifying recurrent functional mutations in the TERT promoter, as well as disease SNPs in proximity to consensus motifs and with gene regulatory effects. Using follicular lymphoma as a paradigmatic cancer model, we apply IW-Scoring to locate 11 recurrently mutated noncoding regions in 14 follicular lymphoma genomes, and validate 9 of these regions in an extension cohort, including the promoter and enhancer regions of PAX5. Overall, IW-Scoring demonstrates greater versatility in identifying trait- and disease-associated noncoding variants. Scores from IW-Scoring as well as other methods are freely available from http://www.snp-nexus.org/IW-Scoring/. © The Author(s) 2018. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Controlling for the use of extreme weights in bank efficiency assessments during the financial crisis

    DEFF Research Database (Denmark)

    Asmild, Mette; Zhu, Minyan

    2016-01-01

    We propose a method for bank efficiency assessment, based on weight restricted DEA, that limits banks’ abilities to use extreme weights, corresponding to extreme judgements of the risk adjusted prices on funding sources and assets. Based on a data set comprising the largest European banks during...... the financial crisis, we illustrate the impact of the proposed weight restrictions in two different efficiency models; one related to banks’ funding mix and one related to their asset mix. The results show that using a more balanced set of weights tend to reduce the estimated efficiency scores more for those...... banks which were bailed out during the crisis, which confirms the potential bias within standard DEA that does not control for extreme weights applied by highly risky banks. We discuss the use of the proposed method as a regulatory tool to constrain discretion when complying with regulatory capital...

  16. Remaining useful life prediction based on noisy condition monitoring signals using constrained Kalman filter

    International Nuclear Information System (INIS)

    Son, Junbo; Zhou, Shiyu; Sankavaram, Chaitanya; Du, Xinyu; Zhang, Yilu

    2016-01-01

    In this paper, a statistical prognostic method to predict the remaining useful life (RUL) of individual units based on noisy condition monitoring signals is proposed. The prediction accuracy of existing data-driven prognostic methods depends on the capability of accurately modeling the evolution of condition monitoring (CM) signals. Therefore, it is inevitable that the RUL prediction accuracy depends on the amount of random noise in CM signals. When signals are contaminated by a large amount of random noise, RUL prediction even becomes infeasible in some cases. To mitigate this issue, a robust RUL prediction method based on a constrained Kalman filter is proposed. The proposed method models the CM signals subject to a set of inequality constraints so that satisfactory prediction accuracy can be achieved regardless of the noise level of the signal evolution. The advantageous features of the proposed RUL prediction method are demonstrated by both a numerical study and a case study with real-world data from automotive lead-acid batteries. - Highlights: • A computationally efficient constrained Kalman filter is proposed. • Proposed filter is integrated into an online failure prognosis framework. • A set of proper constraints significantly improves the failure prediction accuracy. • Promising results are reported in the application of battery failure prognosis.
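
    One common way to impose inequality constraints on a Kalman state estimate is to project the unconstrained update onto the feasible set. The sketch below uses simple box constraints and clipping as a stand-in for that projection; it illustrates the general idea only and is not the specific constrained filter of the paper (general linear inequality constraints would require a quadratic-programming projection).

```python
import numpy as np

def project_to_constraints(x, lower, upper):
    """Project a state estimate onto box (inequality) constraints."""
    return np.clip(x, lower, upper)

def constrained_update(x_pred, P_pred, z, H, R, lower, upper):
    """Standard Kalman update followed by projection of the state onto the constraints."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (z - H @ x_pred)
    P_upd = (np.eye(len(x_upd)) - K @ H) @ P_pred
    return project_to_constraints(x_upd, lower, upper), P_upd
```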

  17. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.

  18. A new method of calculation of thyroid weight, using computed tomography

    International Nuclear Information System (INIS)

    Sugimura, Kazuro; Matsuo, Michimasa; Sugimura, Chie; Nishiyama, Shoji; Narabayashi, Isamu; Kimura, Shuji

    1983-01-01

    The weight of the thyroid gland is an important factor for determining the dose of radioactive iodine used for the management of hyperthyroidism. Various methods employing scintigraphic images have been used to estimate the thyroid weight, but the error of these methods has been greater than 40 per cent. In this study, a new technique has been developed for more accurate estimation of the weight of the thyroid gland, employing a three-dimensional reconstruction system with simultaneous calculation of the thyroid volume from CT images. With this technique, the volume of a thyroid phantom could be calculated with less than 9.4 per cent error. A CT scan interval of 10 mm was adequate for satisfactory measurement. In 18 patients who had undergone thyroidectomy, the thyroid weight estimated by our technique was compared with the actual weight of the excised specimen. There was a satisfactory correlation, with an error of 11.3 ± 7.5 per cent. It is concluded that our technique provides a more accurate estimation of the weight of the thyroid gland than any other method previously employed. (author)
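
    The volume-to-weight step amounts to summing slice areas times the slice interval and multiplying by a tissue density. The numbers and the density value below are illustrative assumptions, not data from the study.

```python
# Cross-sectional thyroid areas (cm^2) measured on consecutive CT slices (hypothetical values),
# with the 10 mm (1.0 cm) slice interval recommended above.
areas_cm2 = [0.8, 2.1, 3.4, 3.9, 3.2, 1.7, 0.6]
slice_interval_cm = 1.0
density_g_per_cm3 = 1.05  # assumed soft-tissue density

volume_cm3 = sum(areas_cm2) * slice_interval_cm   # slice-wise summation of partial volumes
weight_g = volume_cm3 * density_g_per_cm3
print(f"Estimated thyroid volume: {volume_cm3:.1f} cm^3, weight: {weight_g:.1f} g")
```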

  19. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  20. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  1. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations

  2. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
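
    Both preprocessing ingredients named above, the Box-Cox transformation and Latin hypercube sampling, are available in SciPy. The sketch below draws Latin hypercube samples of the disturbance space, evaluates a placeholder performance model and applies the Box-Cox transformation; the bounds, sample size and model are illustrative assumptions, and plain (not orthogonal-array-modified) LHS is used.

```python
import numpy as np
from scipy.stats import boxcox, qmc

# 1) Latin hypercube samples of three disturbance parameters
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=200)
disturbances = qmc.scale(unit_samples,
                         l_bounds=[-0.1, -0.1, -0.1],
                         u_bounds=[0.1, 0.1, 0.1])

# 2) Hypothetical circuit performance simulated at each sample (placeholder model)
rng = np.random.default_rng(0)
performance = 1.0 + disturbances @ np.array([0.5, -0.3, 0.2]) + 0.01 * rng.standard_normal(200)

# 3) Box-Cox transformation towards normality (requires strictly positive data)
transformed, lam = boxcox(performance)
print(f"Estimated Box-Cox lambda: {lam:.3f}")
```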

  3. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi'an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  4. Isotropic non-white matter partial volume effects in constrained spherical deconvolution

    Directory of Open Access Journals (Sweden)

    Timo eRoine

    2014-03-01

    Full Text Available Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a noninvasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple nonparallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50 % of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50 % GM volume for maximum spherical harmonics orders of 8 and below, and already with 25 % GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. We suggest acquiring data with high diffusion-weighting 2500-3000 s/mm2, reasonable SNR (~30) and using lower SH orders in GM contaminated regions to minimize the non-WM PVEs in CSD.

  5. Isotropic non-white matter partial volume effects in constrained spherical deconvolution.

    Science.gov (United States)

    Roine, Timo; Jeurissen, Ben; Perrone, Daniele; Aelterman, Jan; Leemans, Alexander; Philips, Wilfried; Sijbers, Jan

    2014-01-01

    Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a non-invasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple non-parallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. We suggest acquiring data with high diffusion-weighting 2500-3000 s/mm(2), reasonable SNR (~30) and using lower SH orders in GM contaminated regions to minimize the non-WM PVEs in CSD.

  6. A method for determining customer requirement weights based on TFMF and TLR

    Science.gov (United States)

    Ai, Qingsong; Shu, Ting; Liu, Quan; Zhou, Zude; Xiao, Zheng

    2013-11-01

    'Customer requirements' (CRs) management plays an important role in enterprise systems (ESs) by processing customer-focused information. Quality function deployment (QFD) is one of the main CR analysis methods. Because CR weights are crucial for the input of QFD, we developed a method for determining CR weights based on the trapezoidal fuzzy membership function (TFMF) and 2-tuple linguistic representation (TLR). To improve the accuracy of CR weights, we propose to apply TFMFs to describe CR weights so that they can be appropriately represented. Because fuzzy logic is not capable of aggregating information without loss, the TLR model is adopted as well. We first describe the basic concepts of TFMF and TLR and then introduce an approach to compute CR weights. Finally, an example is provided to explain and verify the proposed method.
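
    A trapezoidal fuzzy membership function is fully determined by four break points. The sketch below evaluates such a function; the break points and the example fuzzy set are illustrative assumptions, not values from the paper.

```python
def trapezoidal_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership function defined by break points a <= b <= c <= d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge, c < x < d

# Membership of a customer-assigned importance score of 6.5
# in a hypothetical "high importance" fuzzy set
print(trapezoidal_membership(6.5, a=5, b=7, c=8, d=10))  # -> 0.75
```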

  7. Influence of the thermal treatment on the stability of partially constrained recovery of NiTi actuator wire

    International Nuclear Information System (INIS)

    Mertmann, M.; Bracke, A.; Hornbogen, E.

    1995-01-01

    NiTi shape memory wire may be used for actuation purposes in flexible robotic grippers, which have to be able to handle objects of different size, shape or weight. Therefore it is advantageous to develop an electrically driven shape memory actuator, which may perform any combination of shape change and exerted force within the following limiting boundaries: - free recovery: gripping of a very small and lightweight object, - constrained recovery: gripping of an object with maximum size and weight. Several NiTi actuator wires are fabricated and annealed between 400 and 600 °C after cold working in the martensitic state. After prestraining, each wire is embedded in a silicone matrix material. The polymer works as a bias spring and is able to store elastic deformation energy. This paper investigates the influence of the thermal treatment on the stability of the exerted force between the two boundaries of completely free and constrained recovery, the "partially constrained recovery". The stability of recovery strain and stress is measured in a test assembly, in which different modes of partially constrained recovery are simulated. The work is supplemented by dilatometric measurements carried out with each actuator wire before and after the test procedure. (orig.)

  8. Stochastic weighted particle methods for population balance equations

    International Nuclear Information System (INIS)

    Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus

    2011-01-01

    Highlights: → Weight transfer functions for Monte Carlo simulation of coagulation. → Efficient support for single-particle growth processes. → Comparisons to analytic solutions and soot formation problems. → Better numerical accuracy for less common particles. - Abstract: A class of coagulation weight transfer functions is constructed, each member of which leads to a stochastic particle algorithm for the numerical treatment of population balance equations. These algorithms are based on systems of weighted computational particles and the weight transfer functions are constructed such that the number of computational particles does not change during coagulation events. The algorithms also facilitate the simulation of physical processes that change single particles, such as growth, or other surface reactions. Four members of the algorithm family have been numerically validated by comparison to analytic solutions to simple problems. Numerical experiments have been performed for complex laminar premixed flame systems in which members of the class of stochastic weighted particle methods were compared to each other and to a direct simulation algorithm. Two of the weighted algorithms have been shown to offer performance advantages over the direct simulation algorithm in situations where interest is focused on the larger particles in a system. The extent of this advantage depends on the particular system and on the quantities of interest.

  9. Comparison of preconditioned Krylov subspace iteration methods for PDE-constrained optimization problems - Poisson and convection-diffusion control

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Farouq, S.; Neytcheva, M.

    2016-01-01

    Vol. 73, No. 3 (2016), pp. 631-633 ISSN 1017-1398 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords: PDE-constrained optimization problems * finite elements * iterative solution methods Subject RIV: BA - General Mathematics Impact factor: 1.241, year: 2016 http://link.springer.com/article/10.1007%2Fs11075-016-0111-1

  10. Comparison of Multi-Criteria Decision Support Methods for Integrated Rehabilitation Prioritization

    Directory of Open Access Journals (Sweden)

    Franz Tscheikner-Gratl

    2017-01-01

    Full Text Available The decisions taken in rehabilitation planning for urban water networks will have a long-lasting impact on the functionality and quality of future services provided by urban infrastructure. These decisions can be assisted by different approaches, ranging from linear depreciation for estimating the economic value of the network, through deterioration models that assess the probability of failure or the technical service life, to sophisticated multi-criteria decision support systems. The aim of this paper is therefore to compare five available multi-criteria decision-making (MCDM) methods (ELECTRE, AHP, WSM, TOPSIS, and PROMETHEE) for application in an integrated rehabilitation management scheme for a real-world case study, and to analyze them with respect to their suitability for integrated asset management of water systems. The results of the different methods are not equal. This occurs because the chosen score scales, the weights and the resulting distributions of the scores within the criteria do not have the same impact on all the methods. Independently of the method used, the decision maker must be familiar with its strengths as well as its weaknesses. Therefore, in some cases, it would be rational to use one of the simplest methods. However, to check for consistency and increase the reliability of the results, the application of several methods is encouraged.
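
    The simplest of the compared methods, the weighted sum model (WSM), reduces to a matrix-vector product once the criteria scores are normalized. The candidate scores and weights below are hypothetical and serve only to show the ranking step.

```python
import numpy as np

# Rows = rehabilitation candidates, columns = criteria scores normalized to [0, 1]
# (hypothetical values for illustration only).
scores = np.array([
    [0.8, 0.4, 0.6],   # sewer section A
    [0.5, 0.9, 0.3],   # sewer section B
    [0.6, 0.7, 0.8],   # sewer section C
])
weights = np.array([0.5, 0.3, 0.2])    # criteria weights summing to 1

wsm_scores = scores @ weights          # weighted sum model (WSM) aggregation
ranking = np.argsort(-wsm_scores)      # best candidate first
print(wsm_scores, ranking)
```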

  11. Effects of preservation method on length and weight of pond raised ...

    African Journals Online (AJOL)

    Length and weight measurements of fish used for taxonomy and for determining the length-weight relationship are taken from preserved specimens. This study sets out to investigate the effects of two preservatives, 70% alcohol and 10% formalin, and of freezing on the length and weight of preserved specimens of tilapia ...

  12. Modified homotopy perturbation method for solving hypersingular integral equations of the first kind.

    Science.gov (United States)

    Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z

    2016-01-01

    A modified homotopy perturbation method (HPM) was used to solve hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of the inverse of the hypersingular integral operator leads to the convergence of HPM in certain cases. The modified HPM and its norm convergence are obtained in Hilbert space. Comparisons between the modified HPM, the standard HPM, the Bernstein polynomial approach of Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), the Chebyshev expansion method of Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011) and the reproducing kernel method of Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM dominates the standard HPM and the others. Finally, it is found that the modified HPM is exact if the solution of the problem is a product of weights and polynomial functions. For a rational solution the absolute error decreases very fast with an increasing number of collocation points.
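
    For readers unfamiliar with the problem class, the LaTeX fragment below sketches a generic form of a first-kind hypersingular integral equation on [-1,1]; the dominant integral is understood in the Hadamard finite-part sense, and the regular kernel K and right-hand side f are placeholders rather than the specific equations solved in the paper.

```latex
% Generic first-kind hypersingular integral equation on [-1,1] (illustrative form);
% the first integral is a Hadamard finite-part (hypersingular) integral.
\frac{1}{\pi}\int_{-1}^{1}\frac{\varphi(t)}{(t-x)^{2}}\,\mathrm{d}t
  \;+\; \int_{-1}^{1} K(x,t)\,\varphi(t)\,\mathrm{d}t \;=\; f(x),
  \qquad -1 < x < 1 .
```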

  13. Methods of fetal MR: beyond T2-weighted imaging

    International Nuclear Information System (INIS)

    Brugger, Peter C.; Stuhr, Fritz; Lindner, Christian; Prayer, Daniela

    2006-01-01

    The present work reviews the basic methods of performing fetal magnetic resonance imaging (MRI). Since fetal MRI differs in many respects from a postnatal study, several factors have to be taken into account to achieve satisfying image quality. Image quality depends on adequate positioning of the pregnant woman in the magnet, use of appropriate coils and the selection of sequences. Ultrafast T2-weighted sequences are regarded as the mainstay of fetal MR imaging. However, additional sequences, such as T1-weighted imaging, diffusion-weighted imaging and echo-planar imaging, may provide further information, especially in extra-central-nervous-system regions of the fetal body.

  14. Methods of fetal MR: beyond T2-weighted imaging

    Energy Technology Data Exchange (ETDEWEB)

    Brugger, Peter C. [Center of Anatomy and Cell Biology, Integrative Morphology Group, Medical University of Vienna, Waehringerstrasse 13, 1090 Vienna (Austria)]. E-mail: peter.brugger@meduniwien.ac.at; Stuhr, Fritz [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria); Lindner, Christian [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria); Prayer, Daniela [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria)

    2006-02-15

    The present work reviews the basic methods of performing fetal magnetic resonance imaging (MRI). Since fetal MRI differs in many respects from a postnatal study, several factors have to be taken into account to achieve satisfying image quality. Image quality depends on adequate positioning of the pregnant woman in the magnet, use of appropriate coils and the selection of sequences. Ultrafast T2-weighted sequences are regarded as the mainstay of fetal MR imaging. However, additional sequences, such as T1-weighted imaging, diffusion-weighted imaging and echo-planar imaging, may provide further information, especially in extra-central-nervous-system regions of the fetal body.

  15. Point-based warping with optimized weighting factors of displacement vectors

    Science.gov (United States)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
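
    A distance-weighted exponential warping step can be sketched as follows: each point is displaced by a weighted sum of the landmark displacement vectors, with exponentially decaying weights controlled by a per-landmark factor (the quantity optimized by the evolution strategy in the text). The exact functional form and normalization are assumptions made for illustration.

```python
import numpy as np

def warp_point(p, landmarks_src, landmarks_dst, sigmas):
    """Displace point p by a distance-weighted sum of landmark displacement vectors."""
    displacements = landmarks_dst - landmarks_src            # (n, 3) displacement vectors
    d = np.linalg.norm(landmarks_src - p, axis=1)            # distances from p to landmarks
    w = np.exp(-d / sigmas)                                   # landmark-specific exponential weights
    if w.sum() == 0:
        return p
    return p + (w[:, None] * displacements).sum(axis=0) / w.sum()
```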

  16. Automatic numerical integration methods for Feynman integrals through 3-loop

    International Nuclear Information System (INIS)

    De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K

    2015-01-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)
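
    The iterated-integration idea can be illustrated with SciPy, whose quad routine wraps QUADPACK's adaptive QAGS integrator; the integrand below is a simple placeholder, not an actual loop-diagram integrand.

```python
from scipy import integrate

def inner(x):
    """Inner 1-D integral over y, evaluated adaptively by QUADPACK for each fixed x."""
    val, _ = integrate.quad(lambda y: 1.0 / (x + y + 0.1) ** 2, 0.0, 1.0)
    return val

# Outer integral over x; together this gives an iterated 2-D integration.
result, abserr = integrate.quad(inner, 0.0, 1.0)
print(result, abserr)
```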

  17. A real-time Java tool chain for resource constrained platforms

    DEFF Research Database (Denmark)

    Korsholm, Stephan Erbs; Søndergaard, Hans; Ravn, Anders P.

    2013-01-01

    The Java programming language was originally developed for embedded systems, but the resource requirements of previous and current Java implementations - especially memory consumption - tend to exclude them from being used on a significant class of resource constrained embedded platforms. The con... ...by integrating: (1) a lean virtual machine (HVM) without any external dependencies on POSIX-like libraries or other OS functionalities, (2) a hardware abstraction layer, implemented almost entirely in Java through the use of hardware objects, first level interrupt handlers, and native variables, and (3)... An evaluation of the presented solution shows that the miniCDj benchmark gets reduced to a size where it can run on resource constrained platforms.

  18. Supplier Selection Using Weighted Utility Additive Method

    Science.gov (United States)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

    Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria for choosing the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses the decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives or a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction of an LP problem, which has its own objective function: minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that the WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real time examples are illustrated to prove

  19. Flash-flood potential assessment and mapping by integrating the weights-of-evidence and frequency ratio statistical methods in GIS environment - case study: Bâsca Chiojdului River catchment (Romania)

    Science.gov (United States)

    Costache, Romulus; Zaharia, Liliana

    2017-06-01

    Given the significant worldwide human and economic losses caused by floods annually, reducing the negative consequences of these hazards is a major concern in development strategies at different spatial scales. A basic step in flood risk management is identifying areas susceptible to flood occurrences. This paper proposes a methodology for identifying areas with a high potential of accelerated surface run-off and, consequently, of flash-flood occurrences. The methodology involves the assessment and mapping, in a GIS environment, of a flash-flood potential index (FFPI), by integrating two statistical methods: frequency ratio and weights-of-evidence. The methodology was applied to the Bâsca Chiojdului River catchment (340 km2), located in the Carpathians Curvature region (Romania). First, the areas with torrential phenomena were identified and the main factors controlling the surface run-off were selected (in this study nine geographical factors were considered). Based on the features of the considered factors, several classes were defined for each of them. In the next step, the weights of each class/category of the considered factors were determined by identifying their spatial relationships with the presence or absence of torrential phenomena. Finally, the weights for each class/category of geographical factors were summed in GIS, resulting in the FFPI values for each of the two statistical methods. These values were divided into five classes of intensity and were mapped. The final results were used to estimate the flash-flood potential and also to identify the areas most susceptible to this phenomenon. Thus, high and very high values of the FFPI characterize more than one-third of the study catchment. The result validation was performed by (i) quantifying the rate of the number of pixels corresponding to the torrential phenomena considered for the study (training area) and for the results' testing (validating area) and (ii) plotting the ROC (receiver operating
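
    The frequency ratio weight of a factor class is the share of torrential (flash-flood) pixels falling in that class divided by the share of all pixels in that class; the per-pixel FFPI is then the sum of the class weights of all considered factors. The sketch below shows that computation for one factor raster, with array names chosen for illustration only.

```python
import numpy as np

def frequency_ratio(factor_classes, torrential_mask):
    """Frequency ratio per class of one conditioning factor:
    (share of torrential pixels in the class) / (share of all pixels in the class)."""
    fr = {}
    total_pixels = factor_classes.size
    total_torrential = torrential_mask.sum()
    for cls in np.unique(factor_classes):
        in_class = factor_classes == cls
        pct_torrential = torrential_mask[in_class].sum() / total_torrential
        pct_class = in_class.sum() / total_pixels
        fr[cls] = pct_torrential / pct_class if pct_class > 0 else 0.0
    return fr

# The FFPI raster is obtained by summing, per pixel, the class weights of all
# considered factors (frequency ratio or weights-of-evidence weights).
```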

  20. Value, Cost, and Sharing: Open Issues in Constrained Clustering

    Science.gov (United States)

    Wagstaff, Kiri L.

    2006-01-01

    Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began with incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.

  1. How will greenhouse gas emissions from motor vehicles be constrained in China around 2030?

    International Nuclear Information System (INIS)

    Zheng, Bo; Zhang, Qiang; Borken-Kleefeld, Jens; Huo, Hong; Guan, Dabo; Klimont, Zbigniew; Peters, Glen P.; He, Kebin

    2015-01-01

    Highlights: • We build a projection model to predict vehicular GHG emissions on a provincial basis. • Fuel efficiency gains cannot constrain vehicle GHGs in major southern provinces. • We propose an integrated policy set through sensitivity analysis of policy options. • The policy set will peak GHG emissions of 90% of provinces and of China as a whole by 2030. - Abstract: Increasing emissions from road transportation endanger China’s objective to reduce national greenhouse gas (GHG) emissions. The unconstrained growth of vehicle GHG emissions is mainly caused by the insufficient improvement of energy efficiency (kilometers traveled per unit energy use) under current policies, which cannot offset the explosion of vehicle activity in China, especially in the major southern provinces. More stringent policies are required to reduce GHG emissions in these provinces, and thereby help to constrain national total emissions. In this work, we make a provincial-level projection of vehicle growth, energy demand and GHG emissions to evaluate vehicle GHG emission trends under various policy options in China and determine the way to constrain national emissions. Through sensitivity analysis of various single policies, we propose an integrated policy set to assure that the objective of peaking national vehicle GHG emissions around 2030 is achieved. The integrated policy involves decreasing the use of urban light-duty vehicles by 25%, improving fuel economy by 25% by 2035 compared with 2020, and promoting electric vehicles and biofuels. The stringent new policies would allow China to constrain GHG emissions from the road transport sector around 2030. This work provides a perspective for understanding vehicle GHG emission growth patterns in China’s provinces, and proposes a strong policy combination to constrain national GHG emissions, which can support the achievement of peak GHG emissions by 2030 as promised by the Chinese government

  2. Diffusion weighted imaging by MR method

    International Nuclear Information System (INIS)

    Horikawa, Yoshiharu; Naruse, Shoji; Ebisu, Toshihiko; Tokumitsu, Takuaki; Ueda, Satoshi; Tanaka, Chuzo; Higuchi, Toshihiro; Umeda, Masahiro.

    1993-01-01

    Diffusion weighted magnetic resonance imaging is a recently developed technique used to examine the micromovement of water molecules in vivo. We have applied this technique to examine various kinds of brain diseases, both experimentally and clinically. The calculated apparent diffusion coefficient (ADC) in vivo showed reliable values. In experimentally induced brain edema in rats, the pathophysiological difference between the types of edema (such as cytotoxic and vasogenic) could be differentiated on the diffusion weighted MR images. Cytotoxic brain edema showed high intensity (slower diffusion) on the diffusion weighted images. On the other hand, vasogenic brain edema showed a low intensity image (faster diffusion). Diffusion anisotropy was demonstrated according to the direction of myelinated fibers and the applied motion probing gradient (MPG). This anisotropy was also demonstrated in human brain tissue along the course of the corpus callosum, pyramidal tract and optic radiation. In brain ischemia cases, lesions were detected as high signal intensity areas, even one hour after the onset of ischemia. Diffusion was faster in brain tumor compared with normal brain. Histological differences were not clearly reflected by the ADC value. In epidermoid tumor cases, a characteristically high intensity was demonstrated, and the cerebrospinal fluid border was clearly delineated. New clinical information obtainable with this molecular diffusion method will prove to be useful in various clinical studies. (author)
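
    For readers unfamiliar with how an apparent diffusion coefficient is obtained, the sketch below shows the common mono-exponential estimate ADC = ln(S0/Sb)/b from a b = 0 and a diffusion-weighted acquisition; the b-value and signal numbers are illustrative only and do not come from the study.

```python
import numpy as np

def adc_map(s0, sb, b_value):
    """ADC from the mono-exponential model S(b) = S0 * exp(-b * ADC)."""
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    ratio = np.clip(sb / np.maximum(s0, 1e-12), 1e-12, None)  # guard against zeros
    return -np.log(ratio) / b_value

# Synthetic example: b = 1000 s/mm^2; typical brain ADC ~ 0.8e-3 mm^2/s,
# restricted diffusion (cytotoxic edema) ~ 0.4e-3, CSF ~ 3.0e-3.
b = 1000.0
true_adc = np.array([0.8e-3, 0.4e-3, 3.0e-3])
s0 = np.array([1000.0, 1000.0, 1000.0])
sb = s0 * np.exp(-b * true_adc)
print(adc_map(s0, sb, b))   # recovers [0.0008, 0.0004, 0.003]
```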

  3. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    Science.gov (United States)

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
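
    A minimal sketch of one of the compared strategies, stabilized inverse probability weights built from normal densities for a continuous exposure, is given below; the simulated data, variable names and model choices are assumptions for illustration, not the authors' simulation design.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000
L = rng.normal(size=n)                      # confounder
A = 0.5 * L + rng.normal(size=n)            # continuous exposure
Y = (rng.random(n) < 1 / (1 + np.exp(-(0.3 * A + 0.5 * L)))).astype(int)

# Denominator: density of A given L from a linear model with homoscedastic variance.
den_fit = sm.OLS(A, sm.add_constant(L)).fit()
den_sd = np.sqrt(den_fit.scale)
f_den = norm.pdf(A, loc=den_fit.fittedvalues, scale=den_sd)

# Numerator (stabilization): marginal density of A.
f_num = norm.pdf(A, loc=A.mean(), scale=A.std(ddof=1))

sw = f_num / f_den                          # stabilized inverse probability weights

# Weighted marginal structural model for the exposure effect.
msm = sm.GLM(Y, sm.add_constant(A), family=sm.families.Binomial(),
             freq_weights=sw).fit()
print("marginal OR per unit increase in A:", np.exp(msm.params[1]))
```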

  4. An improved Q estimation approach: the weighted centroid frequency shift method

    Science.gov (United States)

    Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Dong, Chunhui; Tao, Yonghui; Zhou, Yatao

    2016-06-01

    Seismic wave propagation in subsurface media suffers from absorption, which can be quantified by the quality factor Q. Accurate estimation of the Q factor is of great importance for the resolution enhancement of seismic data, precise imaging and interpretation, and reservoir prediction and characterization. The centroid frequency shift method (CFS) is currently one of the most commonly used Q estimation methods. However, for seismic data that contain noise, the accuracy and stability of Q extracted using CFS depend on the choice of frequency band. In order to reduce the influence of frequency band choices and obtain Q with greater precision and robustness, we present an improved CFS Q measurement approach, the weighted CFS method (WCFS), which incorporates a Gaussian weighting coefficient into the calculation procedure of the conventional CFS. The basic idea is to enhance the proportion of advantageous frequencies in the amplitude spectrum and reduce the weight of disadvantageous frequencies. In this novel method, we first construct a Gauss function using the centroid frequency and variance of the reference wavelet. Then we employ it as the weighting coefficient for the amplitude spectrum of the original signal. Finally, the conventional CFS is adopted for the weighted amplitude spectrum to extract the Q factor. Numerical tests on noise-free synthetic data demonstrate that the WCFS is feasible and efficient, and produces more accurate results than the conventional CFS. Tests on noisy synthetic data indicate that the new method has better anti-noise capability than the CFS. The application to field vertical seismic profile (VSP) data further demonstrates its validity.
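
    The sketch below illustrates the weighting idea on synthetic spectra: the reference spectrum's centroid and variance define a Gaussian weight, both spectra are weighted, and Q is then extracted with the usual centroid-frequency-shift relation (commonly attributed to Quan and Harris); the frequencies, travel time and true Q are made-up test values.

```python
import numpy as np

def centroid_and_variance(freqs, amp):
    """Spectral centroid and variance with the amplitude as weighting function."""
    w = amp / amp.sum()
    fc = np.sum(freqs * w)
    var = np.sum((freqs - fc) ** 2 * w)
    return fc, var

def q_weighted_cfs(freqs, ref_amp, rec_amp, travel_time):
    # Gaussian weight built from the reference spectrum's centroid and variance.
    fc_ref, var_ref = centroid_and_variance(freqs, ref_amp)
    gauss = np.exp(-(freqs - fc_ref) ** 2 / (2.0 * var_ref))

    # Weight both spectra, then apply the standard centroid-frequency-shift
    # estimate: Q = pi * t * var_reference / (f_reference - f_received).
    fc_r, var_r = centroid_and_variance(freqs, ref_amp * gauss)
    fc_x, _ = centroid_and_variance(freqs, rec_amp * gauss)
    return np.pi * travel_time * var_r / (fc_r - fc_x)

# Synthetic test: Gaussian reference spectrum attenuated by exp(-pi * f * t / Q).
freqs = np.linspace(0.0, 200.0, 2001)
ref = np.exp(-(freqs - 60.0) ** 2 / (2 * 20.0 ** 2))
t, Q_true = 0.5, 80.0
rec = ref * np.exp(-np.pi * freqs * t / Q_true)
print("estimated Q:", q_weighted_cfs(freqs, ref, rec, t))   # close to 80
```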

  5. Calculation of the importance-weighted neutron generation time using MCNIC method

    International Nuclear Information System (INIS)

    Feghhi, S.A.H.; Shahriari, M.; Afarideh, H.

    2008-01-01

    In advanced nuclear power systems, such as ADS, the need for reliable kinetics parameters is of considerable importance because of the lower value of β_eff due to the large amount of transuranic elements loaded in the cores of those systems. All reactor kinetic parameters are weighted quantities; in other words, each neutron with a given position and energy is weighted with its importance. The neutron generation time, as an important kinetic parameter in all nuclear power systems, has a significant role in the analysis of fast transients. The difference between the non-weighted neutron generation time Λ, which is standard in most Monte Carlo codes, and the weighted one Λ⁺ can be quite significant depending on the type of the system. In previous work, based on the physical concept of neutron importance, a new method, MCNIC, using the MCNP code was introduced for the calculation of neutron importance in fissionable assemblies for all criticality states. In the present work the applicability of the MCNIC method has been extended to the calculation of the importance-weighted neutron generation time. The influence of reflector thickness on the importance-weighted neutron generation time has been investigated through the development of an auxiliary code, IWLA, for a hypothetical assembly. The results of these calculations were compared with the non-weighted neutron generation times calculated using the Monte Carlo code MCNP. The difference between the importance-weighted and non-weighted quantity is more significant in a reflected system and increases with reflector thickness

  6. An integrated impact assessment and weighting methodology: evaluation of the environmental consequences of computer display technology substitution.

    Science.gov (United States)

    Zhou, Xiaoying; Schoenung, Julie M

    2007-04-01

    Computer display technology is currently in a state of transition, as the traditional technology of cathode ray tubes is being replaced by liquid crystal display flat-panel technology. Technology substitution and process innovation require the evaluation of the trade-offs among environmental impact, cost, and engineering performance attributes. General impact assessment methodologies, decision analysis and management tools, and optimization methods commonly used in engineering cannot efficiently address the issues needed for such evaluation. The conventional Life Cycle Assessment (LCA) process often generates results that can be subject to multiple interpretations, although the advantages of the LCA concept and framework are widely recognized. In the present work, the LCA concept is integrated with Quality Function Deployment (QFD), a popular industrial quality management tool, which is used as the framework for the development of our integrated model. The problem of weighting is addressed by using pairwise comparison of stakeholder preferences. Thus, this paper presents a new integrated analytical approach, Integrated Industrial Ecology Function Deployment (I2-EFD), to assess the environmental behavior of alternative technologies in correlation with their performance and economic characteristics. Computer display technology is used as the case study to further develop our methodology through the modification and integration of various quality management tools (e.g., process mapping, prioritization matrix) and statistical methods (e.g., multi-attribute analysis, cluster analysis). Life cycle thinking provides the foundation for our methodology, as we utilize a published LCA report, which stopped at the characterization step, as our starting point. Further, we evaluate the validity and feasibility of our methodology by considering uncertainty and conducting sensitivity analysis.

  7. Analytical design of proportional-integral controllers for the optimal control of first-order processes with operational constraints

    Energy Technology Data Exchange (ETDEWEB)

    Thu, Hien Cao Thi; Lee, Moonyong [Yeungnam University, Gyeongsan (Korea, Republic of)

    2013-12-15

    A novel analytical design method of industrial proportional-integral (PI) controllers was developed for the optimal control of first-order processes with operational constraints. The control objective was to minimize a weighted sum of the controlled variable error and the rate of change in the manipulated variable under the maximum allowable limits in the controlled variable, manipulated variable and the rate of change in the manipulated variable. The constrained optimal servo control problem was converted to an unconstrained optimization to obtain an analytical tuning formula. A practical shortcut procedure for obtaining optimal PI parameters was provided based on graphical analysis of global optimality. The proposed PI controller was found to guarantee global optimum and deal explicitly with the three important operational constraints.

  8. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing the general constrained optimization problems using evolutionary algorithms. Broadly the topics covered include constraint handling for single and multi-objective optimizations; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining much popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...
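
    As a small, hedged illustration of one constraint-handling family covered by the book, penalty functions, the sketch below embeds a static quadratic penalty in a simple (mu+lambda) evolution strategy; the test problem, penalty coefficient and strategy parameters are arbitrary choices, not taken from any chapter.

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(x):                      # minimize f(x) = x1^2 + x2^2
    return x[:, 0] ** 2 + x[:, 1] ** 2

def constraint_violation(x):           # g(x) = 1 - x1 - x2 <= 0  (feasible: x1 + x2 >= 1)
    return np.maximum(0.0, 1.0 - x[:, 0] - x[:, 1])

def penalized_fitness(x, rho=100.0):   # static penalty: f + rho * violation^2
    return objective(x) + rho * constraint_violation(x) ** 2

# (mu + lambda) evolution strategy with Gaussian mutation.
mu, lam, sigma, generations = 20, 80, 0.2, 200
pop = rng.uniform(-2.0, 2.0, size=(mu, 2))
for _ in range(generations):
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = parents + sigma * rng.normal(size=(lam, 2))
    union = np.vstack([pop, offspring])
    pop = union[np.argsort(penalized_fitness(union))[:mu]]

best = pop[0]
print("best solution:", best,
      "objective:", objective(best[None])[0],
      "violation:", constraint_violation(best[None])[0])
# The constrained optimum of this toy problem is x1 = x2 = 0.5 with f = 0.5.
```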

  9. A real-time Java tool chain for resource constrained platforms

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Søndergaard, Hans; Ravn, Anders Peter

    2014-01-01

    The Java programming language was originally developed for embedded systems, but the resource requirements of previous and current Java implementations – especially memory consumption – tend to exclude them from being used on a significant class of resource constrained embedded platforms. This paper presents a real-time Java tool chain that addresses this gap by integrating the following: (1) a lean virtual machine without any external dependencies on POSIX-like libraries or other OS functionalities; (2) a hardware abstraction layer, implemented almost entirely in Java through the use of hardware objects, first level interrupt handlers, and native variables; and (3) ... An evaluation of the presented solution shows that the miniCDj benchmark gets reduced to a size where it can run on resource constrained platforms.

  10. Application of the method of integral equations to calculating the electrodynamic characteristics of periodically corrugated waveguides

    International Nuclear Information System (INIS)

    Belov, V.E.; Rodygin, L.V.; Fil'chenko, S.E.; Yunakovskii, A.D.

    1988-01-01

    A method is described for calculating the electrodynamic characteristics of periodically corrugated waveguide systems. This method is based on representing the field as the solution of the Helmholtz vector equation in the form of a simple layer potential, transformed with the use of the Floquet conditions. Systems of compound integral equations based on a weighted vector function of the simple layer potential are derived for waveguides with azimuthally symmetric and helical corrugations. A numerical realization of the Fourier method is cited for seeking the dispersion relation of azimuthally symmetric waves of a circular corrugated waveguide

  11. Integral Equation Methods for Electromagnetic and Elastic Waves

    CERN Document Server

    Chew, Weng; Hu, Bin

    2008-01-01

    Integral Equation Methods for Electromagnetic and Elastic Waves is an outgrowth of several years of work. There have been no recent books on integral equation methods. There are books written on integral equations, but either they have been around for a while, or they were written by mathematicians. Much of the knowledge in integral equation methods still resides in journal papers. With this book, important relevant knowledge for integral equations are consolidated in one place and researchers need only read the pertinent chapters in this book to gain important knowledge needed for integral eq

  12. Bilinear nodal transport method in weighted diamond difference form

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Nodal methods have been developed and implemented for the numerical solution of the discrete ordinates neutron transport equation. Numerical testing of these methods and comparison of their results to those obtained by conventional methods have established the high accuracy of nodal methods. Furthermore, it has been suggested that the linear-linear approximation is the most computationally efficient, practical nodal approximation. Indeed, this claim has been substantiated by comparing the accuracy in the solution, and the CPU time required to achieve convergence to that solution by several nodal approximations, as well as the diamond difference scheme. Two types of linear-linear nodal methods have been developed in the literature: analytic linear-linear (NLL) methods, in which the transverse-leakage terms are derived analytically, and approximate linear-linear (PLL) methods, in which these terms are approximated. In spite of their higher accuracy, NLL methods result in very complicated discrete-variable equations that exhibit a high degree of coupling, thus requiring special solution algorithms. On the other hand, the sacrificed accuracy in PLL methods is compensated for by the simple discrete-variable equations and diamond-difference-like solution algorithm. In this paper the authors outline the development of an NLL nodal method, the bilinear method, which can be written in a weighted diamond difference form with one spatial weight per dimension that is analytically derived rather than preassigned in an ad hoc fashion

  13. Resource Management in Constrained Dynamic Situations

    Science.gov (United States)

    Seok, Jinwoo

    Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Thus, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments

  14. Integrating mobile technology with routine dietetic practice: the case of myPace for weight management.

    Science.gov (United States)

    Harricharan, Michelle; Gemen, Raymond; Celemín, Laura Fernández; Fletcher, David; de Looy, Anne E; Wills, Josephine; Barnett, Julie

    2015-05-01

    The field of Mobile health (mHealth), which includes mobile phone applications (apps), is growing rapidly and has the potential to transform healthcare by increasing its quality and efficiency. The present paper focuses particularly on mobile technology for body weight management, including mobile phone apps for weight loss and the available evidence on their effectiveness. Translation of behaviour change theory into weight management strategies, including integration in mobile technology is also discussed. Moreover, the paper presents and discusses the myPace platform as a case in point. There is little clinical evidence on the effectiveness of currently available mobile phone apps in enabling behaviour change and improving health-related outcomes, including sustained body weight loss. Moreover, it is unclear to what extent these apps have been developed in collaboration with health professionals, such as dietitians, and the extent to which apps draw on and operationalise behaviour change techniques has not been explored. Furthermore, presently weight management apps are not built for use as part of dietetic practice, or indeed healthcare more widely, where face-to-face engagement is fundamental for instituting the building blocks for sustained lifestyle change. myPace is an innovative mobile technology for weight management meant to be embedded into and to enhance dietetic practice. Developed out of systematic, iterative stages of engagement with dietitians and consumers, it is uniquely designed to complement and support the trusted health practitioner-patient relationship. Future mHealth technology would benefit if engagement with health professionals and/or targeted patient groups, and behaviour change theory stood as the basis for technology development. Particularly, integrating technology into routine health care practice, rather than replacing one with the other, could be the way forward.

  15. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science three main approaches to framework change were detected in the humanist tradition: (1) in both the pre-theoretical and theoretical domains changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton); (2) changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard); (3) between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article in fact tried to achieve more clarity on how changes in pre-scientific frameworks occur, as well as provided transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern, neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral, rather than dialectical, and therefore does not correspond to position three.

  16. An improved data integration algorithm to constrain the 3D displacement field induced by fast deformation phenomena tested on the Napa Valley earthquake

    Science.gov (United States)

    Polcari, Marco; Fernández, José; Albano, Matteo; Bignami, Christian; Palano, Mimmo; Stramondo, Salvatore

    2017-12-01

    In this work, we propose an improved algorithm to constrain the 3D ground displacement field induced by fast surface deformations due to earthquakes or landslides. Based on the integration of different data, we estimate the three displacement components by solving a function minimization problem derived from Bayes theory. We exploit the outcomes from SAR Interferometry (InSAR), Global Navigation Satellite System (GNSS) and Multiple Aperture Interferometry (MAI) measurements to retrieve the 3D surface displacement field. Any other source of information can be added to the processing chain in a simple way, as the algorithm is computationally efficient. Furthermore, we use intensity Pixel Offset Tracking (POT) to locate the discontinuity produced on the surface by a sudden deformation phenomenon and then improve the GNSS data interpolation. This approach allows the estimation to be independent of other information such as in-situ investigations, tectonic studies or knowledge of the data covariance matrix. We applied this method to investigate the ground deformation field related to the 2014 Mw 6.0 Napa Valley earthquake, which occurred a few kilometers from the San Andreas fault system.
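
    A much-simplified sketch of the data-integration step is given below: InSAR line-of-sight, MAI and GNSS observations of a single point are combined in a weighted least-squares estimate of the east-north-up displacement; the viewing geometries and noise levels are hypothetical, and the paper's Bayesian formulation and POT-guided interpolation are not reproduced.

```python
import numpy as np

# Unknowns: d = [east, north, up] displacement at one ground point (metres).
d_true = np.array([0.10, -0.05, 0.20])

# Each observation is a projection obs = g . d + noise, with its own variance.
# Rows: ascending LOS, descending LOS, MAI (along-track), GNSS east, GNSS north.
G = np.array([
    [-0.61,  0.11, 0.78],    # ascending InSAR line-of-sight unit vector (hypothetical)
    [ 0.63,  0.10, 0.77],    # descending InSAR line-of-sight unit vector
    [-0.17, -0.98, 0.00],    # MAI sensitivity (azimuth direction)
    [ 1.00,  0.00, 0.00],    # GNSS east component
    [ 0.00,  1.00, 0.00],    # GNSS north component
])
sigma = np.array([0.005, 0.005, 0.02, 0.003, 0.003])   # per-observation std (m)

rng = np.random.default_rng(3)
obs = G @ d_true + rng.normal(scale=sigma)

# Weighted least squares: minimize (obs - G d)^T W (obs - G d), W = diag(1/sigma^2).
W = np.diag(1.0 / sigma ** 2)
d_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ obs)
cov = np.linalg.inv(G.T @ W @ G)       # formal covariance of the estimate
print("estimated 3D displacement:", d_hat.round(4))
print("formal std:", np.sqrt(np.diag(cov)).round(4))
```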

  17. Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2015-05-01

    Segmentation, which is usually the first step in object-based image analysis (OBIA), greatly influences the quality of final OBIA results. In many existing multi-scale segmentation algorithms, a common problem is that under-segmentation and over-segmentation always coexist at any scale. To address this issue, we propose a new method that integrates the newly developed constrained spectral variance difference (CSVD) and the edge penalty (EP). First, initial segments are produced by a fast scan. Second, the generated segments are merged via a global mutual best-fitting strategy using the CSVD and EP as merging criteria. Finally, very small objects are merged with their nearest neighbors to eliminate the remaining noise. A series of experiments based on three sets of remote sensing images, each with a different spatial resolution, were conducted to evaluate the effectiveness of the proposed method. Both visual and quantitative assessments were performed, and the results show that large objects were better preserved as integral entities while small objects were also still effectively delineated. The results were also found to be superior to those from eCognition's multi-scale segmentation.

  18. Tightly Coupled Integration of GPS Ambiguity Fixed Precise Point Positioning and MEMS-INS through a Troposphere-Constrained Adaptive Kalman Filter

    Directory of Open Access Journals (Sweden)

    Houzeng Han

    2016-07-01

    Precise Point Positioning (PPP) makes use of the undifferenced pseudorange and carrier phase measurements with ionospheric-free (IF) combinations to achieve centimeter-level positioning accuracy. Conventionally, the IF ambiguities are estimated as float values. To improve the PPP positioning accuracy and shorten the convergence time, the integer phase clock model with between-satellites single-difference (BSSD) operation is used to recover the integer property. However, the continuity and availability of stand-alone PPP are largely restricted by the observation environment. The positioning performance will be significantly degraded when GPS operates under challenging environments, if fewer than five satellites are present. A commonly used approach is to integrate a low cost inertial sensor to improve the positioning performance and robustness. In this study, a tightly coupled (TC) algorithm is implemented by integrating PPP with an inertial navigation system (INS) using an Extended Kalman filter (EKF). The navigation states, inertial sensor errors and GPS error states are estimated together. The troposphere constrained approach, which utilizes the external tropospheric delay as a virtual observation, is applied to further improve the ambiguity-fixed height positioning accuracy, and an improved adaptive filtering strategy is implemented to improve the covariance modelling considering the realistic noise effect. A field vehicular test with a geodetic GPS receiver and a low cost inertial sensor was conducted to validate the improvement in positioning performance with the proposed approach. The results show that the positioning accuracy has been improved with inertial aiding. Centimeter-level positioning accuracy is achievable during the test, and the PPP/INS TC integration achieves a fast re-convergence after signal outages. For troposphere constrained solutions, a significant improvement in the height component has been obtained. The overall positioning accuracies

  19. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    Science.gov (United States)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which involves the enhancement of both local details and fore- and background contrast. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrasts of the foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of the particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantified evaluations.
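
    To make the double-plateau idea concrete, the sketch below clips an image histogram between a lower and an upper plateau before equalization, which limits over-enhancement of the dominant background while preserving weak details; the thresholds are fixed by hand here, whereas the paper selects them (together with the entropy weighting) by particle swarm optimization.

```python
import numpy as np

def double_plateau_equalize(img, t_low, t_high, levels=256):
    """Histogram equalization with the histogram clipped to [t_low, t_high]."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    clipped = np.clip(hist, t_low, t_high).astype(float)
    clipped[hist == 0] = 0.0                      # keep empty bins empty
    cdf = np.cumsum(clipped)
    cdf = (levels - 1) * cdf / cdf[-1]            # map to output gray levels
    return cdf[img].astype(np.uint8)

# Hypothetical 8-bit infrared frame: large cool background, small warm target.
rng = np.random.default_rng(7)
frame = rng.normal(60, 5, size=(256, 256))
frame[100:110, 120:130] += 120                    # small bright target
frame = np.clip(frame, 0, 255).astype(np.uint8)

enhanced = double_plateau_equalize(frame, t_low=5, t_high=400)
print(frame.min(), frame.max(), "->", enhanced.min(), enhanced.max())
```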

  20. Wispy Prosthesis: A Novel Method in Denture Weight Reduction.

    Science.gov (United States)

    Anne, Gopinadh; Budeti, Sreedevi; Anche, Sampath Kumar; Zakkula, Srujana; Atla, Jyothi; Jyothula, Ravi Rakesh Dev; Appana, Krishna Chaitanya; Peddinti, Vijaya Kumar

    2016-04-01

    Stability and retention of the denture are at stake with increasing weight of the denture prosthesis. As a consequence, different materials and methods have been introduced to overcome these issues, but denture weight reduction still remains a cumbersome and strenuous procedure. The aim was to introduce a novel technique for the fabrication of a denture prosthesis wherein the weight of the denture does not affect its retention and stability. Four groups with a sample size of 10 each were included, wherein one group was the control and the other three were study groups. The control group samples were made completely solid and the study group samples were packed with materials such as bean balls, cellulose balls and polyacrylic fibers. The weight of all the samples in each study group was measured and compared with the control group. The observations were analyzed statistically by the paired t-test. It was observed that the bean balls group produced a weight reduction of 31.3%, the cellulose balls group 27.4% and the polyacrylic fibers group 24.5% when compared with the control group. This novel technique will eliminate the problems associated with creating hollowness and at the same time will reduce the weight of the prosthesis; among all the study groups, the bean balls group was found to reduce the weight of the prosthesis the most.

  1. A reliability assessment of constrained spherical deconvolution-based diffusion-weighted magnetic resonance imaging in individuals with chronic stroke.

    Science.gov (United States)

    Snow, Nicholas J; Peters, Sue; Borich, Michael R; Shirzad, Navid; Auriat, Angela M; Hayward, Kathryn S; Boyd, Lara A

    2016-01-15

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is commonly used to assess white matter properties after stroke. Novel work is utilizing constrained spherical deconvolution (CSD) to estimate complex intra-voxel fiber architecture unaccounted for with tensor-based fiber tractography. However, the reliability of CSD-based tractography has not been established in people with chronic stroke. The aim of this work was to establish the reliability of CSD-based DW-MRI in chronic stroke. High-resolution DW-MRI was performed in ten adults with chronic stroke during two separate sessions. Deterministic region-of-interest-based fiber tractography using CSD was performed by two raters. Mean fractional anisotropy (FA), apparent diffusion coefficient (ADC), tract number, and tract volume were extracted from reconstructed fiber pathways in the corticospinal tract (CST) and superior longitudinal fasciculus (SLF). Callosal fiber pathways connecting the primary motor cortices were also evaluated. Inter-rater and test-retest reliability were determined by intra-class correlation coefficients (ICCs). ICCs revealed excellent reliability for FA and ADC in ipsilesional pathways (0.86-1.00) and for all metrics in callosal fibers (0.85-1.00), indicating that CSD-based tractography is a reliable approach to evaluate FA and ADC in major white matter pathways in chronic stroke. Future work should address the reproducibility and utility of CSD-based metrics of tract number and tract volume. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    Science.gov (United States)

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.

  3. Subdomain Precise Integration Method for Periodic Structures

    Directory of Open Access Journals (Sweden)

    F. Wu

    2014-01-01

    A subdomain precise integration method is developed for the dynamical responses of periodic structures comprising many identical structural cells. The proposed method is based on the precise integration method, the subdomain scheme, and the repeatability of the periodic structures. In the proposed method, each structural cell is seen as a super element that is solved using the precise integration method, taking the repeatability of the structural cells into account. The computational effort and the memory size of the proposed method are reduced, while high computational accuracy is achieved. Therefore, the proposed method is particularly suitable for solving the dynamical responses of periodic structures. Two numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method through comparison with the Newmark and Runge-Kutta methods.
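
    A minimal sketch of the underlying precise integration idea is given below: the state-transition matrix exp(Hτ) is built from the tiny increment Ta over τ/2^N and doubled N times, so that adding the identity never swamps the small terms; the single-oscillator example is only a stand-in for the paper's subdomain/super-element scheme.

```python
import numpy as np

def pim_expm(H, tau, N=20, order=4):
    """Precise integration: T = exp(H*tau) via 2^N subdivision, carrying Ta = T - I."""
    dt = tau / (2 ** N)
    Hdt = H * dt
    # Truncated Taylor series for the tiny increment Ta = exp(H*dt) - I.
    Ta = np.zeros_like(H)
    term = np.eye(H.shape[0])
    for k in range(1, order + 1):
        term = term @ Hdt / k
        Ta = Ta + term
    # Doubling: (I + Ta)^2 = I + (2*Ta + Ta@Ta); keep only the increment.
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta
    return np.eye(H.shape[0]) + Ta

# Single undamped oscillator x'' + w^2 x = 0 written as a first-order system.
w = 2.0 * np.pi
H = np.array([[0.0, 1.0], [-w ** 2, 0.0]])
tau = 0.01
T = pim_expm(H, tau)

# March the response and compare with the exact solution x(t) = cos(w t).
state = np.array([1.0, 0.0])
for step in range(100):
    state = T @ state
t_end = 100 * tau
print("PIM:", state[0], " exact:", np.cos(w * t_end))
```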

  4. Quantification of emissions from knapsack sprayers: 'the weight method

    Science.gov (United States)

    Garcia-Santos, Glenda; Binder, Claudia R.

    2010-05-01

    Misuse of pesticides kills or seriously sickens thousands of people every year and poisons the natural environment. Investigations of occupational and environmental risk have received considerable interest over the last decades. Yet a lack of staff and analytical equipment, as well as the costs of chemical analyses, makes it difficult, if not impossible, to control pesticide contamination and residues in humans, air, water, and soils in developing countries. To assess emissions of pesticides (transport and deposition) during spray application and the risk for human health and the environment, tracers can be useful tools. Uranine was used to quantify airborne drift and subsequent deposition on the neighbouring field and on the clothes of the applicator after spraying with a knapsack sprayer in one of the biggest areas of potato production in Colombia. Keeping the same setup, the amount of wet drift was measured from the difference in weight of highly absorbent papers used to collect the tracer. Surprisingly, this weight method (Weight-HAP) was able to explain 71% of the drift variance measured with the tracer. Therefore the weight method is presented as a suitable, rapid, low-cost screening tool, complementary to toxicological tests, to assess air pollution and the occupational and environmental exposure generated by the emissions from knapsack sprayers during pesticide application. This technique might be important in places where there is a lack of analytical instruments.

  5. Understanding the Essential Meaning of Measured Changes in Weight and Body Composition Among Women During and After Adjuvant Treatment for Breast Cancer: A Mixed-Methods Study.

    Science.gov (United States)

    Pedersen, Birgith; Groenkjaer, Mette; Falkmer, Ursula; Delmar, Charlotte

    Changes in weight and body composition among women during and after adjuvant antineoplastic treatment for breast cancer may influence long-term survival and quality of life. Research on factual weight changes is diverse and contrasting, and their influence on women's perception of body and self seems to be insufficiently explored. The aim of this study was to expand the understanding of the association between changes in weight and body composition and the women's perception of body and self. A mixed-methods research design was used. Data consisted of weight and body composition measures from 95 women with breast cancer during the 18 months after surgery. Twelve women from this cohort were interviewed individually at 12 months. A linear mixed model and logistic regression were used to estimate changes of repeated measures and odds ratios. Interviews were analyzed guided by existential phenomenology. Joint displays and integrative mixed-methods interpretation demonstrated that even small weight gains, an extended waist, and weight loss were associated with fearing recurrence of breast cancer. Perceiving an ambiguous, transforming body, the women moved between a unified body subject and the body as an object dissociated into "I" and "it" while fighting against or accepting the body changes. Integrating the findings demonstrated that factual weight changes do not correspond with the perceived changes and may trigger existential threats. Transition to a new habitual body demands that health practitioners enter into joint narrative work to reveal how the changes impact the women's body and self-perception, independent of how they are displayed quantitatively.

  6. A Fuzzy Group Prioritization Method for Deriving Weights and its Software Implementation

    Directory of Open Access Journals (Sweden)

    Tarifa Almulhim

    2013-09-01

    Several Multi-Criteria Decision Making (MCDM) methods involve pairwise comparisons to obtain the preferences of decision makers (DMs). This paper proposes a fuzzy group prioritization method for deriving group priorities/weights from fuzzy pairwise comparison matrices. The proposed method extends the Fuzzy Preference Programming (FPP) method by considering the different importance weights of multiple DMs. The elements of the group pairwise comparison matrices are presented as fuzzy numbers rather than exact numerical values, in order to model the uncertainty and imprecision in the DMs' judgments. Unlike the known fuzzy prioritization techniques, the proposed method is able to derive crisp weights from an incomplete and fuzzy set of comparison judgments and does not require additional aggregation procedures. A prototype of a decision tool is developed in MATLAB to assist DMs in applying the proposed method for solving fuzzy group prioritization problems. Detailed numerical examples are used to illustrate the proposed approach.
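
    For orientation, the sketch below solves the standard single-decision-maker fuzzy preference programming problem that the paper extends: crisp weights are found by maximizing the degree λ to which the ratios w_i/w_j satisfy triangular fuzzy judgments; the judgments are invented, and the group weighting of multiple DMs is omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Triangular fuzzy judgments a_ij = (l, m, u) for the upper triangle (3 criteria).
judgments = {(0, 1): (1.5, 2.0, 2.5),    # criterion 1 vs 2
             (0, 2): (3.0, 4.0, 5.0),    # criterion 1 vs 3
             (1, 2): (1.0, 2.0, 3.0)}    # criterion 2 vs 3
n = 3

def neg_lambda(z):           # z = [w_0 .. w_{n-1}, lam]; we maximize lam
    return -z[-1]

cons = [{"type": "eq", "fun": lambda z: z[:n].sum() - 1.0}]
for (i, j), (l, m, u) in judgments.items():
    # lam <= (w_i/w_j - l)/(m - l)  and  lam <= (u - w_i/w_j)/(u - m),
    # multiplied through by w_j > 0 to remove the ratios.
    cons.append({"type": "ineq",
                 "fun": lambda z, i=i, j=j, l=l, m=m:
                     z[i] - l * z[j] - (m - l) * z[-1] * z[j]})
    cons.append({"type": "ineq",
                 "fun": lambda z, i=i, j=j, m=m, u=u:
                     u * z[j] - z[i] - (u - m) * z[-1] * z[j]})

z0 = np.append(np.full(n, 1.0 / n), 0.5)
res = minimize(neg_lambda, z0, method="SLSQP", constraints=cons,
               bounds=[(1e-6, 1.0)] * n + [(None, 1.0)])
print("crisp weights:", res.x[:n].round(3),
      " consistency degree lambda:", res.x[-1].round(3))
```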

  7. Multibody motion in implicitly constrained director format with links via explicit constraints

    DEFF Research Database (Denmark)

    Nielsen, Martin Bjerre; Krenk, Steen

    2013-01-01

    A conservative time integration algorithm is developed for constrained mechanical systems of kinematically linked rigid bodies based on convected base vectors. The base vectors are represented in terms of their absolute coordinates, hence the formulation makes use of three translation components...

  8. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    Hamel, D.; Mensah, S.; Boisvert, J.

    1984-03-01

    The concept of pole placement plays an important role in linear, multi-variable control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical and engineering constraints such as gain limitation and controller structure to be introduced right into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function which must be minimized in closed loop. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an algorithmic interactive module in a computer-aided control system design package, MVPACK. The application of the method is illustrated by designing controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure for the overall performance of a control system
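
    A toy version of this formulation is sketched below: the cost measures the distance between the closed-loop poles and a desired configuration, and gain limits enter as simple bounds on the output-feedback gains; the system matrices, desired poles and bounds are hypothetical, and the structural constraints handled by MVPACK are reduced to bounds only.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3-state, 1-input, 2-output system with output feedback u = -K y.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [2.0, 1.0, -1.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
desired = np.array([-1.0, -2.0 + 1.0j, -2.0 - 1.0j])

def cost(k):
    K = k.reshape(1, 2)
    poles = np.linalg.eigvals(A - B @ K @ C)
    # Match closed-loop poles to the desired set (both sorted by real, then imaginary part).
    p = np.sort_complex(poles)
    d = np.sort_complex(desired)
    return np.sum(np.abs(p - d) ** 2)

# Gain limitation expressed as simple bounds on each feedback gain.
res = minimize(cost, x0=np.zeros(2), method="L-BFGS-B",
               bounds=[(-20.0, 20.0), (-20.0, 20.0)])
K_opt = res.x.reshape(1, 2)
print("feedback gains:", K_opt)
print("closed-loop poles:", np.sort_complex(np.linalg.eigvals(A - B @ K_opt @ C)))
```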

  9. Comparison of phase-constrained parallel MRI approaches: Analogies and differences.

    Science.gov (United States)

    Blaimer, Martin; Heim, Marius; Neumann, Daniel; Jakob, Peter M; Kannengiesser, Stephan; Breuer, Felix A

    2016-03-01

    Phase-constrained parallel MRI approaches have the potential for significantly improving the image quality of accelerated MRI scans. The purpose of this study was to investigate the properties of two different phase-constrained parallel MRI formulations, namely the standard phase-constrained approach and the virtual conjugate coil (VCC) concept utilizing conjugate k-space symmetry. Both formulations were combined with image-domain algorithms (SENSE) and a mathematical analysis was performed. Furthermore, the VCC concept was combined with k-space algorithms (GRAPPA and ESPIRiT) for image reconstruction. In vivo experiments were conducted to illustrate analogies and differences between the individual methods. Furthermore, a simple method of improving the signal-to-noise ratio by modifying the sampling scheme was implemented. For SENSE, the VCC concept was mathematically equivalent to the standard phase-constrained formulation and therefore yielded identical results. In conjunction with k-space algorithms, the VCC concept provided more robust results when only a limited amount of calibration data were available. Additionally, VCC-GRAPPA reconstructed images provided spatial phase information with full resolution. Although both phase-constrained parallel MRI formulations are very similar conceptually, there exist important differences between image-domain and k-space domain reconstructions regarding the calibration robustness and the availability of high-resolution phase information. © 2015 Wiley Periodicals, Inc.

  10. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Computer aided tongue diagnosis has great potential to play an important role in traditional Chinese medicine (TCM). However, the majority of the existing tongue image analysis and classification methods are based on low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural networks (CNNs), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distributions. We introduce high dispersal and local response normalization operations to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method for tongue image classification in the TCM study.

  11. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
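
    As a hedged illustration of the kind of constrained alternating least squares referred to here, the sketch below factors a synthetic non-negative data matrix by alternating least-squares solves with clipping to non-negativity; it shows where constraint-induced bias can enter, but it does not implement the patent's bias-offsetting scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: D (samples x channels) = C_true @ S_true + noise, all non-negative.
n_samples, n_channels, n_factors = 60, 120, 2
C_true = rng.random((n_samples, n_factors))
S_true = rng.random((n_factors, n_channels))
D = C_true @ S_true + 0.01 * rng.normal(size=(n_samples, n_channels))

def constrained_als(D, k, n_iter=200):
    """Alternating least squares with non-negativity enforced by clipping."""
    C = rng.random((D.shape[0], k))
    for _ in range(n_iter):
        # Solve D ~ C S for S, then clip negative entries (the constraint step).
        S = np.linalg.lstsq(C, D, rcond=None)[0]
        S = np.clip(S, 0.0, None)
        # Solve D ~ C S for C, then clip.
        C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T
        C = np.clip(C, 0.0, None)
    return C, S

C, S = constrained_als(D, n_factors)
residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print("relative residual:", round(float(residual), 4))
```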

  12. Virasoro algebra action on integrable hierarchies and Virasoro constraints in matrix models

    International Nuclear Information System (INIS)

    Semikhatov, A.M.

    1991-01-01

    The action of the Virasoro algebra on integrable hierarchies of non-linear equations and on related objects ('Schroedinger' differential operators) is investigated. The method consists in pushing forward the Virasoro action to the wave function of a hierarchy, and then reconstructing its action on the dressing and Lax operators. This formulation allows one to observe a number of suggestive similarities between the structures involved in the description of the Virasoro algebra on the hierarchies and the structure of conformal field theory on the world-sheet. This includes, in particular, an 'off-shell' hierarchy version of operator products and of the Cauchy kernel. In relation to matrix models, which have been observed to be effectively described by integrable hierarchies subjected to Virasoro constraints, I propose to define general Virasoro-constrained hierarchies also in terms of dressing operators, by certain equations which carry the information of the hierarchy and the Virasoro algebra simultaneously and which suggest an interpretation as operator versions of recursion/loop equations in topological theories. These same equations provide a relation with integrable hierarchies with quantized spectral parameter introduced recently. The formulation in terms of dressing operators allows a scaling (continuum limit) of discrete (i.e. lattice) hierarchies with the Virasoro constraints into 'continuous' Virasoro-constrained hierarchies. In particular, the KP hierarchy subjected to the Virasoro constraints is recovered as a scaling limit of the Virasoro-constrained Toda hierarchy. The dressing operator method also makes it straightforward to identify the full symmetry algebra of Virasoro-constrained hierarchies, which is related to the family of W_∞(J) algebras introduced recently. (orig./HS)

  13. Numerical methods for engine-airframe integration

    International Nuclear Information System (INIS)

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment

  14. Fractional multilinear integrals with rough kernels on generalized weighted Morrey spaces

    Directory of Open Access Journals (Sweden)

    Akbulut Ali

    2016-01-01

    In this paper, we study the boundedness of fractional multilinear integral operators with rough kernels $T_{\Omega,\alpha}^{A_1,A_2,\ldots,A_k}$, which is a generalization of the higher-order commutator of the rough fractional integral, on the generalized weighted Morrey spaces $M_{p,\varphi}(w)$. We find sufficient conditions on the pair $(\varphi_1,\varphi_2)$ with $w \in A_{p,q}$ which ensure the boundedness of the operators $T_{\Omega,\alpha}^{A_1,A_2,\ldots,A_k}$ from $M_{p,\varphi_1}(w^p)$ to $M_{p,\varphi_2}(w^q)$ for $1 < p < q < \infty$. In all cases the conditions for the boundedness of the operator are given in terms of Zygmund-type integral inequalities on $(\varphi_1,\varphi_2)$ and $w$, which do not assume any monotonicity of $\varphi_1(x,r)$, $\varphi_2(x,r)$ in $r$.

  15. NBL-Davies-Gray weight titration method

    International Nuclear Information System (INIS)

    Hassell, C.

    1981-01-01

    The titration method for uranium consists of the following basic steps: reduction of U⁶⁺ to U⁴⁺ by Fe²⁺; selective oxidation of excess Fe²⁺ by HNO₃ with a Mo⁶⁺ catalyst, all in a strong phosphoric acid solution; and titration of the U⁴⁺ with standard dichromate after dilution. In this paper, the detailed procedure of the NBL method, its modification to a gravimetric system or weight titration technique, and the miniaturization of the NBL titrimetric method are discussed. The improved precision and accuracy (2 to 3 times better) of gravimetric titrant delivery have made it possible to reduce the amount of uranium taken for each analysis. At present, using gravimetric delivery, most samples are titrated in the 30 to 50 mg range. The improved precision has led to investigating the possibility of a scaled-down version of the basic method so as to reduce the volume of phosphoric acid waste generated. Because all reactions are carried out in the same vessel, this method can be automated. Analysts at NBL have been able to restrict error to 0.05% or better in the 30 to 100 mg range using the basic procedure

  16. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

    A numerical method has been proposed for resonance integral calculations and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in energy domain. The accuracy of the method has been tested by performing computations of resonance integrals for uranium dioxide isolated rods and comparing the results with empirical values. (orig.)

  17. Review of Statistical Learning Methods in Integrated Omics Studies (An Integrated Information Science).

    Science.gov (United States)

    Zeng, Irene Sui Lan; Lumley, Thomas

    2018-01-01

    Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inferences and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than the number of features, and the use of a Bayesian approach when there is prior knowledge to be integrated, are also included in the commentary. For the completeness of the review, a table of currently available software and packages from 23 publications for omics is summarized in the appendix.

  18. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

    Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The form into which the manifold correction methods finally evolved is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
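
    A minimal sketch of the simplest such correction, spatial scaling to conserve the orbital energy, is given below: after each ordinary leapfrog step of a Kepler orbit the velocity is rescaled onto the E = E0 manifold; the full method also maintains the angular momentum and Laplace integral and ultimately uses orbital longitudes, none of which is reproduced here.

```python
import numpy as np

mu = 1.0                                  # gravitational parameter (GM)

def accel(r):
    return -mu * r / np.linalg.norm(r) ** 3

def energy(r, v):
    return 0.5 * np.dot(v, v) - mu / np.linalg.norm(r)

def leapfrog_step(r, v, dt):
    v_half = v + 0.5 * dt * accel(r)
    r_new = r + dt * v_half
    v_new = v_half + 0.5 * dt * accel(r_new)
    return r_new, v_new

def energy_scaling_correction(r, v, E0):
    """Rescale the velocity so the integrated state lies on the E = E0 manifold."""
    target_kinetic = E0 + mu / np.linalg.norm(r)
    if target_kinetic <= 0.0:             # cannot correct such a mismatch by scaling v
        return v
    s = np.sqrt(2.0 * target_kinetic) / np.linalg.norm(v)
    return s * v

# Eccentric bound orbit, a = 1, e = 0.5, starting at perihelion.
e = 0.5
r = np.array([1.0 - e, 0.0])
v = np.array([0.0, np.sqrt(mu * (1.0 + e) / (1.0 - e))])
E0 = energy(r, v)

dt, n_steps = 1.0e-3, 20000
for _ in range(n_steps):
    r, v = leapfrog_step(r, v, dt)
    r, v = energy_scaling_correction(r, v, E0)

print("relative energy error:", abs((energy(r, v) - E0) / E0))
```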

  19. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    Science.gov (United States)

    Liu, Dong-jun; Li, Li

    2015-01-01

    For the issue of haze-fog, PM2.5 is the main influencing factor of haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average model (ARIMA), the Artificial Neural Networks (ANNs) model and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods based on the weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviation of each single prediction method and had better applicability. It provides a new prediction method for the air quality forecasting field. PMID:26110332
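
    A hedged sketch of the entropy-weighting step used to combine the three single forecasts; the accuracy measure, sample values, and variable names are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Sketch: entropy weights computed from in-sample accuracy of three models,
# then used to combine their predictions into a single (CFM-style) forecast.
obs = np.array([45.0, 52.0, 61.0, 58.0, 49.0])        # observed PM2.5 (assumed)
preds = np.array([
    [44.0, 55.0, 60.0, 57.0, 50.0],                   # model 1 (e.g. ARIMA)
    [47.0, 50.0, 63.0, 60.0, 47.0],                   # model 2 (e.g. ANN)
    [43.0, 53.0, 59.0, 56.0, 51.0],                   # model 3 (e.g. ESM)
])

# relative accuracy of each model at each time point, normalized per model
acc = 1.0 - np.abs(preds - obs) / obs
p = acc / acc.sum(axis=1, keepdims=True)

n = obs.size
entropy = -(p * np.log(p)).sum(axis=1) / np.log(n)    # entropy per model
weights = (1.0 - entropy) / (1.0 - entropy).sum()     # lower entropy -> larger weight

combined = weights @ preds                            # combined forecast
```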

  20. Integrated circuit and method of arbitration in a network on an integrated circuit.

    NARCIS (Netherlands)

    2011-01-01

    The invention relates to an integrated circuit and to a method of arbitration in a network on an integrated circuit. According to the invention, a method of arbitration in a network on an integrated circuit is provided, the network comprising a router unit, the router unit comprising a first input

  1. Measurement of residual stresses using fracture mechanics weight functions

    International Nuclear Information System (INIS)

    Fan, Y.

    2000-01-01

    A residual stress measurement method has been developed to quantify through-the-thickness residual stresses. Accurate measurement of residual stresses is crucial for many engineering structures. Fabrication processes such as welding and machining generate residual stresses that are difficult to predict. Residual stresses affect the integrity of structures through promoting failures due to brittle fracture, fatigue, stress corrosion cracking, and wear. In this work, the weight function theory of fracture mechanics is used to measure residual stresses. The weight function theory is an important development in computational fracture mechanics. Stress intensity factors for arbitrary stress distribution on the crack faces can be accurately and efficiently computed for predicting crack growth. This paper demonstrates that the weight functions are equally useful in measuring residual stresses. In this method, an artificial crack is created by a thin cut in a structure containing residual stresses. The cut relieves the residual stresses normal to the crack-face and allows the relieved residual stresses to deform the structure. Strain gages placed adjacent to the cut measure the relieved strains corresponding to incrementally increasing depths of the cut. The weight functions of the cracked body relate the measured strains to the residual stresses normal to the cut within the structure. The procedure details, such as numerical integration of the singular functions in applying the weight function method, will be discussed
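
    A hedged sketch of the singular integration step mentioned at the end of the abstract: the stress intensity factor K(a) = integral from 0 to a of sigma(x) h(x, a) dx has an integrable 1/sqrt(a - x) singularity at the crack tip, which the substitution x = a(1 - t^2) removes before ordinary Gauss-Legendre quadrature is applied. The weight function and residual stress profile below are simplified placeholders, not those used in the paper.

```python
import numpy as np

# Sketch: handle the square-root singularity of the weight function by the
# substitution x = a*(1 - t**2), then integrate with Gauss-Legendre quadrature.

def h(x, a):
    return np.sqrt(2.0 / (np.pi * (a - x)))           # leading singular term only (assumed)

def sigma(x):
    return 100.0 * (1.0 - x)                          # illustrative residual stress profile

def sif(a, n=64):
    t, w = np.polynomial.legendre.leggauss(n)         # nodes/weights on [-1, 1]
    t = 0.5 * (t + 1.0)                               # map to [0, 1]
    w = 0.5 * w
    x = a * (1.0 - t**2)                              # a - x = a*t**2, so h stays finite
    return np.sum(w * sigma(x) * h(x, a) * 2.0 * a * t)   # dx = 2*a*t dt

print(sif(0.2))   # stress intensity factor for a 0.2-deep cut (illustrative units)
```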

  2. Measurement of residual stresses using fracture mechanics weight functions

    International Nuclear Information System (INIS)

    Fan, Y.

    2001-01-01

    A residual stress measurement method has been developed to quantify through-the-thickness residual stresses. Accurate measurement of residual stresses is crucial for many engineering structures. Fabrication processes such as welding and machining generate residual stresses that are difficult to predict. Residual stresses affect the integrity of structures through promoting failures due to brittle fracture, fatigue, stress corrosion cracking, and wear. In this work, the weight function theory of fracture mechanics is used to measure residual stresses. The weight function theory is an important development in computational fracture mechanics. Stress intensity factors for arbitrary stress distribution on the crack faces can be accurately and efficiently computed for predicting crack growth. This paper demonstrates that the weight functions are equally useful in measuring residual stresses. In this method, an artificial crack is created by a thin cut in a structure containing residual stresses. The cut relieves the residual stresses normal to the crack-face and allows the relieved residual stresses to deform the structure. Strain gages placed adjacent to the cut measure the relieved strains corresponding to incrementally increasing depths of the cut. The weight functions of the cracked body relate the measured strains to the residual stresses normal to the cut within the structure. The procedure details, such as numerical integration of the singular functions in applying the weight function method, will be discussed. (author)

  3. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    RNA polymerase (RNAP) and the DNA template must rotate relative to each other during transcription elongation. In the cell, however, the components of the transcription apparatus may be subject to rotary constraints. For instance, the DNA is divided into topological domains that are delineated...... of torsionally constrained DNA by free RNAP. We asked whether or not a newly synthesized RNA chain would limit transcription elongation. For this purpose we developed a method to immobilize covalently closed circular DNA to streptavidin-coated beads via a peptide nucleic acid (PNA)-biotin conjugate in principle...... constrained. We conclude that transcription of a natural bacterial gene may proceed with high efficiency despite the fact that newly synthesized RNA is entangled around the template in the narrow confines of torsionally constrained supercoiled DNA....

  4. Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.

    Science.gov (United States)

    Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S

    2004-01-01

    MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along orthogonal directions (r, phi, z), and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three axis torque-balanced gradient coil for rat imaging was developed based on this method, with an efficiency of 2.13, 2.08, and 4.12 mT.m(-1).A(-1) along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. This method has also been applied to the design of a gradient coil for the human brain, employing remote current return paths. The benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths. Copyright 2003 Wiley-Liss, Inc.
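
    A loose sketch of the idea of weighting step sizes per coordinate direction with an exponentially weighted (momentum-like) history of the gradients, so that parameters moving at very different rates advance at comparable speeds; this is an assumed illustration in the spirit of the approach, not the authors' exact MW-CGD update or coil objective.

```python
import numpy as np

# Sketch: per-direction momentum weighting of step sizes on a toy anisotropic
# objective standing in for the (r, phi, z) coil functional.

def grad(x):
    scales = np.array([1.0, 25.0, 4.0])   # assumed anisotropy between directions
    return 2.0 * scales * x

x = np.array([1.0, 1.0, 1.0])             # (r, phi, z)-like parameters
momentum = np.zeros_like(x)
lr, beta = 0.01, 0.9

for _ in range(500):
    g = grad(x)
    momentum = beta * momentum + (1.0 - beta) * np.abs(g)
    # directions with persistently large gradients are damped so that all
    # coordinates move at similar rates and constraints are not broken
    step = lr * g / (momentum + 1e-12)
    x -= step
```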

  5. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and each air route are computed. Thus, by assigning both the vertices and the edges those aircraft counts, a weighted graph model comes into being. Accordingly, the airspace configuration problem is described as a weighted graph partitioning problem. The problem is then solved by a graph partitioning algorithm, which is a mixture of a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation result shows that the designed sectors satisfy not only the workload-balancing condition but also constraints such as convexity, connectivity, and minimum distance.

  6. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that based on a set of conditional probabilities can help in choosing the large numbers of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....

  7. Evaluation of cleaner production options in dyeing and printing industry: Using combination weighting method

    Science.gov (United States)

    Kang, Hong; Zhang, Yun; Hou, Haochen; Sun, Xiaoyang; Qin, Chenglu

    2018-03-01

    The textile industry has a high environmental impact, so implementing a cleaner production audit is an effective way to achieve energy conservation and emissions reduction. However, the evaluation method in the current cleaner production audit divides the evaluation of cleaner production options (CPOs) into two separate parts: environment and economy. In this study, the evaluation index system was constructed from three criteria: environmental benefits, economic benefits and product performance; the weights of five indicators were determined by combining the weights from the entropy method and the factor weight sorting method. Then the efficiencies were evaluated comprehensively. The results showed that the best alkali recovery option was the nanofiltration membrane method (S=0.80).

  8. An intelligent sales forecasting system through integration of artificial neural networks and fuzzy neural networks with fuzzy weight elimination.

    Science.gov (United States)

    Kuo, R J; Wu, P; Wang, C P

    2002-09-01

    Sales forecasting plays a very prominent role in business strategy. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average (ARMA) models. However, sales forecasting is very complicated owing to the influence of internal and external environments. Recently, artificial neural networks (ANNs) have also been applied in sales forecasting owing to their promising performance in the areas of control and pattern recognition. However, further improvement is still necessary since unique circumstances, e.g. promotion, cause a sudden change in the sales pattern. Thus, this study utilizes a proposed fuzzy neural network (FNN), which is able to eliminate the unimportant weights, to learn fuzzy IF-THEN rules obtained from marketing experts with respect to promotion. The result from the FNN is further integrated with the time series data through an ANN. Both the simulated and real-world problem results show that the FNN with weight elimination can have a lower training error compared with the regular FNN. Moreover, the real-world problem results also indicate that the proposed estimation system outperforms the conventional statistical method and a single ANN in accuracy.

  9. Configuration mixing calculations with basis states obtained from constrained variational methods

    International Nuclear Information System (INIS)

    Miller, H.G.; Schroeder, H.P.

    1982-01-01

    Configuration mixing calculations have been performed in 20Ne using basis states which are energetically the lowest-lying solutions of the constrained Hartree-Fock equations with an angular momentum constraint of the form <J^2> = J(J + 1). For J = 6, very good agreement with the lower-lying 6+ states in an exact eigenvalue spectrum has been obtained with relatively few PAV-K mixed CHF basis states. (orig.)

  10. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared...... to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour....

  11. Evaluating the sensitization potential of surfactants: integrating data from the local lymph node assay, guinea pig maximization test, and in vitro methods in a weight-of-evidence approach.

    Science.gov (United States)

    Ball, Nicholas; Cagen, Stuart; Carrillo, Juan-Carlos; Certa, Hans; Eigler, Dorothea; Emter, Roger; Faulhammer, Frank; Garcia, Christine; Graham, Cynthia; Haux, Carl; Kolle, Susanne N; Kreiling, Reinhard; Natsch, Andreas; Mehling, Annette

    2011-08-01

    An integral part of hazard and safety assessments is the estimation of a chemical's potential to cause skin sensitization. Currently, only animal tests (OECD 406 and 429) are accepted in a regulatory context. Nonanimal test methods are being developed and formally validated. In order to gain more insight into the responses induced by eight exemplary surfactants, a battery of in vivo and in vitro tests were conducted using the same batch of chemicals. In general, the surfactants were negative in the GPMT, KeratinoSens and hCLAT assays and none formed covalent adducts with test peptides. In contrast, all but one was positive in the LLNA. Most were rated as being irritants by the EpiSkin assay with the additional endpoint, IL1-alpha. The weight of evidence based on this comprehensive testing indicates that, with one exception, they are non-sensitizing skin irritants, confirming that the LLNA tends to overestimate the sensitization potential of surfactants. As results obtained from LLNAs are considered as the gold standard for the development of new nonanimal alternative test methods, results such as these highlight the necessity to carefully evaluate the applicability domains of test methods in order to develop reliable nonanimal alternative testing strategies for sensitization testing. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Weight loss methods and changes in eating habits among successful weight losers.

    Science.gov (United States)

    Soini, Sirpa; Mustajoki, Pertti; Eriksson, Johan G

    2016-01-01

    Changes in several lifestyle-related factors are required for successful long-term weight loss. Identification of these factors is of major importance from a public health point of view. This study was based upon findings from the Finnish Weight Control Registry (FWCR), a web-based registry. In total, 316 people were recruited and 184 met the study inclusion criteria. The aims of this study were to assess means and typical changes in eating habits associated with successful long-term weight loss. Half of the participants (48%) reported that they lost weight slowly, primarily with dietary changes. Self-weighing frequency was high: 92% weighed themselves at least once a week during the weight loss phase, and 75% during the maintenance phase. Dietary aspects associated with successful weight loss and weight maintenance included an increase in intake of vegetables, a reduction in frequency of eating candies and fast food, regular meal frequency and application of the Plate model. Both slow and fast weight loss may lead to successful long-term results and weight maintenance. A decrease in energy intake was achieved by reducing intake of energy-dense food, applying the Plate model and by regular meal frequency. Key messages: Successful long-term weight loss is associated with a reduction in intake of energy-dense food. A more regular meal frequency and a high frequency of self-weighing seem to be helpful.

  13. A Novel Approach to Speaker Weight Estimation Using a Fusion of the i-vector and NFA Frameworks

    DEFF Research Database (Denmark)

    Poorjam, Amir Hossein; Bahari, Mohamad Hasan; Van hamme, Hogo

    2017-01-01

    This paper proposes a novel approach for automatic speaker weight estimation from spontaneous telephone speech signals. In this method, each utterance is modeled using the i-vector framework which is based on the factor analysis on Gaussian Mixture Model (GMM) mean supervectors, and the Non-negative Factor Analysis (NFA) framework which is based on a constrained factor analysis on GMM weight supervectors. Then, the available information in both Gaussian means and Gaussian weights is exploited through a feature-level fusion of the i-vectors and the NFA vectors. Finally, a least-squares support vector regression is employed to estimate the weight of speakers from the given utterances. The proposed approach is evaluated on spontaneous telephone speech signals of National Institute of Standards and Technology 2008 and 2010 Speaker Recognition Evaluation corpora. To investigate the effectiveness...

  14. Cross section evaluation by spinor integration: The massless case in 4D

    International Nuclear Information System (INIS)

    Feng Bo; Huang Rijun; Jia Yin; Luo Mingxing; Wang Honghui

    2010-01-01

    To get the total cross section of one interaction from its amplitude M, one needs to integrate |M|^2 over the phase spaces of all outgoing particles. Starting from this paper, we will propose a new method to perform such integrations, which is inspired by the reduced phase space integration of the one-loop unitarity cut developed in the last few years. The new method reduces one constrained three-dimensional momentum-space integration to a one-dimensional integration, plus one possible Feynman parameter integration. There is no need to specify a reference frame in our calculation, since every step is manifestly Lorentz invariant by the new method. The current paper is the first paper of a series for the new method. Here we have exclusively focused on massless particles in 4D. There is no need to carve out a complicated integration region in the phase space for this particular simple case because the integration region is always simply [0,1].

  15. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    Science.gov (United States)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that the reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Monte Carlo Markov chain approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the Mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the Mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how the future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.

  16. Path integral for stochastic inflation: Nonperturbative volume weighting, complex histories, initial conditions, and the end of inflation

    Science.gov (United States)

    Gratton, Steven

    2011-09-01

    In this paper we present a path integral formulation of stochastic inflation. Volume weighting can be naturally implemented from this new perspective in a very straightforward way when compared to conventional Langevin approaches. With an in-depth study of inflation in a quartic potential, we investigate how the inflaton evolves and how inflation typically ends both with and without volume weighting. The calculation can be carried to times beyond those accessible to conventional Fokker-Planck approaches. Perhaps unexpectedly, complex histories sometimes emerge with volume weighting. The reward for this excursion into the complex plane is an insight into how volume-weighted inflation both loses memory of initial conditions and ends via slow roll. The slow-roll end of inflation mitigates certain “Youngness Paradox”-type criticisms of the volume-weighted paradigm. Thus it is perhaps time to rehabilitate proper-time volume weighting as a viable measure for answering at least some interesting cosmological questions.

  17. Path integral for stochastic inflation: Nonperturbative volume weighting, complex histories, initial conditions, and the end of inflation

    International Nuclear Information System (INIS)

    Gratton, Steven

    2011-01-01

    In this paper we present a path integral formulation of stochastic inflation. Volume weighting can be naturally implemented from this new perspective in a very straightforward way when compared to conventional Langevin approaches. With an in-depth study of inflation in a quartic potential, we investigate how the inflaton evolves and how inflation typically ends both with and without volume weighting. The calculation can be carried to times beyond those accessible to conventional Fokker-Planck approaches. Perhaps unexpectedly, complex histories sometimes emerge with volume weighting. The reward for this excursion into the complex plane is an insight into how volume-weighted inflation both loses memory of initial conditions and ends via slow roll. The slow-roll end of inflation mitigates certain ''Youngness Paradox''-type criticisms of the volume-weighted paradigm. Thus it is perhaps time to rehabilitate proper-time volume weighting as a viable measure for answering at least some interesting cosmological questions.

  18. Transient Stability Promotion by FACTS Controller Based on Adaptive Inertia Weight Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Ghazanfar Shahgholian

    2018-01-01

    Full Text Available This paper examines the influence of the Static Synchronous Series Compensator (SSSC) on oscillation damping control in the network. The performance of a Flexible AC Transmission System (FACTS) controller highly depends upon its parameters and appropriate location in the network. A new Adaptive Inertia Weight Particle Swarm Optimization (AIWPSO) method is employed to design the parameters of the SSSC-based controller. In the proposed controller, an appropriate power system signal, such as the rotor angle, is used as the feedback. The AIWPSO technique has high flexibility and a balanced mechanism for local and global search. The proposed controller is compared with a Genetic Algorithm (GA)-based controller, which confirms its operation. To demonstrate the effectiveness of the proposed controller, simulations are carried out on single-machine infinite-bus and multi-machine systems under multiple disturbances.
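
    A minimal sketch of particle swarm optimization with an adaptive inertia weight; the success-rate-based adaptation rule and the quadratic stand-in objective are assumptions, since the paper's exact adaptation rule and SSSC damping objective are not reproduced here.

```python
import numpy as np

# Sketch: PSO where the inertia weight adapts to the fraction of particles
# that improved their personal best in the current iteration.

def objective(x):                          # stand-in for the controller-design cost
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(0)
n, dim = 30, 4
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]
c1 = c2 = 2.0
w_min, w_max = 0.4, 0.9                    # assumed inertia-weight bounds

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vals = objective(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]
    w = w_min + (w_max - w_min) * improved.mean()   # adaptive inertia weight
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
```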

  19. The QAP weighted network analysis method and its application in international services trade

    Science.gov (United States)

    Xu, Helian; Cheng, Long

    2016-04-01

    Based on QAP (Quadratic Assignment Procedure) correlation and complex network theory, this paper puts forward a new method named the QAP Weighted Network Analysis Method. The core idea of the method is to analyze influences among relations in a social or economic group by building a QAP weighted network of networks of relations. In the QAP weighted network, a node depicts a relation and an undirected edge exists between any pair of nodes if there is significant correlation between the two relations. As an application of the QAP weighted network, we study international services trade by using the QAP weighted network, in which nodes depict 10 kinds of services trade relations. After the analysis of international services trade by the QAP weighted network, and by using distance indicators, a hierarchy tree and a minimum spanning tree, the conclusion shows that: Firstly, significant correlation exists in all services trade, and the development of any one service trade will stimulate the other nine. Secondly, as economic globalization deepens, correlations in all services trade have been strengthened continually, and clustering effects exist in those services trade. Thirdly, transportation services trade, computer and information services trade and communication services trade have the most influence and are at the core of all services trade.
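
    A hedged sketch of the QAP correlation test that underlies the weighted network construction: two trade-relation matrices are correlated over their off-diagonal entries, and significance is assessed by re-correlating after applying the same random permutation to the rows and columns of one matrix. The function below is a generic illustration; the services-trade matrices themselves are not reproduced.

```python
import numpy as np

# Sketch: QAP correlation with a permutation p-value between two relation matrices.
def qap_correlation(A, B, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    mask = ~np.eye(n, dtype=bool)                      # off-diagonal entries only
    obs = np.corrcoef(A[mask], B[mask])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        Bp = B[np.ix_(p, p)]                           # permute rows and columns together
        if abs(np.corrcoef(A[mask], Bp[mask])[0, 1]) >= abs(obs):
            count += 1
    return obs, count / n_perm                         # correlation and permutation p-value
```

    In the QAP weighted network, an edge would then be drawn between two relation-nodes whenever the permutation p-value indicates a significant correlation.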

  20. 3-D thermal weight function method and multiple virtual crack extension technique for thermal shock problems

    International Nuclear Information System (INIS)

    Lu Yanlin; Zhou Xiao; Qu Jiadi; Dou Yikang; He Yinbiao

    2005-01-01

    An efficient scheme, 3-D thermal weight function (TWF) method, and a novel numerical technique, multiple virtual crack extension (MVCE) technique, were developed for determination of histories of transient stress intensity factor (SIF) distributions along 3-D crack fronts of a body subjected to thermal shock. The TWF is a universal function, which is dependent only on the crack configuration and body geometry. TWF is independent of time during thermal shock, so the whole history of transient SIF distributions along crack fronts can be directly calculated through integration of the products of TWF and transient temperatures and temperature gradients. The repeated determinations of the distributions of stresses (or displacements) fields for individual time instants are thus avoided in the TWF method. An expression of the basic equation for the 3-D universal weight function method for Mode I in an isotropic elastic body is derived. This equation can also be derived from Bueckner-Rice's 3-D WF formulations in the framework of transformation strain. It can be understood from this equation that the so-called thermal WF is in fact coincident with the mechanical WF except for some constants of elasticity. The details and formulations of the MVCE technique are given for elliptical cracks. The MVCE technique possesses several advantages. The specially selected linearly independent VCE modes can directly be used as shape functions for the interpolation of unknown SIFs. As a result, the coefficient matrix of the final system of equations in the MVCE method is a triple-diagonal matrix and the values of the coefficients on the main diagonal are large. The system of equations has good numerical properties. The number of linearly independent VCE modes that can be introduced in a problem is unlimited. Complex situations in which the SIFs vary dramatically along crack fronts can be numerically well simulated by the MVCE technique. An integrated system of programs for solving the

  1. CT energy weighting in the presence of scatter and limited energy resolution

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat

    2010-01-01

    Purpose: Energy-resolved CT has the potential to improve the contrast-to-noise ratio (CNR) through optimal weighting of photons detected in energy bins. In general, optimal weighting gives higher weight to the lower energy photons that contain the most contrast information. However, low-energy photons are generally most corrupted by scatter and spectrum tailing, an effect caused by the limited energy resolution of the detector. This article first quantifies the effects of spectrum tailing on energy-resolved data, which may also be beneficial for material decomposition applications. Subsequently, the combined effects of energy weighting, spectrum tailing, and scatter are investigated through simulations. Methods: The study first investigated the effects of spectrum tailing on the estimated attenuation coefficients of homogeneous slab objects. Next, the study compared the CNR and artifact performance of images simulated with varying levels of scatter and spectrum tailing effects, and reconstructed with energy integrating, photon-counting, and two optimal linear weighting methods: Projection-based and image-based weighting. Realistic detector energy-response functions were simulated based on a previously proposed model. The energy-response functions represent the probability that a photon incident on the detector at a particular energy will be detected at a different energy. Realistic scatter was simulated with Monte Carlo methods. Results: Spectrum tailing resulted in a negative shift in the estimated attenuation coefficient of slab objects compared to an ideal detector. The magnitude of the shift varied with material composition, increased with material thickness, and decreased with photon energy. Spectrum tailing caused cupping artifacts and CT number inaccuracies in images reconstructed with optimal energy weighting, and did not impact images reconstructed with photon counting weighting. Spectrum tailing did not significantly impact the CNR in reconstructed images
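
    A hedged sketch of optimal linear energy weighting compared with photon-counting and energy-integrating weighting; the bin counts and mean energies are illustrative values, not the simulated detector data of the paper.

```python
import numpy as np

# Sketch: weight energy bins by contrast over noise variance and compare the
# resulting CNR with photon-counting and energy-integrating weightings.
counts_bg   = np.array([5000., 4000., 3000., 2000.])   # background counts per bin (assumed)
counts_feat = np.array([4600., 3750., 2880., 1950.])   # counts behind the contrast feature

contrast = counts_bg - counts_feat                      # signal difference per bin
variance = counts_bg + counts_feat                      # Poisson noise variance per bin

w_optimal  = contrast / variance                        # CNR-optimal linear weights
w_counting = np.ones_like(contrast)                     # photon-counting weighting
bin_energy = np.array([30., 50., 70., 90.])             # mean bin energies in keV (assumed)
w_integrating = bin_energy                              # energy-integrating weighting

def cnr(w):
    return (w @ contrast) / np.sqrt(w**2 @ variance)

print(cnr(w_optimal), cnr(w_counting), cnr(w_integrating))
```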

  2. Evaluation and selection of energy technologies using an integrated graph theory and analytic hierarchy process methods

    Directory of Open Access Journals (Sweden)

    P. B. Lanjewar

    2016-06-01

    Full Text Available The evaluation and selection of energy technologies involve a large number of attributes whose selection and weighting is decided in accordance with the social, environmental, technical and economic framework. In the present work an integrated multiple attribute decision making methodology is developed by combining graph theory and analytic hierarchy process methods to deal with the evaluation and selection of energy technologies. The energy technology selection attributes digraph enables a quick visual appraisal of the energy technology selection attributes and their interrelationships. The preference index provides a total objective score for comparison of energy technologies alternatives. Application of matrix permanent offers a better appreciation of the considered attributes and helps to analyze the different alternatives from combinatorial viewpoint. The AHP is used to assign relative weights to the attributes. Four examples of evaluation and selection of energy technologies are considered in order to demonstrate and validate the proposed method.

  3. Metal artifact reduction in x-ray computed tomography (CT) by constrained optimization

    International Nuclear Information System (INIS)

    Zhang Xiaomeng; Wang Jing; Xing Lei

    2011-01-01

    Purpose: The streak artifacts caused by metal implants have long been recognized as a problem that limits various applications of CT imaging. In this work, the authors propose an iterative metal artifact reduction algorithm based on constrained optimization. Methods: After the shape and location of metal objects in the image domain are determined automatically by the binary metal identification algorithm and the segmentation of ''metal shadows'' in the projection domain is done, constrained optimization is used for image reconstruction. It minimizes a predefined function that reflects a priori knowledge of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available metal-shadow-excluded projection data, with image non-negativity enforced. The minimization problem is solved through the alternation of projection-onto-convex-sets and the steepest gradient descent of the objective function. The constrained optimization algorithm is evaluated with a penalized smoothness objective. Results: The study shows that the proposed method is capable of significantly reducing metal artifacts, suppressing noise, and improving soft-tissue visibility. It outperforms FBP-type methods as well as the ART and EM methods and yields artifact-free images. Conclusions: Constrained optimization is an effective way to deal with CT reconstruction with embedded metal objects. Although the method is presented in the context of metal artifacts, it is applicable to general ''missing data'' image reconstruction problems.

  4. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
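
    A simplified two-dimensional sketch of the two-step wind-off fit described above: the model weight is estimated first from the force components, and that estimate is then used in the least squares fit of the center of gravity coordinates from the moments. The load components, sign conventions, and synthetic pitch angles are assumptions for illustration, not the paper's axis system or explicit estimators.

```python
import numpy as np

# Sketch: two-step least squares fit of weight, then CG, from wind-off points.
theta = np.deg2rad(np.array([-10., -5., 0., 5., 10.]))   # wind-off pitch angles (assumed)
W_true, x_cg, z_cg = 500.0, 0.20, 0.05                    # synthetic "truth" for the demo

normal = -W_true * np.cos(theta)                          # synthetic normal-force readings
axial  =  W_true * np.sin(theta)                          # synthetic axial-force readings
moment = -W_true * (x_cg * np.cos(theta) + z_cg * np.sin(theta))

# step 1: least squares estimate of the weight from both force components
A1 = np.concatenate([-np.cos(theta), np.sin(theta)])[:, None]
b1 = np.concatenate([normal, axial])
W_hat = np.linalg.lstsq(A1, b1, rcond=None)[0][0]

# step 2: least squares estimate of the CG coordinates using W_hat as input
A2 = np.column_stack([-W_hat * np.cos(theta), -W_hat * np.sin(theta)])
cg_hat = np.linalg.lstsq(A2, moment, rcond=None)[0]
print(W_hat, cg_hat)
```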

  5. Nonlinear moments method for the isotropic Boltzmann equation and the invariance of collision integral

    International Nuclear Information System (INIS)

    Ehnder, A.Ya.; Ehnder, I.A.

    1999-01-01

    A new approach to developing a nonlinear moment method for solving the Boltzmann equation is presented. The approach is based on the invariance of the collision integral with respect to the selection of the basis functions. Sonin polynomials with a Maxwellian weighting function are selected to serve as the basis functions. It is shown that, for arbitrary interaction cross sections, the matrix elements corresponding to the moments of the nonlinear collision integral are related by simple recurrence relations, which make it possible to express all nonlinear matrix elements in terms of the linear ones. As a result, a highly efficient numerical scheme for calculating the nonlinear matrix elements is obtained. The presented approach makes it possible both to calculate relaxation processes in the high-velocity range and to address more complex kinetic problems.

  6. A channel-by-channel method of reducing the errors associated with peak area integration

    International Nuclear Information System (INIS)

    Luedeke, T.P.; Tripard, G.E.

    1996-01-01

    A new method of reducing the errors associated with peak area integration has been developed. This method utilizes the signal content of each channel as an estimate of the overall peak area. These individual estimates can then be weighted according to the precision with which each estimate is known, producing an overall area estimate. Experimental measurements were performed on a small peak sitting on a large background, and the results compared to those obtained from a commercial software program. Results showed a marked decrease in the spread of results around the true value (obtained by counting for a long period of time), and a reduction in the statistical uncertainty associated with the peak area. (orig.)
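
    A hedged sketch of the channel-by-channel idea: each background-subtracted channel content, divided by the fraction of the peak expected in that channel, gives an independent estimate of the total area, and the estimates are combined with inverse-variance weights. The Gaussian peak shape, flat background, and synthetic counts are illustrative assumptions.

```python
import numpy as np

# Sketch: per-channel area estimates combined with inverse-variance weights.
rng = np.random.default_rng(1)
ch = np.arange(90, 111)
shape = np.exp(-0.5 * ((ch - 100.0) / 2.0) ** 2)
shape /= shape.sum()                              # fraction of the peak per channel
true_area, bkg = 500.0, 200.0
counts = rng.poisson(true_area * shape + bkg)     # small peak on a large background

net = counts - bkg                                # background-subtracted channel contents
estimates = net / shape                           # per-channel estimates of the total area
variances = counts / shape**2                     # Poisson variance propagated per channel

weights = 1.0 / variances
area = np.sum(weights * estimates) / np.sum(weights)
sigma = 1.0 / np.sqrt(np.sum(weights))
print(area, sigma)
```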

  7. Geographically weighted regression based methods for merging satellite and gauge precipitation

    Science.gov (United States)

    Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo

    2018-03-01

    Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
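
    A hedged sketch of a single geographically weighted regression fit with a bi-square spatial kernel, as used in step (2) to model the satellite bias from local covariates; the covariates, bandwidth, and synthetic data are assumptions, not the calibrated values of the study.

```python
import numpy as np

# Sketch: one GWR prediction with a bi-square kernel over gauge locations.
def bisquare(d, bandwidth):
    w = (1.0 - (d / bandwidth) ** 2) ** 2
    return np.where(d < bandwidth, w, 0.0)

def gwr_predict(x0, X, y, coords, coord0, bandwidth):
    d = np.linalg.norm(coords - coord0, axis=1)
    w = bisquare(d, bandwidth)
    Xa = np.column_stack([np.ones(len(X)), X])        # intercept + covariates
    W = np.diag(w)
    beta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)
    return np.concatenate([[1.0], x0]) @ beta

# bias = P_O - P_S at gauge locations, regressed on assumed local covariates
rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, (50, 2))                 # gauge coordinates in km (assumed)
X = rng.uniform(0, 1, (50, 2))                        # covariates, e.g. elevation and P_S (scaled)
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.05, 50)

bias_hat = gwr_predict(np.array([0.4, 0.6]), X, y, coords, np.array([50., 50.]), 40.0)
print(bias_hat)
```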

  8. Laterally constrained inversion for CSAMT data interpretation

    Science.gov (United States)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent noise insensitive. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm, simulated annealing (SA), in the watershed shows that although both methods deliver similarly good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.

  9. Measurement of the top quark mass using neutrino φ weighting method in dilepton events at CDF

    International Nuclear Information System (INIS)

    Bellettini, G.; Budagov, Yu.; Chlachidze, G.; Glagolev, V.; Prakoshyn, F.; Sisakyan, A.; Suslov, I.; Giokaris, N.; Velev, G.

    2005-01-01

    We report on a measurement of the top quark mass in the dilepton channel of tt bar events from pp bar collisions at √s = 1.96 TeV. The integrated luminosity of the data sample is 340 pb^-1. 33 events were reconstructed according to the tt bar hypothesis and fitted as a superposition of signal and background. Using the background-constrained fit (with 11.6 ± 2.1 events expected from background), M_top = 169.8 +9.2/-9.3 (stat.) GeV/c^2 is measured. The estimate of the systematic error is ± 3.8 GeV/c^2

  10. Constrained non-rigid registration for whole body image registration: method and validation

    Science.gov (United States)

    Li, Xia; Yankeelov, Thomas E.; Peterson, Todd E.; Gore, John C.; Dawant, Benoit M.

    2007-03-01

    3D intra- and inter-subject registration of image volumes is important for tasks that include measurements and quantification of temporal/longitudinal changes, atlas-based segmentation, deriving population averages, or voxel and tensor-based morphometry. A number of methods have been proposed to tackle this problem but few of them have focused on the problem of registering whole body image volumes acquired either from humans or small animals. These image volumes typically contain a large number of articulated structures, which makes registration more difficult than the registration of head images, to which the vast majority of registration algorithms have been applied. To solve this problem, we have previously proposed an approach, which initializes an intensity-based non-rigid registration algorithm with a point based registration technique [1, 2]. In this paper, we introduce new constraints into our non-rigid registration algorithm to prevent the bones from being deformed inaccurately. Results we have obtained show that the new constrained algorithm leads to better registration results than the previous one.

  11. Memory-efficient calculations of adjoint-weighted tallies by the Monte Carlo Wielandt method

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Shim, Hyung Jin

    2016-01-01

    Highlights: • The MC Wielandt method is applied to reduce memory for the adjoint estimation. • The adjoint-weighted kinetics parameters are estimated in the MC Wielandt calculations. • The MC S/U analyses are conducted in the MC Wielandt calculations. - Abstract: The current Monte Carlo (MC) adjoint-weighted tally techniques based on the iterated fission probability (IFP) concept require a memory amount which is proportional to the numbers of the adjoint-weighted tallies and histories per cycle to store history-wise tally estimates during the convergence of the adjoint flux. Especially the conventional MC adjoint-weighted perturbation (AWP) calculations for the nuclear data sensitivity and uncertainty (S/U) analysis suffer from the huge memory consumption to realize the IFP concept. In order to reduce the memory requirement drastically, we present a new adjoint estimation method in which the memory usage is irrelevant to the numbers of histories per cycle by applying the IFP concept for the MC Wielandt calculations. The new algorithms for the adjoint-weighted kinetics parameter estimation and the AWP calculations in the MC Wielandt method are implemented in a Seoul National University MC code, McCARD and its validity is demonstrated in critical facility problems. From the comparison of the nuclear data S/U analyses, it is demonstrated that the memory amounts to store the sensitivity estimates in the proposed method become negligibly small.

  12. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    Science.gov (United States)

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative

  13. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-02-12

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative

  14. A counting-weighted calibration method for a field-programmable-gate-array-based time-to-digital converter

    International Nuclear Information System (INIS)

    Chen, Yuan-Ho

    2017-01-01

    In this work, we propose a counting-weighted calibration method for field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The performance of the proposed CWD-TDC with regard to linearity far exceeds that of TDC with a traditional tapped delay line (TDL) architecture, without the need for nonlinearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved time resolution of 60 ps with integral non-linearity (INL) and DNL of [−0.54, 0.24] and [−0.66, 0.65] least-significant-bit (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
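
    A hedged sketch of the code-density calculation that underlies a counting-based calibration: hits accumulated in each TDC bin give the measured bin widths in LSB, from which DNL and INL follow. The synthetic bin widths and hit statistics below are illustrative, not the Virtex-5 measurements.

```python
import numpy as np

# Sketch: DNL and INL from a code-density histogram of a TDC delay line.
rng = np.random.default_rng(3)
n_bins = 64
true_width = rng.uniform(0.6, 1.4, n_bins)             # non-uniform delay-cell widths (assumed)
true_width *= n_bins / true_width.sum()                # normalize: mean width = 1 LSB

hits = rng.poisson(10000 * true_width / n_bins)        # code-density histogram of random hits
measured_width = hits / hits.sum() * n_bins            # measured bin widths in LSB

dnl = measured_width - 1.0                             # differential non-linearity per bin
inl = np.cumsum(dnl)                                   # integral non-linearity
print(dnl.min(), dnl.max(), inl.min(), inl.max())
```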

  15. A counting-weighted calibration method for a field-programmable-gate-array-based time-to-digital converter

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yuan-Ho, E-mail: chenyh@mail.cgu.edu.tw [Department of Electronic Engineering, Chang Gung University, Tao-Yuan 333, Taiwan (China); Department of Radiation Oncology, Chang Gung Memorial Hospital, Tao-Yuan 333, Taiwan (China); Center for Reliability Sciences and Technologies, Chang Gung University, Tao-Yuan 333, Taiwan (China)

    2017-05-11

    In this work, we propose a counting-weighted calibration method for field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The performance of the proposed CWD-TDC with regard to linearity far exceeds that of TDC with a traditional tapped delay line (TDL) architecture, without the need for nonlinearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved time resolution of 60 ps with integral non-linearity (INL) and DNL of [−0.54, 0.24] and [−0.66, 0.65] least-significant-bit (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.

  16. Top quark mass measurement in the 2.9 fb-1 tight lepton and isolated track sample using neutrinoφ weighting method

    International Nuclear Information System (INIS)

    Bellettini, G.; Trovato, M.; Budagov, Yu.; Glagolev, V.; Sisakyan, A.; Suslov, I.; Chlachidze, G.; Velev, G.

    2008-01-01

    We report on a measurement of the top quark mass with tt bar dilepton events produced in pp bar collisions at the Fermilab Tevatron (√s = 1.96 TeV) and collected by the CDF II detector. Events with a charged muon or electron and an isolated track are selected as tt bar candidates. A sample of 328 events, corresponding to an integrated luminosity of 2.9 fb^-1, is obtained after all selection cuts. The top quark mass is reconstructed by minimizing a χ^2 function under the assumption of the tt bar dilepton hypothesis. The unconstrained kinematics of dilepton events is taken into account by a scan over the space of possibilities for the azimuthal angles of the neutrinos, and a preferred mass is built for each event. In order to extract the top quark mass, a likelihood fit of the preferred mass distribution in data to a weighted sum of signal and background probability density functions is performed. Using the background-constrained fit with 145.0 ± 17.3 events expected from background, we measure m_t = 165.5 +3.4/-3.3 (stat.) GeV/c^2. The estimate of the systematic error is 3.1 GeV/c^2

  17. Achieving Integration in Mixed Methods Designs—Principles and Practices

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835

  18. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  19. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  20. Numerical method of singular problems on singular integrals

    International Nuclear Information System (INIS)

    Zhao Huaiguo; Mou Zongze

    1992-02-01

    As the first part of numerical research on singular problems, a numerical method is proposed for singular integrals. It is shown that the procedure is quite powerful for physics calculations involving singularities, such as the plasma dispersion function. Useful quadrature formulas for some classes of singular integrals are derived. In general, integrals with more complex singularities can also be handled easily by this method.

  1. A method of identifying and weighting indicators of energy efficiency assessment in Chinese residential buildings

    International Nuclear Information System (INIS)

    Yang Yulan; Li Baizhan; Yao Runming

    2010-01-01

    This paper describes a method of identifying and weighting indicators for assessing the energy efficiency of residential buildings in China. A list of indicators for energy efficiency assessment of residential buildings in the hot summer and cold winter zone of China is proposed, which provides an important reference for policy making on building energy efficiency assessment. The research method combines a wide-ranging literature review with a questionnaire survey of experts in the field. The group analytic hierarchy process (group AHP) has been used to weight the identified indicators. The survey sample is large enough to support the results, which have been validated by consistency estimation. The proposed method could also be extended to develop weighted indicators for other climate zones in China. - Research highlights: →Method of identifying indicators of building energy efficiency assessment. →The group AHP method for weighting indicators. →Method of solving multi-criteria decision-making problems of choice and prioritisation in policy making.
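    As a concrete illustration of the weighting step, the sketch below computes group AHP weights by aggregating individual pairwise-comparison matrices with an element-wise geometric mean and extracting the principal eigenvector, together with Saaty's consistency ratio. The aggregation rule, the example matrices and the function names are assumptions for illustration; the paper's own survey data and procedure may differ in detail.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random indices

def ahp_weights(pairwise):
    # Principal-eigenvector weights and the consistency ratio of the AHP.
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)
    return w, ci / RI[n]

def group_ahp(matrices):
    # Aggregate individual judgements by the element-wise geometric mean,
    # a common way of forming the group matrix before extracting weights.
    return ahp_weights(np.exp(np.mean(np.log(np.array(matrices)), axis=0)))

a = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])   # expert 1 (illustrative)
b = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]])   # expert 2 (illustrative)
weights, cr = group_ahp([a, b])
print("weights:", weights, "consistency ratio:", cr)
```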

  2. A method of identifying and weighting indicators of energy efficiency assessment in Chinese residential buildings

    Energy Technology Data Exchange (ETDEWEB)

    Yang Yulan [Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment under Ministry of Education, Chongqing University, Chongqing (China); College of Civil Engineering and Architecture, Zhejiang University of Technology, Hangzhou (China); Li Baizhan [Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment under the Ministry of Education, Chongqing University, Chongqing (China); Yao Runming, E-mail: r.yao@reading.ac.u [School of Construction Management and Engineering, University of Reading, Reading (United Kingdom); Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment under Ministry of Education, Chongqing University, Chongqing (China)

    2010-12-15

    This paper describes a method of identifying and weighting indicators for assessing the energy efficiency of residential buildings in China. A list of indicators for energy efficiency assessment of residential buildings in the hot summer and cold winter zone of China is proposed, which provides an important reference for policy making on building energy efficiency assessment. The research method combines a wide-ranging literature review with a questionnaire survey of experts in the field. The group analytic hierarchy process (group AHP) has been used to weight the identified indicators. The survey sample is large enough to support the results, which have been validated by consistency estimation. The proposed method could also be extended to develop weighted indicators for other climate zones in China. - Research highlights: →Method of identifying indicators of building energy efficiency assessment. →The group AHP method for weighting indicators. →Method of solving multi-criteria decision-making problems of choice and prioritisation in policy making.

  3. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks.

    Science.gov (United States)

    Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-12-21

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained properties of the underwater environment, such as the restricted communication capacity of sensor nodes and sensing noise, make target localization a challenging problem. This paper relies on a fraction of the sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick a subset of short-distance sensor nodes to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, in which an iterative regression strategy is used to deal with the data distorted by sensing noise, thereby improving the observation accuracy. At the same time, we integrate the observation into the likelihood function, which effectively updates the particle weights. Thus, particle effectiveness is enhanced, the "particle degeneracy" problem is avoided, and localization accuracy is improved. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
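    A minimal sketch of the particle-filter core described above is given below, with the LSSVR-based observation model replaced by a plain Gaussian range likelihood and with multinomial resampling to counter particle degeneracy. Node positions, noise levels and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def likelihood(particles, nodes, ranges, sigma=1.0):
    # Gaussian range-measurement likelihood standing in for the LSSVR observation model.
    d = np.linalg.norm(particles[:, None, :] - nodes[None, :, :], axis=2)
    return np.exp(-0.5 * np.sum(((d - ranges) / sigma) ** 2, axis=1))

def pf_step(particles, weights, nodes, ranges, process_noise=0.5):
    particles = particles + rng.normal(0.0, process_noise, particles.shape)  # predict
    weights = weights * likelihood(particles, nodes, ranges)                 # update
    weights /= weights.sum()
    # Multinomial resampling when the effective sample size drops, to counter degeneracy.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # selected sensor nodes
target = np.array([4.0, 6.0])
ranges = np.linalg.norm(nodes - target, axis=1) + rng.normal(0.0, 0.5, 3)
particles = rng.uniform(0.0, 10.0, (500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights = pf_step(particles, weights, nodes, ranges)
print("position estimate:", weights @ particles)
```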

  4. Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics

    Science.gov (United States)

    Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.

    2017-11-01

    We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.

  5. Optimal Power Constrained Distributed Detection over a Noisy Multiaccess Channel

    Directory of Open Access Journals (Sweden)

    Zhiwen Hu

    2015-01-01

    Full Text Available The problem of optimal power constrained distributed detection over a noisy multiaccess channel (MAC) is addressed. Under local power constraints, we define the transformation function for each sensor to realize the mapping from the local decision to the transmitted waveform. Deflection coefficient maximization (DCM) is used to optimize the performance of the power-constrained fusion system. Using optimality conditions, we derive the closed-form solution to the considered problem. Monte Carlo simulations are carried out to evaluate the performance of the proposed new method. Simulation results show that the proposed method can significantly improve the detection performance of the fusion system at low signal-to-noise ratio (SNR). We also show that the proposed method has robust detection performance over a broad SNR region.
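    For reference, the deflection coefficient maximized above is commonly defined as DC = (E[T|H1] − E[T|H0])² / Var(T|H0) for a fusion statistic T. The sketch below evaluates it for a weighted sum of binary local decisions received over a noisy channel and performs a crude random search under a unit power budget; this stands in for the paper's closed-form optimization, and all numerical values are assumptions.

```python
import numpy as np

def deflection_coefficient(gains, p_d, p_fa, noise_var):
    # Fusion statistic over a MAC: y = sum_i g_i * u_i + n, with binary local
    # decisions u_i in {0, 1}; means and variance follow from Bernoulli statistics.
    mean_h1 = np.sum(gains * p_d)
    mean_h0 = np.sum(gains * p_fa)
    var_h0 = np.sum(gains ** 2 * p_fa * (1.0 - p_fa)) + noise_var
    return (mean_h1 - mean_h0) ** 2 / var_h0

rng = np.random.default_rng(0)
p_d, p_fa = np.full(5, 0.8), np.full(5, 0.1)          # local detection / false-alarm rates
candidates = rng.uniform(0.1, 1.0, (2000, 5))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)  # unit total-power budget
dcs = [deflection_coefficient(g, p_d, p_fa, 1.0) for g in candidates]
print("best deflection coefficient found:", max(dcs))
```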

  6. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    Science.gov (United States)

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates mean curvature and Gabor texture energy features to define a new composite weight function for computing the edge weights. Unlike deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and the local energy minima problem. The effectiveness of the proposed method is verified on DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show the robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.
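    The key ingredient is the composite edge weight combining intensity, mean curvature and Gabor texture energy. One plausible Gaussian-affinity form is sketched below; the exact functional form and parameter values used by the authors are not given here, so this is an assumption for illustration only.

```python
import numpy as np

def composite_edge_weight(intensity_i, intensity_j, curvature_i, curvature_j,
                          texture_i, texture_j, beta=10.0, a_curv=1.0, a_tex=1.0):
    # Hypothetical composite weight: a Gaussian affinity over differences in
    # intensity, mean curvature and Gabor texture energy between neighbouring pixels.
    d2 = ((intensity_i - intensity_j) ** 2
          + a_curv * (curvature_i - curvature_j) ** 2
          + a_tex * (texture_i - texture_j) ** 2)
    return np.exp(-beta * d2)

print(composite_edge_weight(0.62, 0.60, 0.10, 0.12, 0.30, 0.28))
```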

  7. The experience of weight management in normal weight adults.

    Science.gov (United States)

    Hernandez, Cheri Ann; Hernandez, David A; Wellington, Christine M; Kidd, Art

    2016-11-01

    No prior research has been done with normal weight persons specific to their experience of weight management. The purpose of this research was to discover the experience of weight management in normal weight individuals. Glaserian grounded theory was used. Qualitative data (focus group) and quantitative data (food diary, study questionnaire, and anthropometric measures) were collected. Weight management was an ongoing process of trying to focus on living (family, work, and social), while maintaining their normal weight targets through five consciously and unconsciously used strategies. Despite maintaining normal weights, the nutritional composition of foods eaten was grossly inadequate. These five strategies can be used to develop new weight management strategies that could be integrated into existing weight management programs, or could be developed into novel weight management interventions. Surprisingly, normal weight individuals require dietary assessment and nutrition education to prevent future negative health consequences. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Genome-wide conserved non-coding microsatellite (CNMS) marker-based integrative genetical genomics for quantitative dissection of seed weight in chickpea.

    Science.gov (United States)

    Bajaj, Deepak; Saxena, Maneesha S; Kujur, Alice; Das, Shouvik; Badoni, Saurabh; Tripathi, Shailesh; Upadhyaya, Hari D; Gowda, C L L; Sharma, Shivali; Singh, Sube; Tyagi, Akhilesh K; Parida, Swarup K

    2015-03-01

    Phylogenetic footprinting identified 666 genome-wide paralogous and orthologous CNMS (conserved non-coding microsatellite) markers from 5'-untranslated and regulatory regions (URRs) of 603 protein-coding chickpea genes. The (CT)n and (GA)n CNMS carrying CTRMCAMV35S and GAGA8BKN3 regulatory elements, respectively, are abundant in the chickpea genome. The mapped genic CNMS markers with robust amplification efficiencies (94.7%) detected higher intraspecific polymorphic potential (37.6%) among genotypes, implying their immense utility in chickpea breeding and genetic analyses. Seventeen differentially expressed CNMS marker-associated genes showing strong preferential and seed tissue/developmental stage-specific expression in contrasting genotypes were selected to narrow down the gene targets underlying seed weight quantitative trait loci (QTLs)/eQTLs (expression QTLs) through integrative genetical genomics. The integration of transcript profiling with seed weight QTL/eQTL mapping, molecular haplotyping, and association analyses identified potential molecular tags (GAGA8BKN3 and RAV1AAT regulatory elements and alleles/haplotypes) in the LOB-domain-containing protein- and KANADI protein-encoding transcription factor genes controlling the cis-regulated expression for seed weight in the chickpea. This emphasizes the potential of CNMS marker-based integrative genetical genomics for the quantitative genetic dissection of complex seed weight in chickpea. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  9. Time-constrained project scheduling with adjacent resources

    NARCIS (Netherlands)

    Hurink, Johann L.; Kok, A.L.; Paulus, J.J.; Schutten, Johannes M.J.

    We develop a decomposition method for the Time-Constrained Project Scheduling Problem (TCPSP) with adjacent resources. For adjacent resources the resource units are ordered and the units assigned to a job have to be adjacent. On top of that, adjacent resources are not required by single jobs, but by

  10. Time-constrained project scheduling with adjacent resources

    NARCIS (Netherlands)

    Hurink, Johann L.; Kok, A.L.; Paulus, J.J.; Schutten, Johannes M.J.

    2008-01-01

    We develop a decomposition method for the Time-Constrained Project Scheduling Problem (TCPSP) with Adjacent Resources. For adjacent resources the resource units are ordered and the units assigned to a job have to be adjacent. On top of that, adjacent resources are not required by single jobs, but by

  11. The bounds of feasible space on constrained nonconvex quadratic programming

    Science.gov (United States)

    Zhu, Jinghao

    2008-03-01

    This paper presents a method to estimate the bounds of the radius of the feasible space for a class of constrained nonconvex quadratic programs. Results show that one may compute a bound on the radius of the feasible space by a linear program, which is known to be a P-problem [N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4 (1984) 373-395]. It is proposed that this method be applied together with the canonical dual transformation [D.Y. Gao, Canonical duality theory and solutions to constrained nonconvex quadratic programming, J. Global Optimization 29 (2004) 377-399] for solving a standard quadratic programming problem.

  12. Numerov iteration method for second order integral-differential equation

    International Nuclear Information System (INIS)

    Zeng Fanan; Zhang Jiaju; Zhao Xuan

    1987-01-01

    In this paper, a Numerov iteration method for the second-order integral-differential equation and for systems of such equations is constructed. Numerical examples show that this method is better than the direct method (Gauss elimination) in both CPU time and memory requirements. Therefore, it is an efficient method for solving integral-differential equations in nuclear physics.
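    The record gives no implementation detail, but the differential part of a Numerov-type scheme is standard. The sketch below applies the classical Numerov recursion to y'' = f(x) y and checks it against sin(x); the integral term of an integro-differential equation would have to be added to the right-hand side and iterated, which is omitted here.

```python
import numpy as np

def numerov(f, y0, y1, x0, h, n):
    # Numerov scheme for y'' = f(x) * y, accurate to O(h^6) per step:
    # y_{i+1}(1 - c f_{i+1}) = 2 y_i (1 + 5 c f_i) - y_{i-1}(1 - c f_{i-1}), c = h^2/12.
    x = x0 + h * np.arange(n)
    y = np.empty(n)
    y[0], y[1] = y0, y1
    c = h * h / 12.0
    fx = f(x)
    for i in range(1, n - 1):
        y[i + 1] = (2.0 * y[i] * (1.0 + 5.0 * c * fx[i])
                    - y[i - 1] * (1.0 - c * fx[i - 1])) / (1.0 - c * fx[i + 1])
    return x, y

# Check on y'' = -y with y(0) = 0, y'(0) = 1, whose exact solution is sin(x).
h, n = 0.01, 1001
x, y = numerov(lambda x: -np.ones_like(x), 0.0, np.sin(h), 0.0, h, n)
print("max error vs sin(x):", np.max(np.abs(y - np.sin(x))))
```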

  13. An unusual mode of failure of a tripolar constrained acetabular liner: a case report.

    LENUS (Irish Health Repository)

    Banks, Louisa N

    2012-02-01

    Dislocation after primary total hip arthroplasty (THA) is the most commonly encountered complication and is unpleasant for both the patient and the surgeon. Constrained acetabular components can be used to treat or prevent instability after primary total hip arthroplasty. We present the case of a 42-year-old female with a BMI of 41. At 18 months post-primary THA the patient underwent further revision hip surgery after numerous (more than 20) dislocations. She had a tripolar Trident acetabular cup (Stryker-Howmedica-Osteonics, Rutherford, New Jersey) inserted. Shortly afterwards the unusual mode of failure of the constrained acetabular liner was noted from radiographs, in that the inner liner had dissociated from the outer. The reinforcing ring remained intact and in place. We believe that the patient's weight, combined with poor abductor musculature, caused excessive demand on the device, leading to failure at this interface when the patient flexed forward. Constrained acetabular components are useful implants to treat instability but have been shown to have up to 42% long-term failure rates, with problems such as dissociated inserts, dissociated constraining rings and dissociated femoral rings being cited. Sometimes they may be the only option left in difficult cases such as the one illustrated here, but they still unfortunately have the capacity to fail in unusual ways.

  14. An unusual mode of failure of a tripolar constrained acetabular liner: a case report.

    Science.gov (United States)

    Banks, Louisa N; McElwain, John P

    2010-04-01

    Dislocation after primary total hip arthroplasty (THA) is the most commonly encountered complication and is unpleasant for both the patient and the surgeon. Constrained acetabular components can be used to treat or prevent instability after primary total hip arthroplasty. We present the case of a 42-year-old female with a BMI of 41. At 18 months post-primary THA the patient underwent further revision hip surgery after numerous (more than 20) dislocations. She had a tripolar Trident acetabular cup (Stryker-Howmedica-Osteonics, Rutherford, New Jersey) inserted. Shortly afterwards the unusual mode of failure of the constrained acetabular liner was noted from radiographs, in that the inner liner had dissociated from the outer. The reinforcing ring remained intact and in place. We believe that the patient's weight, combined with poor abductor musculature, caused excessive demand on the device, leading to failure at this interface when the patient flexed forward. Constrained acetabular components are useful implants to treat instability but have been shown to have up to 42% long-term failure rates, with problems such as dissociated inserts, dissociated constraining rings and dissociated femoral rings being cited. Sometimes they may be the only option left in difficult cases such as the one illustrated here, but they still unfortunately have the capacity to fail in unusual ways.

  15. Technical Note: On methodologies for determining the size-normalised weight of planktic foraminifera

    Directory of Open Access Journals (Sweden)

    C. J. Beer

    2010-07-01

    Full Text Available The size-normalised weight (SNW) of planktic foraminifera, a measure of test wall thickness and density, is potentially a valuable palaeo-proxy for marine carbon chemistry. As increasing attention is given to developing this proxy, it is important that methods are comparable between studies. Here, we compare SNW data generated using two different methods to account for variability in test size, namely (i) the narrow (50 μm) range sieve fraction method and (ii) the individually measured test size method. Using specimens from the 200–250 μm sieve fraction range collected in multinet samples from the North Atlantic, we find that sieving does not constrain size sufficiently well to isolate changes in weight driven by variations in test wall thickness and density from those driven by size. We estimate that the SNW data produced as part of this study are associated with an uncertainty, or error bar, of about ±11%. Errors associated with the narrow sieve fraction method may be reduced by decreasing the size of the sieve window, by using larger tests and by increasing the number of tests employed. In situations where numerous large tests are unavailable, however, substantial errors associated with this sieve method remain unavoidable. In such circumstances the individually measured test size method provides a better means for estimating SNW because, as our results show, this method isolates changes in weight driven by variations in test wall thickness and density from those driven by size.

  16. Achieving integration in mixed methods designs-principles and practices.

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  17. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.

  18. Use of the dry-weight-rank method of botanical analysis in the ...

    African Journals Online (AJOL)

    The dry-weight-rank method of botanical analysis was tested in the highveld of the Eastern Transvaal and was found to be an efficient and accurate means of determining the botanical composition of veld herbage. Accuracy was increased by weighting ranks on the basis of quadrat yield, and by allocation of equal ranks to ...

  19. Comparison of Four Weighting Methods in Fuzzy-based Land Suitability to Predict Wheat Yield

    Directory of Open Access Journals (Sweden)

    Fatemeh Rahmati

    2017-06-01

    Full Text Available Introduction: Land suitability evaluation is a process to examine the degree of land fitness for a specific use and also makes it possible to estimate land productivity potential. In 1976, FAO provided a general framework for land suitability classification. No specific method for performing this classification was proposed in the framework; in later years, a collection of methods was presented based on it. In the parametric method, different land suitability classes are defined as completely discrete groups separated from each other by distinct, fixed ranges. Therefore, land units that have moderate suitability can only take on the characteristics of one of the predefined land suitability classes. Fuzzy logic is an extension of Boolean logic introduced by Lotfi Zadeh in 1965, based on the mathematical theory of fuzzy sets, which is a generalization of classical set theory. By introducing the notion of degree in the verification of a condition, the fuzzy method enables a condition to be in a state other than true or false, and provides very valuable flexibility for reasoning, which makes it possible to take inaccuracies and uncertainties into account. One advantage of fuzzy logic in formalizing human reasoning is that the rules are set in natural language. In the evaluation method based on fuzzy logic, weights are assigned to the land characteristics. The objective of this study was to compare four methods of weight calculation in fuzzy logic to predict wheat yield in the study area, covering 1500 ha in Kian town in Shahrekord (Chaharmahal and Bakhtiari province, Iran). Materials and Methods: In such investigations, climatic factors and soil physical and chemical characteristics are studied. This investigation involves several studies, including a laboratory study and qualitative and quantitative land suitability evaluation with fuzzy logic for wheat. Factors affecting the wheat production consist of

  20. Integral Methods in Science and Engineering

    CERN Document Server

    Constanda, Christian

    2011-01-01

    An enormous array of problems encountered by scientists and engineers are based on the design of mathematical models using many different types of ordinary differential, partial differential, integral, and integro-differential equations. Accordingly, the solutions of these equations are of great interest to practitioners and to science in general. Presenting a wealth of cutting-edge research by a diverse group of experts in the field, Integral Methods in Science and Engineering: Computational and Analytic Aspects gives a vivid picture of both the development of theoretical integral techniques

  1. Quasicanonical structure of optimal control in constrained discrete systems

    Science.gov (United States)

    Sieniutycz, S.

    2003-06-01

    This paper considers discrete processes governed by difference rather than differential equations for the state transformation. The basic question asked is if and when Hamiltonian canonical structures are possible in optimal discrete systems. Considering constrained discrete control, general optimization algorithms are derived that constitute suitable theoretical and computational tools when evaluating extremum properties of constrained physical models. The mathematical basis of the general theory is the Bellman method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage criterion which allows a variation of the terminal state that is otherwise fixed in the Bellman's method. Two relatively unknown, powerful optimization algorithms are obtained: an unconventional discrete formalism of optimization based on a Hamiltonian for multistage systems with unconstrained intervals of holdup time, and the time interval constrained extension of the formalism. These results are general; namely, one arrives at: the discrete canonical Hamilton equations, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory along with all basic results of variational calculus. Vast spectrum of applications of the theory is briefly discussed.

  2. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B

    2012-01-01

    This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel–Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given. (paper)

  3. Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis

    Science.gov (United States)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    1998-01-01

    Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.

  4. Research Notes Use of the dry-weight-rank method of botanical ...

    African Journals Online (AJOL)

    When used in combination with the double sampling (or comparative yield) method of yield estimation, the dry-weight-rank method of botanical analysis provides a rapid non-destructive means of estimating botanical composition. The composition is expressed in terms of the contribution of individual species to total herbage ...

  5. ASPECTS OF INTEGRATION MANAGEMENT METHODS

    Directory of Open Access Journals (Sweden)

    Artemy Varshapetian

    2015-10-01

    Full Text Available For manufacturing companies to succeed in today's unstable economic environment, it is necessary to restructure the main components of their activities: designing innovative products, production using modern reconfigurable manufacturing systems, a business model that takes the global strategy into account, and management methods using modern management models and tools. The first three components are discussed in numerous publications, for example (Koren, 2010), and are therefore not considered in this article. A large number of publications are devoted to the methods and tools of production management, for example (Halevi, 2007). On this basis, the article discusses the possibility of integrating only the three methods that have received the widest use in recent years, namely: the Six Sigma method - SS (George et al., 2005) and its supplement Design for Six Sigma - DFSS (Taguchi, 2003); Lean production, transformed in the course of its development into "Lean management" and further into "Lean thinking" - Lean (Hirano et al., 2006); and the Theory of Constraints developed by E. Goldratt - TOC (Dettmer, 2001). The article investigates some aspects of this integration: applications in diverse fields, positive features, changes in management structure, etc.

  6. Low-lying excited states by constrained DFT

    Science.gov (United States)

    Ramos, Pablo; Pavanello, Michele

    2018-04-01

    Exploiting the machinery of Constrained Density Functional Theory (CDFT), we propose a variational method for calculating low-lying excited states of molecular systems. We dub this method eXcited CDFT (XCDFT). Excited states are obtained by self-consistently constraining a user-defined population of electrons, Nc, in the virtual space of a reference set of occupied orbitals. By imposing this population to be Nc = 1.0, we computed the first excited state of 15 molecules from a test set. Our results show that XCDFT achieves an accuracy in the predicted excitation energy only slightly worse than linear-response time-dependent DFT (TDDFT), but without incurring the variational collapse problems typical of the more commonly adopted ΔSCF method. In addition, we selected a few challenging processes to test the limits of applicability of XCDFT. We find that, in contrast to TDDFT, XCDFT is capable of reproducing energy surfaces featuring conical intersections (azobenzene and H3) with correct topology and correct overall energetics also away from the intersection. Venturing into condensed-phase systems, XCDFT reproduces the TDDFT solvatochromic shift of benzaldehyde when it is embedded in a cluster of water molecules. Thus, we find XCDFT to be a competitive method among single-reference methods for computations of excited states in terms of time to solution, rate of convergence, and accuracy of the result.

  7. Accelerated weight histogram method for exploring free energy landscapes

    Energy Technology Data Exchange (ETDEWEB)

    Lindahl, V.; Lidmar, J.; Hess, B. [Department of Theoretical Physics and Swedish e-Science Research Center, KTH Royal Institute of Technology, 10691 Stockholm (Sweden)

    2014-07-28

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.

  8. Design and Optimization of Composite Automotive Hatchback Using Integrated Material-Structure-Process-Performance Method

    Science.gov (United States)

    Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai

    2018-03-01

    The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material type, structural design and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. The material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical field simulation. The stiffness and modal constraint conditions were obtained from numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multidisciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite part is reduced by 38.8%; at the same time, its torsion and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency also increases by 44.78%.

  9. Quantization and training of object detection networks with low-precision weights and activations

    Science.gov (United States)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed point operations, the proposed method can drastically reduce computation complexity and memory footprint. Performing on the tiny you only look once (YOLO) and YOLO architectures, the proposed method achieves comparable accuracy to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
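    A minimal sketch of distribution-aware uniform weight quantization in the spirit described above is given below, with the paper's piecewise Gaussian models simplified to a single Gaussian fit per layer; the bit width, coverage fraction and all names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def quantize_weights(w, bits=4, coverage=0.999):
    # Fit a Gaussian to the empirical weight distribution, choose a clipping range
    # that covers most of its mass, and quantize uniformly to 2^bits - 1 levels.
    mu, sigma = w.mean(), w.std()
    clip = sigma * norm.ppf(0.5 + coverage / 2.0)
    levels = 2 ** bits - 1
    step = 2.0 * clip / levels
    q = np.clip(np.round((w - mu) / step), -(levels // 2), levels // 2)
    return mu + q * step, step

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, 10_000)            # stand-in for one convolution layer
wq, step = quantize_weights(w, bits=4)
print("step size:", step, "quantization MSE:", np.mean((w - wq) ** 2))
```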

  10. Evaluation of the depth-integration method of measuring water discharge in large rivers

    Science.gov (United States)

    Moody, J.A.; Troutman, B.M.

    1992-01-01

    The depth-integration method for measuring water discharge makes a continuous measurement of the water velocity from the water surface to the bottom at 20 to 40 locations, or verticals, across a river. It is especially practical for large rivers where river traffic makes it impractical to use boats attached to taglines strung across the river or to use current meters suspended from bridges. This method has the additional advantage over the standard two- and eight-tenths method in that a discharge-weighted suspended-sediment sample can be collected at the same time. When this method is used in large rivers such as the Missouri, Mississippi and Ohio, a microwave navigation system is used to determine the ship's position at each vertical sampling location across the river, and to make accurate velocity corrections to compensate for ship drift. An essential feature is a hydraulic winch that can lower and raise the current meter at a constant transit velocity so that the velocities at all depths are measured for equal lengths of time. Field calibration measurements show that: (1) the mean velocity measured on the upcast (bottom to surface) is within 1% of the standard mean velocity determined by 9-11 point measurements; (2) if the transit velocity is less than 25% of the mean velocity, then the average error in the mean velocity is 4% or less. The major source of bias error is a result of mounting the current meter above a sounding weight, and sometimes above a suspended-sediment sampling bottle, which prevents measurement of the velocity all the way to the bottom. The measured mean velocity is therefore slightly larger than the true mean velocity. This bias error in the discharge is largest in shallow water (approximately 8% for the Missouri River at Hermann, MO, where the mean depth was 4.3 m) and smallest in deeper water (approximately 3% for the Mississippi River at Vicksburg, MS, where the mean depth was 14.5 m). The major source of random error in the discharge is the natural
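    Once the depth-averaged velocity at each vertical is known, the discharge follows from a mid-section summation over the verticals, Q = Σ v̄_i d_i w_i. The sketch below shows this bookkeeping with hypothetical cross-section numbers; it is an illustration of the summation, not the agency's processing code.

```python
import numpy as np

def discharge_midsection(stations, depths, mean_velocities):
    # Mid-section summation: each vertical represents a panel whose width is half
    # the distance to each neighbouring vertical; Q = sum(v_i * d_i * w_i).
    x = np.asarray(stations, float)
    widths = np.empty_like(x)
    widths[1:-1] = (x[2:] - x[:-2]) / 2.0
    widths[0] = (x[1] - x[0]) / 2.0
    widths[-1] = (x[-1] - x[-2]) / 2.0
    return np.sum(np.asarray(mean_velocities) * np.asarray(depths) * widths)

# Hypothetical cross-section: distances from the bank (m), depths (m) and
# depth-averaged velocities (m/s) obtained from depth-integrated casts.
stations = [0, 50, 120, 200, 280, 350, 400]
depths = [0.5, 3.2, 6.8, 9.5, 7.1, 4.0, 0.8]
velocities = [0.2, 0.9, 1.4, 1.7, 1.3, 0.8, 0.3]
print("discharge (m^3/s):", discharge_midsection(stations, depths, velocities))
```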

  11. Splines and polynomial tools for flatness-based constrained motion planning

    Science.gov (United States)

    Suryawan, Fajar; De Doná, José; Seron, María

    2012-08-01

    This article addresses the problem of trajectory planning for flat systems with constraints. Flat systems have the useful property that the input and the state can be completely characterised by the so-called flat output. We propose a spline parametrisation for the flat output, the performance output, the states and the inputs. Using this parametrisation the problem of constrained trajectory planning can be cast into a simple quadratic programming problem. An important result is that the B-spline parametrisation used gives exact results for constrained linear continuous-time system. The result is exact in the sense that the constrained signal can be made arbitrarily close to the boundary without having intersampling issues (as one would have in sampled-data systems). Simulation examples are presented, involving the generation of rest-to-rest trajectories. In addition, an experimental result of the method is also presented, where two methods to generate trajectories for a magnetic-levitation (maglev) system in the presence of constraints are compared and each method's performance is discussed. The first method uses the nonlinear model of the plant, which turns out to belong to the class of flat systems. The second method uses a linearised version of the plant model around an operating point. In every case, a continuous-time description is used. The experimental results on a real maglev system reported here show that, in most scenarios, the nonlinear and linearised models produce almost similar, indistinguishable trajectories.

  12. Geometrically constrained kinematic global navigation satellite systems positioning: Implementation and performance

    Science.gov (United States)

    Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza

    2015-09-01

    GNSS kinematic techniques are capable of providing precise coordinates in an extremely short observation time-span. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, the GNSS kinematic equations are to be augmented by possible constraints. Such constraints can be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite systems positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was made and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is paid to how a constraint is applied in kinematic positioning. Implementing the constraint in the positioning process provides much more precise results than the unconstrained case. This has been verified by the covariance matrix of the estimated parameters as well as by empirical results from kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.

  13. THE DUBINS TRAVELING SALESMAN PROBLEM WITH CONSTRAINED COLLECTING MANEUVERS

    Directory of Open Access Journals (Sweden)

    Petr Váňa

    2016-11-01

    Full Text Available In this paper, we introduce a variant of the Dubins traveling salesman problem (DTSP) that is called the Dubins traveling salesman problem with constrained collecting maneuvers (DTSP-CM). In contrast to the ordinary formulation of the DTSP, in the proposed DTSP-CM the vehicle is requested to visit each target by a specified collecting maneuver to accomplish the mission. The proposed problem formulation is motivated by scenarios with unmanned aerial vehicles where particular maneuvers are necessary for accomplishing the mission, such as object dropping or data collection with a sensor sensitive to changes in vehicle heading. We consider existing methods for the DTSP and propose modifications that allow these methods to address a variant of the introduced DTSP-CM in which the collecting maneuvers are constrained to straight line segments.

  14. Extending the Matrix Element Method beyond the Born approximation: calculating event weights at next-to-leading order accuracy

    International Nuclear Information System (INIS)

    Martini, Till; Uwer, Peter

    2015-01-01

    In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method at NLO. We modify the recombination procedure used in jet algorithms to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator, the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As a further application and proof of concept, we apply the Matrix Element Method at NLO accuracy to the mass determination of top quarks produced in e⁺e⁻ annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used at leading or at next-to-leading order.

  15. Volume-constrained optimization of magnetorheological and electrorheological valves and dampers

    Science.gov (United States)

    Rosenfeld, Nicholas C.; Wereley, Norman M.

    2004-12-01

    This paper presents a case study of magnetorheological (MR) and electrorheological (ER) valve design within a constrained cylindrical volume. The primary purpose of this study is to establish general design guidelines for volume-constrained MR valves. Additionally, this study compares the performance of volume-constrained MR valves against similarly constrained ER valves. Starting from basic design guidelines for an MR valve, a method for constructing candidate volume-constrained valve geometries is presented. A magnetic FEM program is then used to evaluate the magnetic properties of the candidate valves. An optimized MR valve is chosen by evaluating non-dimensional parameters describing the candidate valves' damping performance. A derivation of the non-dimensional damping coefficient for valves with both active and passive volumes is presented to allow comparison of valves with differing proportions of active and passive volume. The performance of the optimized MR valve is then compared to that of a geometrically similar ER valve using both analytical and numerical techniques. An analytical equation relating the damping performance of geometrically similar MR and ER valves as a function of fluid yield stresses and relative active fluid volume is presented, and numerical calculations are provided to evaluate each valve's damping performance and to validate the analytical results.

  16. Network Constrained Transactive Control for Electric Vehicles Integration

    DEFF Research Database (Denmark)

    Hu, Junjie; Yang, Guangya; Bindner, Henrik W.

    2015-01-01

    This paper applies the transactive control concept to integrate electric vehicles into the power distribution system with the purpose of minimizing the charging cost of electric vehicles as well as preventing grid congestions and voltage violations. A hierarchical EV management system is proposed where three

  17. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
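    A naive sketch of recursive numerical integration is given below: a one-dimensional Gauss-Legendre rule is applied to one coordinate at a time and the remaining dimensions are integrated recursively. The example integrand and the 16-point rule are assumptions; practical RNI applications exploit the structure of the Boltzmann weight so that the cost does not grow as m^d.

```python
import math
import numpy as np
from numpy.polynomial.legendre import leggauss

def recursive_integrate(f, dim, m=16, point=()):
    # Apply an m-point Gauss-Legendre rule on [0, 1] to one coordinate and recurse
    # over the remaining dimensions (naive RNI; cost grows as m^dim here).
    nodes, weights = leggauss(m)
    nodes = 0.5 * (nodes + 1.0)      # map from [-1, 1] to [0, 1]
    weights = 0.5 * weights
    if dim == 1:
        return sum(w * f(point + (x,)) for x, w in zip(nodes, weights))
    return sum(w * recursive_integrate(f, dim - 1, m, point + (x,))
               for x, w in zip(nodes, weights))

# Example: integral of exp(-|x|^2) over the unit cube in 3 dimensions.
val = recursive_integrate(lambda x: np.exp(-sum(t * t for t in x)), dim=3)
exact = (math.sqrt(math.pi) / 2.0 * math.erf(1.0)) ** 3
print("RNI:", val, "exact:", exact)
```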

  18. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  19. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual load-bearing situation. The direct integration method, which starts from the definition in reliability theory, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks: network A is used to learn the integrand function, and network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of the multiple integrations and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate approach for structural reliability problems.

  20. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B -L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  1. Flux weighted method for solution of stiff neutron dynamic equations and its application

    International Nuclear Information System (INIS)

    Li Huiyun; Jiao Huixian

    1987-12-01

    To analyze reactivity events in nuclear power plants, it is necessary to solve the neutron dynamic equations, a typical group of stiff ordinary differential equations. Only very small time steps can be adopted when the group of equations is solved by common methods, whereas a large time step may be selected if the Flux Weighted Method introduced in this paper is used. Generally, the weighting factor θ i1 is set as a constant; this treatment improves the stability of the solution at the cost of accuracy. Here an accurate theoretical formula for the 4 x 4 matrix of θ i1 is rigorously derived, so that the accuracy of the calculation is ensured while the stability of the solution is increased. The method has advantages over the classical Runge-Kutta method and other methods: the time step can be increased by 1 to 3 orders of magnitude, saving a large amount of computing time. The program for solving the neutron dynamic equations prepared with the Flux Weighted Method can be used for real-time simulation in training simulators, as well as for the analysis and computation of reactivity events (including rod ejection events).
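
    The record describes a weighted implicit scheme with a matrix of weighting factors; that derivation is not reproduced here. As a minimal, hedged sketch of the underlying idea, the classical scalar θ-weighted step for a linear stiff system dy/dt = Ay is shown below on a hypothetical two-mode test matrix (not a reactor kinetics model).

    ```python
    import numpy as np

    def theta_step(a_matrix, y, dt, theta=0.5):
        """One step of the theta-weighted scheme for dy/dt = A y:
           y_{n+1} = y_n + dt*[(1-theta)*A y_n + theta*A y_{n+1}],
        solved as a linear system.  theta=0 is explicit Euler, theta=1 fully implicit."""
        n = len(y)
        lhs = np.eye(n) - dt * theta * a_matrix
        rhs = y + dt * (1.0 - theta) * (a_matrix @ y)
        return np.linalg.solve(lhs, rhs)

    # Hypothetical stiff 2x2 system (one fast, one slow mode), not the reactor model itself.
    a = np.array([[-1000.0, 999.0],
                  [    0.0,  -1.0]])
    y = np.array([1.0, 1.0])
    for _ in range(100):                 # dt far larger than the fast time scale 1/1000
        y = theta_step(a, y, dt=0.1, theta=1.0)
    print(y)                             # stays bounded, unlike explicit Euler at this dt
    ```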

  2. Brain network analysis: separating cost from topology using cost-integration.

    Directory of Open Access Journals (Sweden)

    Cedric E Ginestet

    Full Text Available A statistically principled way of conducting brain network analysis is still lacking. Comparison of different populations of brain networks is hard because topology is inherently dependent on wiring cost, where cost is defined as the number of edges in an unweighted graph. In this paper, we evaluate the benefits and limitations associated with using cost-integrated topological metrics. Our focus is on comparing populations of weighted undirected graphs that differ in mean association weight, using global efficiency. Our key result shows that integrating over cost is equivalent to controlling for any monotonic transformation of the weight set of a weighted graph. That is, when integrating over cost, we eliminate the differences in topology that may be due to a monotonic transformation of the weight set. Our result holds for any unweighted topological measure, and for any choice of distribution over cost levels. Cost-integration is therefore helpful in disentangling differences in cost from differences in topology. By contrast, we show that the use of the weighted version of a topological metric is generally not a valid approach to this problem. Indeed, we prove that, under weak conditions, the use of the weighted version of global efficiency is equivalent to simply comparing weighted costs. Thus, we recommend the reporting of (i) differences in weighted costs and (ii) differences in cost-integrated topological measures with respect to different distributions over the cost domain. We demonstrate the application of these techniques in a re-analysis of an fMRI working memory task. We also provide a Monte Carlo method for approximating cost-integrated topological measures. Finally, we discuss the limitations of integrating topology over cost, which may pose problems when some weights are zero, when multiplicities exist in the ranks of the weights, and when one expects subtle cost-dependent topological differences, which could be masked by cost-integration.

  3. Brain Network Analysis: Separating Cost from Topology Using Cost-Integration

    Science.gov (United States)

    Ginestet, Cedric E.; Nichols, Thomas E.; Bullmore, Ed T.; Simmons, Andrew

    2011-01-01

    A statistically principled way of conducting brain network analysis is still lacking. Comparison of different populations of brain networks is hard because topology is inherently dependent on wiring cost, where cost is defined as the number of edges in an unweighted graph. In this paper, we evaluate the benefits and limitations associated with using cost-integrated topological metrics. Our focus is on comparing populations of weighted undirected graphs that differ in mean association weight, using global efficiency. Our key result shows that integrating over cost is equivalent to controlling for any monotonic transformation of the weight set of a weighted graph. That is, when integrating over cost, we eliminate the differences in topology that may be due to a monotonic transformation of the weight set. Our result holds for any unweighted topological measure, and for any choice of distribution over cost levels. Cost-integration is therefore helpful in disentangling differences in cost from differences in topology. By contrast, we show that the use of the weighted version of a topological metric is generally not a valid approach to this problem. Indeed, we prove that, under weak conditions, the use of the weighted version of global efficiency is equivalent to simply comparing weighted costs. Thus, we recommend the reporting of (i) differences in weighted costs and (ii) differences in cost-integrated topological measures with respect to different distributions over the cost domain. We demonstrate the application of these techniques in a re-analysis of an fMRI working memory task. We also provide a Monte Carlo method for approximating cost-integrated topological measures. Finally, we discuss the limitations of integrating topology over cost, which may pose problems when some weights are zero, when multiplicities exist in the ranks of the weights, and when one expects subtle cost-dependent topological differences, which could be masked by cost-integration. PMID:21829437
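
    A minimal sketch of the cost-integration idea described in the two records above, assuming a toy symmetric association matrix in place of real fMRI data: the weighted matrix is thresholded at a range of cost (edge-density) levels, an unweighted metric (global efficiency, via networkx) is computed at each level, and the values are averaged over a uniform distribution of costs. The matrix size, cost grid, and uniform cost distribution are illustrative choices, not the paper's.

    ```python
    import numpy as np
    import networkx as nx

    def threshold_at_cost(weights, cost):
        """Keep the strongest edges so that the unweighted graph has the given cost
        (fraction of possible edges); 'weights' is a symmetric association matrix."""
        n = weights.shape[0]
        iu = np.triu_indices(n, k=1)
        vals = weights[iu]
        k = max(1, int(round(cost * vals.size)))          # number of edges to keep
        cutoff = np.sort(vals)[-k]
        adj = np.zeros_like(weights, dtype=bool)
        adj[iu] = vals >= cutoff
        return nx.from_numpy_array((adj | adj.T).astype(int))

    def cost_integrated_efficiency(weights, costs):
        """Average global efficiency over a (here: uniform) distribution of cost levels."""
        return np.mean([nx.global_efficiency(threshold_at_cost(weights, c)) for c in costs])

    rng = np.random.default_rng(0)
    w = rng.random((30, 30)); w = (w + w.T) / 2; np.fill_diagonal(w, 0)   # toy association matrix
    print(cost_integrated_efficiency(w, costs=np.linspace(0.05, 0.5, 10)))
    ```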

  4. Evaluation of Behavioral Theory and Integrated Internet/telephone Technologies to Support Military Obesity and Weight Management Programs

    Science.gov (United States)

    2006-01-01

    Obesity - Cushing’s Syndrome (97%) - Hypothyroidism - Polycystic Ovary Syndrome (10-80%) - Growth Hormone Deficiency - Drug-Induced Weight Gain...interventions and two methods of follow up counseling on weight loss in overweight active duty military service members after 3 months. Participants...different weight control behaviors (dietary fat, fruits and vegetables, portion control, beverage choices, exercise) and weight loss after 3 months

  5. Constraining dark energy with Hubble parameter measurements: an analysis including future redshift-drift observations

    International Nuclear Information System (INIS)

    Guo, Rui-Yun; Zhang, Xin

    2016-01-01

    The nature of dark energy affects the Hubble expansion rate (namely, the expansion history) H(z) by an integral over w(z). However, the usual observables are the luminosity distances or the angular diameter distances, which measure the distance-redshift relation. Actually, the property of dark energy affects the distances (and the growth factor) by a further integration over functions of H(z). Thus, the direct measurements of the Hubble parameter H(z) at different redshifts are of great importance for constraining the properties of dark energy. In this paper, we show how the typical dark energy models, for example, the ΛCDM, wCDM, CPL, and holographic dark energy models, can be constrained by the current direct measurements of H(z) (31 data used in total in this paper, covering the redshift range of z ∈ [0.07, 2.34]). In fact, the future redshift-drift observations (also referred to as the Sandage-Loeb test) can also directly measure H(z) at higher redshifts, covering the range of z ∈ [2, 5]. We thus discuss what role the redshift-drift observations can play in constraining dark energy with the Hubble parameter measurements. We show that the constraints on dark energy can be improved greatly with the H(z) data from only a 10-year observation of redshift drift. (orig.)
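
    To make the idea of constraining dark energy with direct H(z) measurements concrete, the sketch below fits a flat ΛCDM expansion history H(z) = H0·sqrt(Ωm(1+z)³ + 1 − Ωm) to a handful of mock data points by weighted least squares with scipy.optimize.curve_fit. The data values and errors are placeholders, not the 31-point compilation used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hubble_lcdm(z, h0, omega_m):
        """Flat LambdaCDM expansion rate H(z) in km/s/Mpc."""
        return h0 * np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

    # Mock H(z) data (illustrative values only, not the compilation used in the paper).
    z_obs = np.array([0.1, 0.4, 0.8, 1.3, 2.3])
    h_obs = np.array([72.0, 83.0, 102.0, 128.0, 224.0])
    h_err = np.array([5.0, 6.0, 8.0, 12.0, 8.0])

    popt, pcov = curve_fit(hubble_lcdm, z_obs, h_obs, sigma=h_err,
                           absolute_sigma=True, p0=[70.0, 0.3])
    perr = np.sqrt(np.diag(pcov))
    print(f"H0 = {popt[0]:.1f} +/- {perr[0]:.1f},  Omega_m = {popt[1]:.3f} +/- {perr[1]:.3f}")
    ```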

  6. Binary classification posed as a quadratically constrained quadratic ...

    Indian Academy of Sciences (India)

    Binary classification is posed as a quadratically constrained quadratic problem and solved using the proposed method. Each class in the binary classification problem is modeled as a multidimensional ellipsoid to form a quadratic constraint in the problem. Particle swarms help in determining the optimal hyperplane or ...

  7. Integrating weight bias awareness and mental health promotion into obesity prevention delivery: a public health pilot study.

    Science.gov (United States)

    McVey, Gail L; Walker, Kathryn S; Beyers, Joanne; Harrison, Heather L; Simkins, Sari W; Russell-Mayhew, Shelly

    2013-04-04

    Promoting healthy weight is a top priority in Canada. Recent federal guidelines call for sustained, multisectoral partnerships that address childhood obesity on multiple levels. Current healthy weight messaging does not fully acknowledge the influence of social determinants of health on weight. An interactive workshop was developed and implemented by a team of academic researchers and health promoters from the psychology and public health disciplines to raise awareness about 1) weight bias and its negative effect on health, 2) ways to balance healthy weight messaging to prevent the triggering of weight and shape preoccupation, and 3) the incorporation of mental health promotion into healthy weight messaging. We conducted a full-day workshop with 342 Ontario public health promoters and administered a survey at preintervention, postintervention, and follow-up. Participation in the full-day workshop led to significant decreases in antifat attitudes and the internalization of media stereotypes and to significant increases in self-efficacy to address weight bias. Participants reported that the training heightened their awareness of their own personal weight biases and the need to broaden their scope of healthy weight promotion to include mental health promotion. There was consensus that additional sessions are warranted to help translate knowledge into action. Buy-in and resource support at the organizational level was also seen as pivotal. Professional development training in the area of weight bias awareness is associated with decreases in antifat attitudes and the internalization of media stereotypes around thinness. Health promoters' healthy weight messaging was improved by learning to avoid messages that trigger weight and shape preoccupation or unhealthful eating practices among children and youth. Participants also learned ways to integrate mental health promotion and resiliency-building into daily practice.

  8. Variational integrators for electric circuits

    International Nuclear Information System (INIS)

    Ober-Blöbaum, Sina; Tao, Molei; Cheng, Mulin; Owhadi, Houman; Marsden, Jerrold E.

    2013-01-01

    In this contribution, we develop a variational integrator for the simulation of (stochastic and multiscale) electric circuits. When considering the dynamics of an electric circuit, one is faced with three special situations: 1. The system involves external (control) forcing through external (controlled) voltage sources and resistors. 2. The system is constrained via the Kirchhoff current (KCL) and voltage laws (KVL). 3. The Lagrangian is degenerate. Based on a geometric setting, an appropriate variational formulation is presented to model the circuit from which the equations of motion are derived. A time-discrete variational formulation provides an iteration scheme for the simulation of the electric circuit. Dependent on the discretization, the intrinsic degeneracy of the system can be canceled for the discrete variational scheme. In this way, a variational integrator is constructed that gains several advantages compared to standard integration tools for circuits; in particular, a comparison to BDF methods (which are usually the method of choice for the simulation of electric circuits) shows that even for simple LCR circuits, a better energy behavior and frequency spectrum preservation can be observed using the developed variational integrator

  9. A decomposition method for network-constrained unit commitment with AC power flow constraints

    International Nuclear Information System (INIS)

    Bai, Yang; Zhong, Haiwang; Xia, Qing; Kang, Chongqing; Xie, Le

    2015-01-01

    To meet the increasingly high requirement of smart grid operations, considering AC power flow constraints in the NCUC (network-constrained unit commitment) is of great significance in terms of both security and economy. This paper proposes a decomposition method to solve NCUC with AC power flow constraints. With conic approximations of the AC power flow equations, the master problem is formulated as a MISOCP (mixed integer second-order cone programming) model. The key advantage of this model is that the active power and reactive power are co-optimised, and the transmission losses are considered. With the AC optimal power flow model, the AC feasibility of the UC result of the master problem is checked in subproblems. If infeasibility is detected, feedback constraints are generated based on the sensitivity of bus voltages to a change in the unit reactive power generation. They are then introduced into the master problem in the next iteration until all AC violations are eliminated. A 6-bus system, a modified IEEE 30-bus system and the IEEE 118-bus system are used to validate the performance of the proposed method, which provides a satisfactory solution with approximately 44-fold greater computational efficiency. - Highlights: • A decomposition method is proposed to solve the NCUC with AC power flow constraints • The master problem considers active power, reactive power and transmission losses. • OPF-based subproblems check the AC feasibility using parallel computing techniques. • An effective feedback constraint interacts between the master problem and subproblem. • Computational efficiency is significantly improved with satisfactory accuracy

  10. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in means of communication, and rectified in terms of possibilities offered in the interface. How do...... these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface- and technology problem, we examine via a design game the creative communication on an open-ended task in a highly constrained setting, a design game. Via an experiment the relation...... between communicative constraints and participants’ perception of dialogue and creativity is examined. Four batches of students preparing for forming semester project groups were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except...

  11. Complexity Quantification for Overhead Transmission Line Emergency Repair Scheme via a Graph Entropy Method Improved with Petri Net and AHP Weighting Method

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2014-01-01

    Full Text Available According to the characteristics of emergency repair in overhead transmission line accidents, a complexity quantification method for emergency repair scheme is proposed based on the entropy method in software engineering, which is improved by using group AHP (analytical hierarchical process method and Petri net. Firstly, information structure chart model and process control flowchart model could be built by Petri net. Then impact factors on complexity of emergency repair scheme could be quantified into corresponding entropy values, respectively. Finally, by using group AHP method, weight coefficient of each entropy value would be given before calculating the overall entropy value for the whole emergency repair scheme. By comparing group AHP weighting method with average weighting method, experiment results for the former showed a stronger correlation between quantified entropy values of complexity and the actual consumed time in repair, which indicates that this new method is more valid.
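
    The record relies on group AHP weighting of the entropy values; as a hedged, single-judge illustration of how AHP weights are obtained, the sketch below computes priority weights as the normalised principal eigenvector of a reciprocal pairwise-comparison matrix and reports Saaty's consistency ratio. The comparison matrix is hypothetical.

    ```python
    import numpy as np

    def ahp_weights(comparison):
        """Priority weights from a reciprocal pairwise-comparison matrix:
        the normalised principal eigenvector, plus Saaty's consistency ratio."""
        comparison = np.asarray(comparison, dtype=float)
        eigvals, eigvecs = np.linalg.eig(comparison)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        n = comparison.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)              # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)      # Saaty's random index (subset of the table)
        return w, ci / ri

    # Hypothetical 3x3 comparison of complexity factors (values for illustration only).
    a = [[1.0,   3.0, 5.0],
         [1/3.0, 1.0, 2.0],
         [1/5.0, 0.5, 1.0]]
    weights, cr = ahp_weights(a)
    print(weights, "consistency ratio:", round(cr, 3))    # CR < 0.1 is conventionally acceptable
    ```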

  12. Integrated fuzzy analytic hierarchy process and VIKOR method in the prioritization of pavement maintenance activities

    Directory of Open Access Journals (Sweden)

    Peyman Babashamsi

    2016-03-01

    Full Text Available Maintenance activities and pavement rehabilitation require the allocation of massive finances. Yet due to budget shortfalls, stakeholders and decision-makers must prioritize projects in maintenance and rehabilitation. This article addresses the prioritization of pavement maintenance alternatives by integrating the fuzzy analytic hierarchy process (AHP with the VIKOR method (which stands for ‘VlseKriterijumska Optimizacija I Kompromisno Resenje,’ meaning multi-criteria optimization and compromise solution for the process of multi-criteria decision analysis (MCDA by considering various pavement network indices. The indices selected include the pavement condition index (PCI, traffic congestion, pavement width, improvement and maintenance costs, and the time required to operate. In order to determine the weights of the indices, the fuzzy AHP is used. Subsequently, the alternatives’ priorities are ranked according to the indices weighted with the VIKOR model. The choice of these two independent methods was motivated by the fact that integrating fuzzy AHP with the VIKOR model can assist decision makers with solving MCDA problems. The case study was conducted on a pavement network within the same particular region in Tehran; three main streets were chosen that have an empirically higher maintenance demand. The most significant factors were evaluated and the project with the highest priority was selected for urgent maintenance. By comparing the index values of the alternative priorities, Delavaran Blvd. was revealed to have higher priority over the other streets in terms of maintenance and rehabilitation activities. Keywords: Maintenance and rehabilitation prioritization, Fuzzy analysis hierarchy process, VIKOR model, Pavement condition index, Multi-criteria decision analysis

  13. Method of preparing light-weight plugging mud

    Energy Technology Data Exchange (ETDEWEB)

    Gorskiy, V F; Melnichuk, A N; Vernikovskiy, A N

    1982-01-01

    A method is proposed for preparing a light-weight plugging mud which includes mixing Portland cement into an aqueous suspension of palygorskite. It is distinguished by the fact that, in order to improve the quality of the mud and the strength of the cement stone while simultaneously decreasing gas permeability, the aqueous suspension of palygorskite is dispersed until its viscosity stabilizes before the Portland cement is mixed in; after mixing in the Portland cement, the resulting cement-clay mixture is subjected to additional dispersion under pressure. The ratio of the ingredients is the following (% by mass): Portland cement 32.0-61.0; palygorskite 1.2-2.9; water--the rest.

  14. Stochastic weighted particle methods for population balance equations with coagulation, fragmentation and spatial inhomogeneity

    International Nuclear Information System (INIS)

    Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus

    2015-01-01

    Graphical abstract: -- Highlights: •Problems concerning multi-compartment population balance equations are studied. •A class of fragmentation weight transfer functions is presented. •Three stochastic weighted algorithms are compared against the direct simulation algorithm. •The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. •The algorithms are applied to a multi-dimensional granulation model. -- Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant amount of large particles and high fragmentation rates.

  15. Stochastic weighted particle methods for population balance equations with coagulation, fragmentation and spatial inhomogeneity

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kok Foong [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstraße 39, 10117 Berlin (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, Singapore, 637459 (Singapore)

    2015-12-15

    Graphical abstract: -- Highlights: •Problems concerning multi-compartment population balance equations are studied. •A class of fragmentation weight transfer functions is presented. •Three stochastic weighted algorithms are compared against the direct simulation algorithm. •The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. •The algorithms are applied to a multi-dimensional granulation model. -- Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant amount of large particles and high fragmentation rates.

  16. A fuzzy MCDM model with objective and subjective weights for evaluating service quality in hotel industries

    Science.gov (United States)

    Zoraghi, Nima; Amiri, Maghsoud; Talebi, Golnaz; Zowghi, Mahdi

    2013-12-01

    This paper presents a fuzzy multi-criteria decision-making (FMCDM) model by integrating both subjective and objective weights for ranking and evaluating the service quality in hotels. The objective method selects weights of criteria through mathematical calculation, while the subjective method uses judgments of decision makers. In this paper, we use a combination of weights obtained by both approaches in evaluating service quality in hotel industries. A real case study that considered ranking five hotels is illustrated. Examples are shown to indicate capabilities of the proposed method.

  17. The Smoothing Artifact of Spatially Constrained Canonical Correlation Analysis in Functional MRI

    Directory of Open Access Journals (Sweden)

    Dietmar Cordes

    2012-01-01

    Full Text Available A wide range of studies show the capacity of multivariate statistical methods for fMRI to improve mapping of brain activations in a noisy environment. An advanced method uses local canonical correlation analysis (CCA to encompass a group of neighboring voxels instead of looking at the single voxel time course. The value of a suitable test statistic is used as a measure of activation. It is customary to assign the value to the center voxel; however, this is a choice of convenience and without constraints introduces artifacts, especially in regions of strong localized activation. To compensate for these deficiencies, different spatial constraints in CCA have been introduced to enforce dominance of the center voxel. However, even if the dominance condition for the center voxel is satisfied, constrained CCA can still lead to a smoothing artifact, often called the “bleeding artifact of CCA”, in fMRI activation patterns. In this paper a new method is introduced to measure and correct for the smoothing artifact for constrained CCA methods. It is shown that constrained CCA methods corrected for the smoothing artifact lead to more plausible activation patterns in fMRI as shown using data from a motor task and a memory task.

  18. Mixed methods in psychotherapy research: A review of method(ology) integration in psychotherapy science.

    Science.gov (United States)

    Bartholomew, Theodore T; Lockard, Allison J

    2018-06-13

    Mixed methods can foster depth and breadth in psychological research. However, its use remains in development in psychotherapy research. Our purpose was to review the use of mixed methods in psychotherapy research. Thirty-one studies were identified via the PRISMA systematic review method. Using Creswell & Plano Clark's typologies to identify design characteristics, we assessed each study for rigor and how each used mixed methods. Key features of mixed methods designs and these common patterns were identified: (a) integration of clients' perceptions via mixing; (b) understanding group psychotherapy; (c) integrating methods with cases and small samples; (d) analyzing clinical data as qualitative data; and (e) exploring cultural identities in psychotherapy through mixed methods. The review is discussed with respect to the value of integrating multiple data in single studies to enhance psychotherapy research. © 2018 Wiley Periodicals, Inc.

  19. Sustaining Lesson Study: Resources and Factors that Support and Constrain Mathematics Teachers' Ability to Continue After the Grant Ends

    Science.gov (United States)

    Druken, Bridget Kinsella

    Lesson study, a teacher-led vehicle for inquiring into teacher practice through creating, enacting, and reflecting on collaboratively designed research lessons, has been shown to improve mathematics teacher practice in the United States, such as improving knowledge about mathematics, changing teacher practice, and developing communities of teachers. Though it has been described as a sustainable form of professional development, little research exists on what might support teachers in continuing to engage in lesson study after a grant ends. This qualitative and multi-case study investigates the sustainability of lesson study as mathematics teachers engage in a district scale-up lesson study professional experience after participating in a three-year California Mathematics Science Partnership (CaMSP) grant to improve algebraic instruction. To do so, I first provide a description of material (e.g. curricular materials and time), human (attending district trainings and interacting with mathematics coaches), and social (qualities like trust, shared values, common goals, and expectations developed through relationships with others) resources present in the context of two school districts as reported by participants. I then describe practices of lesson study reported to have continued. I also report on teachers' conceptions of what it means to engage in lesson study. I conclude by describing how these results suggest factors that supported and constrained teachers' in continuing lesson study. To accomplish this work, I used qualitative methods of grounded theory informed by a modified sustainability framework on interview, survey, and case study data about teachers, principals, and Teachers on Special Assignment (TOSAs). Four cases were selected to show the varying levels of lesson study practices that continued past the conclusion of the grant. Analyses reveal varying levels of integration, linkage, and synergy among both formally and informally arranged groups of

  20. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    of torsionally constrained DNA by free RNAP. We asked whether or not a newly synthesized RNA chain would limit transcription elongation. For this purpose we developed a method to immobilize covalently closed circular DNA to streptavidin-coated beads via a peptide nucleic acid (PNA)-biotin conjugate in principle...

  1. Momentum integral network method for thermal-hydraulic transient analysis

    International Nuclear Information System (INIS)

    Van Tuyle, G.J.

    1983-01-01

    A new momentum integral network method has been developed, and tested in the MINET computer code. The method was developed in order to facilitate the transient analysis of complex fluid flow and heat transfer networks, such as those found in the balance of plant of power generating facilities. The method employed in the MINET code is a major extension of a momentum integral method reported by Meyer. Meyer integrated the momentum equation over several linked nodes, called a segment, and used a segment average pressure, evaluated from the pressures at both ends. Nodal mass and energy conservation determined nodal flows and enthalpies, accounting for fluid compression and thermal expansion

  2. Extending the Matrix Element Method beyond the Born approximation: calculating event weights at next-to-leading order accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Martini, Till; Uwer, Peter [Humboldt-Universität zu Berlin, Institut für Physik,Newtonstraße 15, 12489 Berlin (Germany)

    2015-09-14

    In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method in NLO. We modify the recombination procedure used in jet algorithms, to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As further application and proof of concept, we apply the Matrix Element Method in NLO accuracy to the mass determination of top quarks produced in e⁺e⁻ annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used in leading or next-to-leading order.

  3. Probabilistic Constrained Load Flow Considering Integration of Wind Power Generation and Electric Vehicles

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John)

    2009-01-01

    A new formulation and solution of the probabilistic constrained load flow (PCLF) problem suitable for modern power systems with wind power generation and electric vehicle (EV) demand or supply is presented. The developed stochastic model of EV demand/supply and the wind power generation model...... are incorporated into load flow studies. In the resulting PCLF formulation, discrete and continuous control parameters are engaged. Therefore, a hybrid learning automata system (HLAS) is developed to find the optimal offline control settings over a whole planning period of the power system. The process of HLAS

  4. A method of segment weight optimization for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Pei Xi; Cao Ruifen; Jing Jia; Cheng Mengyun; Zheng Huaqing; Li Jia; Huang Shanqing; Li Gui; Song Gang; Wang Weihua; Wu Yican; FDS Team

    2011-01-01

    The error introduced by leaf sequencing often means that Intensity-Modulated Radiation Therapy (IMRT) treatment plans cannot meet clinical demands. The optimization approach in this paper reduces this error and effectively improves the efficiency of plan making. A Conjugate Gradient algorithm is used to optimize the segment weights and readjust the segment shapes, which ultimately minimizes the error introduced by the preceding leaf sequencing. Typical clinical cases were tested with a precision radiotherapy system; comparing the dose-volume histograms of the target area and the organs at risk, as well as the isodose lines on computed tomography (CT) images, we found that the results improved significantly after optimizing the segment weights. The segment weight optimization approach based on the Conjugate Gradient method allows treatment planning to meet clinical requests more efficiently and therefore has broad application prospects. (authors)
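
    A minimal sketch of segment-weight re-optimisation in the spirit of the record, assuming a random placeholder dose-influence matrix rather than a clinical dose engine: the squared deviation between delivered and prescribed dose is minimised over the segment weights with a conjugate-gradient optimiser, with a soft penalty discouraging negative weights.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n_vox, n_seg = 200, 12
    dose_per_seg = rng.random((n_vox, n_seg))      # placeholder dose-influence matrix
    prescribed = np.full(n_vox, 6.0)               # placeholder prescription (arbitrary units)

    def objective(w):
        residual = dose_per_seg @ w - prescribed
        penalty = np.sum(np.minimum(w, 0.0) ** 2)  # soft penalty against negative weights
        return residual @ residual + 1e3 * penalty

    w0 = np.ones(n_seg)
    result = minimize(objective, w0, method="CG")  # conjugate-gradient weight optimisation
    print(result.fun, np.round(result.x, 2))
    ```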

  5. Weighted Integral Representations of Functions in the Complex Space

    Directory of Open Access Journals (Sweden)

    Arman H. Karapetyan

    2012-01-01

    Full Text Available For C1-functions given in the complex space, weighted integral representations are obtained in which the orthogonal projector of a weighted space of square-integrable functions onto its subspace of entire functions appears; the integral operator is given by an explicitly constructed kernel Φ, which is investigated in detail.

  6. Statistical mechanics of budget-constrained auctions

    OpenAIRE

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-01-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). Based on the cavity method of statistical mechanics, we introduce a message passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution,...

  7. Excavation of attractor modules for nasopharyngeal carcinoma via integrating systemic module inference with attract method.

    Science.gov (United States)

    Jiang, T; Jiang, C-Y; Shu, J-H; Xu, Y-J

    2017-07-10

    The molecular mechanism of nasopharyngeal carcinoma (NPC) is poorly understood and effective therapeutic approaches are needed. This research aimed to excavate the attractor modules involved in the progression of NPC and provide further understanding of the underlying mechanism of NPC. Based on the gene expression data of NPC, two specific protein-protein interaction networks for NPC and control conditions were re-weighted using Pearson correlation coefficient. Then, a systematic tracking of candidate modules was conducted on the re-weighted networks via cliques algorithm, and a total of 19 and 38 modules were separately identified from NPC and control networks, respectively. Among them, 8 pairs of modules with similar gene composition were selected, and 2 attractor modules were identified via the attract method. Functional analysis indicated that these two attractor modules participate in one common bioprocess of cell division. Based on the strategy of integrating systemic module inference with the attract method, we successfully identified 2 attractor modules. These attractor modules might play important roles in the molecular pathogenesis of NPC via affecting the bioprocess of cell division in a conjunct way. Further research is needed to explore the correlations between cell division and NPC.

  8. Integrals of Frullani type and the method of brackets

    Directory of Open Access Journals (Sweden)

    Bravo Sergio

    2017-01-01

    Full Text Available The method of brackets is a collection of heuristic rules, some of which have been made rigorous, that provide a flexible, direct method for the evaluation of definite integrals. The present work uses this method to establish classical formulas due to Frullani which provide values of a specific family of integrals. Some generalizations are established.
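
    For reference, the classical Frullani result that the record evaluates via the method of brackets can be stated as follows (assuming a > 0, b > 0 and f continuous with finite limits f(0) and f(∞)):

    ```latex
    \int_{0}^{\infty} \frac{f(ax) - f(bx)}{x}\, dx
      \;=\; \bigl(f(0) - f(\infty)\bigr)\,\ln\frac{b}{a},
    \qquad a, b > 0 .
    ```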

  9. Perceptions of Weight and Health Practices in Hispanic Children: A Mixed-Methods Study

    Directory of Open Access Journals (Sweden)

    Byron Alexander Foster

    2015-01-01

    Full Text Available Background. Perception of weight by parents of obese children may be associated with willingness to engage in behavior change. The relationship between parents’ perception of their child’s weight and their health beliefs and practices is poorly understood, especially among the Hispanic population which experiences disparities in childhood obesity. This study sought to explore the relationship between perceptions of weight and health beliefs and practices in a Hispanic population. Methods. A cross-sectional, mixed-methods approach was used with semistructured interviews conducted with parent-child (2–5 years old) dyads in a primarily Hispanic, low-income population. Parents were queried on their perceptions of their child’s health, health practices, activities, behaviors, and beliefs. A grounded theory approach was used to analyze participants’ discussion of health practices and behaviors. Results. Forty parent-child dyads completed the interview. Most (58%) of the parents of overweight and obese children misclassified their child’s weight status. The qualitative analysis showed that accurate perception of weight was associated with internal motivation and more concrete ideas of what healthy meant for their child. Conclusions. The qualitative data suggest there may be populations at different stages of readiness for change among parents of overweight and obese children, incorporating this understanding should be considered for interventions.

  10. Alternative containment integrity test methods, an overview of possible techniques

    International Nuclear Information System (INIS)

    Spletzer, B.L.

    1986-01-01

    A study is being conducted to develop and analyze alternative methods for testing of containment integrity. The study is focused on techniques for continuously monitoring containment integrity to provide rapid detection of existing leaks, thus providing greater certainty of the integrity of the containment at any time. The study is also intended to develop techniques applicable to the currently required Type A integrated leakage rate tests. A brief discussion of the range of alternative methods currently being considered is presented. The methods include applicability to all major containment types, operating and shutdown plant conditions, and quantitative and qualitative leakage measurements. The techniques are analyzed in accordance with the current state of knowledge of each method. The bulk of the techniques discussed are in the conceptual stage, have not been tested in actual plant conditions, and are presented here as a possible future direction for evaluating containment integrity. Of the methods considered, no single method provides optimum performance for all containment types. Several methods are limited in the types of containment for which they are applicable. The results of the study to date indicate that techniques for continuous monitoring of containment integrity exist for many plants and may be implemented at modest cost

  11. Integrated environmental and economic assessment of waste management systems

    DEFF Research Database (Denmark)

    Martinez Sanchez, Veronica

    in the “Optimization approach” the scenarios are the results of an optimization process. • The cost approach describes cost principles and level of LCA integration. Conventional and Environmental LCCs are financial assessments, i.e. include marketed goods/services, but while Environmental LCCs include environmental...... assessment of SWM systems alongside environmental impacts assessment to take budget constraints into account. In light of the need for combined environmental and economic assessment of SWM, this PhD thesis developed a consistent and comprehensive method for integrated environmental and economic assessment...... of SWM technologies and systems. The method resulted from developing further the generic Life Cycle Costing (LCC) framework suggested by Hunkeler et al. (2008) and Swarr et al. (2011) to apply it to the field of SWM. The method developed includes: two modelling approaches (Accounting and Optimization

  12. Constrained-DFT method for accurate energy-level alignment of metal/molecule interfaces

    KAUST Repository

    Souza, A. M.

    2013-10-07

    We present a computational scheme for extracting the energy-level alignment of a metal/molecule interface, based on constrained density functional theory and local exchange and correlation functionals. The method, applied here to benzene on Li(100), allows us to evaluate charge-transfer energies, as well as the spatial distribution of the image charge induced on the metal surface. We systematically study the energies for charge transfer from the molecule to the substrate as function of the molecule-substrate distance, and investigate the effects arising from image-charge confinement and local charge neutrality violation. For benzene on Li(100) we find that the image-charge plane is located at about 1.8 Å above the Li surface, and that our calculated charge-transfer energies compare perfectly with those obtained with a classical electrostatic model having the image plane located at the same position. The methodology outlined here can be applied to study any metal/organic interface in the weak coupling limit at the computational cost of a total energy calculation. Most importantly, as the scheme is based on total energies and not on correcting the Kohn-Sham quasiparticle spectrum, accurate results can be obtained with local/semilocal exchange and correlation functionals. This enables a systematic approach to convergence.

  13. Constrained-DFT method for accurate energy-level alignment of metal/molecule interfaces

    KAUST Repository

    Souza, A. M.; Rungger, I.; Pemmaraju, C. D.; Schwingenschlögl, Udo; Sanvito, S.

    2013-01-01

    We present a computational scheme for extracting the energy-level alignment of a metal/molecule interface, based on constrained density functional theory and local exchange and correlation functionals. The method, applied here to benzene on Li(100), allows us to evaluate charge-transfer energies, as well as the spatial distribution of the image charge induced on the metal surface. We systematically study the energies for charge transfer from the molecule to the substrate as function of the molecule-substrate distance, and investigate the effects arising from image-charge confinement and local charge neutrality violation. For benzene on Li(100) we find that the image-charge plane is located at about 1.8 Å above the Li surface, and that our calculated charge-transfer energies compare perfectly with those obtained with a classical electrostatic model having the image plane located at the same position. The methodology outlined here can be applied to study any metal/organic interface in the weak coupling limit at the computational cost of a total energy calculation. Most importantly, as the scheme is based on total energies and not on correcting the Kohn-Sham quasiparticle spectrum, accurate results can be obtained with local/semilocal exchange and correlation functionals. This enables a systematic approach to convergence.

  14. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    Science.gov (United States)

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
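
    As a hedged illustration of the kind of problem the report's graphical example describes (allocating capacity between "regular" and "severe" patients to maximise health benefit under time and budget constraints), the sketch below states it as a linear program and solves it with scipy.optimize.linprog. All coefficients are hypothetical.

    ```python
    from scipy.optimize import linprog

    # Decision variables: x = [n_regular, n_severe] patients treated.
    # Hypothetical benefit, time and budget coefficients (illustration only).
    benefit = [2.0, 5.0]          # health benefit per patient
    time_per = [1.0, 4.0]         # clinician-hours per patient,  total <= 40
    cost_per = [100.0, 200.0]     # cost per patient,             total <= 3000

    res = linprog(c=[-b for b in benefit],               # linprog minimises, so negate
                  A_ub=[time_per, cost_per],
                  b_ub=[40.0, 3000.0],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)        # optimal patient mix and the benefit it achieves
    ```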

  15. Scheduling of resource-constrained projects

    CERN Document Server

    Klein, Robert

    2000-01-01

    Project management has become a widespread instrument enabling organizations to efficiently master the challenges of steadily shortening product life cycles, global markets and decreasing profit margins. With projects increasing in size and complexity, their planning and control represents one of the most crucial management tasks. This is especially true for scheduling, which is concerned with establishing execution dates for the sub-activities to be performed in order to complete the project. The ability to manage projects where resources must be allocated between concurrent projects or even sub-activities of a single project requires the use of commercial project management software packages. However, the results yielded by the solution procedures included are often rather unsatisfactory. Scheduling of Resource-Constrained Projects develops more efficient procedures, which can easily be integrated into software packages by incorporated programming languages, and thus should be of great interest for practiti...

  16. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
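
    A hedged sketch of the sampling idea (plain Latin Hypercube only; the paper's orthogonal-array constructions are not reproduced): after the substitution t = u/v the well function becomes W(u) = E1(u) = ∫₀¹ exp(−u/v)/v dv, which is estimated by stratified sampling on (0, 1) and compared against scipy.special.exp1 as the benchmark.

    ```python
    import numpy as np
    from scipy.special import exp1

    def latin_hypercube_1d(n, rng):
        """One-dimensional Latin Hypercube sample on (0, 1): one point per stratum."""
        v = (rng.permutation(n) + rng.random(n)) / n
        return np.clip(v, 1e-12, None)    # guard against an exact zero

    def well_function_lhs(u, n=10_000, seed=0):
        """Estimate W(u) = E1(u) via the substitution E1(u) = int_0^1 exp(-u/v)/v dv."""
        rng = np.random.default_rng(seed)
        v = latin_hypercube_1d(n, rng)
        return np.mean(np.exp(-u / v) / v)

    for u in (0.05, 0.5, 2.0):
        print(u, well_function_lhs(u), exp1(u))   # sampling estimate vs. scipy reference
    ```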

  17. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    Science.gov (United States)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.

  18. Integration of alternative feedstreams for biomass treatment and utilization

    Science.gov (United States)

    Hennessey, Susan Marie [Avondale, PA; Friend, Julie [Claymont, DE; Dunson, Jr., James B.; Tucker, III, Melvin P.; Elander, Richard T [Evergreen, CO; Hames, Bonnie [Westminster, CO

    2011-03-22

    The present invention provides a method for treating biomass composed of integrated feedstocks to produce fermentable sugars. One aspect of the methods described herein includes a pretreatment step wherein biomass is integrated with an alternative feedstream and the resulting integrated feedstock, at relatively high concentrations, is treated with a low concentration of ammonia relative to the dry weight of biomass. In another aspect, a high solids concentration of pretreated biomass is integrated with an alternative feedstream for saccharification.

  19. Minimum weight design of prestressed concrete reactor pressure vessels

    International Nuclear Information System (INIS)

    Boes, R.

    1975-01-01

    A method of non-linear programming for minimizing the volume of rotationally symmetric prestressed concrete reactor pressure vessels is presented. It is assumed that the inner shape, the loads and the degree of prestressing are prescribed, whereas the outer shape is to be determined. Prestressing includes rotational and vertical tension. The objective function minimizes the weight of the PCRV. The constrained minimization problem is converted into an unconstrained problem by adding interior penalty functions to the objective function. The minimum is determined by the variable metric method (Davidon-Fletcher-Powell), using both values and derivatives of the modified objective function. The one-dimensional search is approximated by a method of Kund. The optimization variables are scaled. The method is applied to a pressure vessel similar to that of the THTR. It is found that the thickness of the cylindrical wall may be reduced considerably for the load cases considered in the optimization. The thickness of the cover is reduced slightly. The largest reduction in wall thickness occurs at the junction of wall and cover. (Auth.)
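
    A toy sketch of the interior-penalty formulation described in the record, not a pressure-vessel model: an inequality constraint is folded into the objective through a logarithmic barrier whose weight is driven towards zero, and each barrier subproblem is minimised with a quasi-Newton (variable-metric) routine, here SciPy's BFGS as a stand-in for Davidon-Fletcher-Powell.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: minimise a convex "volume" proxy f(x) = x0**2 + x1**2
    # subject to the inequality constraint g(x) = x0 + x1 - 4 >= 0.
    def volume(x):
        return x[0] ** 2 + x[1] ** 2

    def g(x):
        return x[0] + x[1] - 4.0

    def penalised(x, r, eps=1e-9):
        gx = g(x)
        if gx < eps:   # steep, finite extension outside the interior keeps the line search stable
            return volume(x) + r * (1e9 * (eps - gx) - np.log(eps))
        return volume(x) - r * np.log(gx)

    x = np.array([3.0, 3.0])                 # strictly feasible starting point
    for r in (1.0, 0.1, 0.01, 0.001):        # drive the barrier parameter towards zero
        x = minimize(lambda v: penalised(v, r), x, method="BFGS").x
    print(x, volume(x))                      # approaches the constrained optimum (2, 2)
    ```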

  20. Coherent states in constrained systems

    International Nuclear Information System (INIS)

    Nakamura, M.; Kojima, K.

    2001-01-01

    When quantizing the constrained systems, there often arise the quantum corrections due to the non-commutativity in the re-ordering of constraint operators in the products of operators. In the bosonic second-class constraints, furthermore, the quantum corrections caused by the uncertainty principle should be taken into account. In order to treat these corrections simultaneously, the alternative projection technique of operators is proposed by introducing the available minimal uncertainty states of the constraint operators. Using this projection technique together with the projection operator method (POM), these two kinds of quantum corrections were investigated

  1. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigating and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift free navigation is achieved with respect to the environment.

  2. Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions

    Science.gov (United States)

    Carpenter, J. Russell; Markley, F. Landis

    2012-01-01

    We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the
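
    A minimal sketch of the Wald sequential probability ratio test that drives the maneuver decision, with thresholds set directly by the false-alarm rate α and the missed-detection rate β. The "innovations" here are simulated Gaussian samples under two hypothetical variances, not outputs of the constrained filter bank described in the record.

    ```python
    import numpy as np
    from scipy.stats import norm

    def wald_sprt(samples, pdf_h0, pdf_h1, alpha=0.01, beta=0.01):
        """Wald SPRT: accumulate the log-likelihood ratio sample by sample and stop at
        thresholds set by the false-alarm rate alpha and missed-detection rate beta."""
        upper = np.log((1.0 - beta) / alpha)      # accept H1 when exceeded
        lower = np.log(beta / (1.0 - alpha))      # accept H0 when undershot
        llr = 0.0
        for k, x in enumerate(samples, start=1):
            llr += np.log(pdf_h1(x)) - np.log(pdf_h0(x))
            if llr >= upper:
                return "accept H1", k
            if llr <= lower:
                return "accept H0", k
        return "undecided", len(samples)

    rng = np.random.default_rng(2)
    data = rng.normal(0.0, 2.0, size=200)                      # truth: the wider H1 distribution
    decision, n_used = wald_sprt(data,
                                 pdf_h0=lambda x: norm.pdf(x, scale=1.0),
                                 pdf_h1=lambda x: norm.pdf(x, scale=2.0))
    print(decision, "after", n_used, "samples")
    ```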

  3. A Robust WLS Power System State Estimation Method Integrating a Wide-Area Measurement System and SCADA Technology

    Directory of Open Access Journals (Sweden)

    Tao Jin

    2015-04-01

    Full Text Available With the development of modern society, the scale of power systems is rapidly increasing, and their structure and operating modes are becoming more complex. It is therefore increasingly important for dispatchers to know the state parameters of the power network precisely through state estimation. This paper proposes a robust WLS power system state estimation method integrating a wide-area measurement system (WAMS) and SCADA technology, incorporating phasor measurements and the results of the traditional state estimator in a post-processing estimator, which greatly reduces the scale of the nonlinear estimation problem as well as the number of iterations and the processing time per iteration. The paper first analyzes the wide-area state estimation model in detail; then, because least squares does not account for bad data and outliers, it proposes a robust weighted least squares (WLS) method that combines a robust estimation principle with least squares through equivalent weights. The performance is assessed using mathematical models of the distribution network. Simulations and experiments prove the proposed method to be accurate and reliable.
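
    A hedged sketch of the weighted-least-squares core with an equivalent-weight (iteratively reweighted) robustness step, on a generic linear measurement model z = Hx + e rather than a power-network model: measurements whose normalised residuals are large are down-weighted Huber-style, which suppresses the influence of a gross error.

    ```python
    import numpy as np

    def robust_wls(h, z, sigma, n_iter=5, k_huber=1.5):
        """Weighted least squares x = (H^T W H)^{-1} H^T W z, with Huber-style
        re-weighting of measurements whose normalised residuals are large."""
        w = 1.0 / sigma**2                                   # initial weights from meas. variance
        x = np.zeros(h.shape[1])
        for _ in range(n_iter):
            hw = h * w[:, None]
            x = np.linalg.solve(h.T @ hw, hw.T @ z)
            r = (z - h @ x) / sigma                          # normalised residuals
            scale = np.where(np.abs(r) > k_huber, k_huber / np.abs(r), 1.0)
            w = scale / sigma**2                             # equivalent (down-weighted) weights
        return x

    rng = np.random.default_rng(3)
    x_true = np.array([1.0, -2.0])
    h = rng.normal(size=(40, 2))
    sigma = np.full(40, 0.1)
    z = h @ x_true + rng.normal(0, sigma)
    z[5] += 5.0                                              # one gross error (bad data)
    print(robust_wls(h, z, sigma))                           # close to x_true despite the outlier
    ```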

  4. Bidirectional Dynamic Diversity Evolutionary Algorithm for Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Weishang Gao

    2013-01-01

    Full Text Available Evolutionary algorithms (EAs) have been shown to be effective for complex constrained optimization problems. However, an inflexible exploration-exploitation balance and improper penalties in penalty-function EAs can cause the global optimum near or on the constraint boundary to be missed, and determining an appropriate penalty coefficient is difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) with multiple agents guiding exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA, potential advantage is detected by three kinds of agents. The scale and density of the agents change dynamically as potentially optimal areas emerge, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. This bidirectional evolution with multiple agents not only avoids the problem of determining a penalty coefficient but also converges quickly to the global optimum near or on the constraint boundary. By examining the speed and accuracy of Bi-DDEA across benchmark functions, the proposed method is shown to be effective.

  5. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    Directory of Open Access Journals (Sweden)

    Samaneh Mazaheri

    2015-01-01

    Full Text Available Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. Despite numerous attempts to automate ventricle segmentation and tracking in echocardiography, the task remains challenging because of low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method intended to increase the segmentability of echocardiographic features such as the endocardium and to improve image contrast. It also expands the field of view, reduces the impact of noise and artifacts, and enhances the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information across all overlapping images using a combination of principal component analysis and the discrete wavelet transform. For evaluation, the results of several well-known techniques were compared with those of the proposed method, and different metrics were implemented to evaluate its performance. The presented pixel-based method based on the integration of PCA and DWT gives the best result for the segmentability of cardiac ultrasound images and performs better on all metrics.
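
    As a simplified illustration of the PCA half of such a PCA+DWT fusion scheme (the wavelet decomposition and the paper's exact weighting rule are omitted), the sketch below derives fusion weights for two co-registered images from the leading eigenvector of their joint covariance.

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    """Fusion weights for two co-registered images from the leading
    eigenvector of their joint covariance (the PCA part of a PCA+DWT
    scheme; the wavelet decomposition step is not shown here)."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])   # leading principal component
    w = v / v.sum()
    return w[0], w[1]

def fuse(img_a, img_b):
    """Weighted blend of the two images using the PCA-derived weights."""
    wa, wb = pca_fusion_weights(img_a, img_b)
    return wa * img_a + wb * img_b
```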

  6. Integrated resource planning and the environment: A guide to the use of multi-criteria decision methods

    Energy Technology Data Exchange (ETDEWEB)

    Hobbs, B.F.; Meier, P. [IDEA, Inc., Washington, DC (United States)

    1994-07-01

    This report is intended as a guide to the use of multi-criteria decision-making methods (MCDM) for incorporating environmental factors in electric utility integrated resource planning (IRP). Application of MCDM is emerging as an alternative and complementary method to explicit economic valuation for weighting environmental effects. We provide a step-by-step guide to the elements that are common to all MCDM applications. The report discusses how environmental attributes should be selected and defined; how options should be selected (and how risk and uncertainty should be accounted for); how environmental impacts should be quantified (with particular attention to the problems of location); how screening should be conducted; the construction and analysis of trade-off curves; dominance analysis, which seeks to identify clearly superior options, and reject clearly inferior options; scaling of impacts, in which we translate social, economic and environmental impacts into value functions; the determination of weights, with particular emphasis on ensuring that the weights reflect the trade-offs that decision-makers are actually willing to make; the amalgamation of attributes into overall plan rankings; and the resolution of differences among methods, and between individuals. There are many MCDM methods available for accomplishing these steps. They can differ in their appropriateness, ease of use, validity, and results. This report also includes an extensive review of past applications, in which we use the step-by-step guide to examine how these applications satisfied the criteria of appropriateness, ease of use, and validity. Case material is drawn from a wide field of utility applications, ranging from project-level environmental impact statements to capacity bidding programs, and from the results of two case studies conducted as part of this research.
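
    The scaling, weighting, and amalgamation steps listed above are commonly realized with an additive value model; the sketch below is a generic illustration of that idea (the inputs and names are hypothetical), not a procedure taken from the report.

```python
import numpy as np

def additive_value_ranking(impacts, value_functions, weights):
    """Amalgamate plan impacts into overall scores with an additive value model:
        V(plan) = sum_j w_j * v_j(impact_j)

    impacts:         (n_plans, n_attributes) raw impact levels
    value_functions: vectorized callables mapping an impact level to [0, 1]
    weights:         attribute weights reflecting acceptable trade-offs
    Returns overall scores; higher is better.
    """
    impacts = np.asarray(impacts, float)
    scaled = np.column_stack([vf(impacts[:, j])
                              for j, vf in enumerate(value_functions)])
    return scaled @ np.asarray(weights, float)
```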

  7. Direct integration multiple collision integral transport analysis method for high energy fusion neutronics

    International Nuclear Information System (INIS)

    Koch, K.R.

    1985-01-01

    A new analysis method specially suited for the inherent difficulties of fusion neutronics was developed to provide detailed studies of the fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous energy formulation with the exact treatment of the energy angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations

  8. The continuous end-state comfort effect: weighted integration of multiple biases.

    Science.gov (United States)

    Herbort, Oliver; Butz, Martin V

    2012-05-01

    The grasp orientation when grasping an object is frequently aligned in anticipation of the intended rotation of the object (end-state comfort effect). We analyzed grasp orientation selection in a continuous task to determine the mechanisms underlying the end-state comfort effect. Participants had to grasp a box by a circular handle-which allowed for arbitrary grasp orientations-and then had to rotate the box by various angles. Experiments 1 and 2 revealed that the direction of rotation strongly determined grasp orientations and that end postures varied considerably. Experiments 3 and 4 further showed that visual stimuli and initial arm postures biased grasp orientations if the intended rotation could be easily achieved. The data show that end-state comfort, but also other factors, determines grasp orientation selection. A simple mechanism that integrates multiple weighted biases can account for the data.
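
    A mechanism that integrates multiple weighted biases can be pictured as a weighted circular mean of candidate grasp orientations; the sketch below is only a schematic illustration (the bias sources and weights are hypothetical, not the authors' fitted model).

```python
import numpy as np

def integrate_biases(angles_deg, weights):
    """Weighted circular mean of several grasp-orientation biases (degrees).

    angles_deg: orientations suggested by each bias, e.g.
                [end_state_comfort, visual_cue, initial_posture]
    weights:    relative strength of each bias (need not sum to 1)
    """
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    w = np.asarray(weights, dtype=float)
    x = np.sum(w * np.cos(a))
    y = np.sum(w * np.sin(a))
    return np.rad2deg(np.arctan2(y, x))

# e.g. integrate_biases([-60.0, -20.0, 0.0], [0.6, 0.25, 0.15])
```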

  9. Development of gel-filter method for high enrichment of low-molecular weight proteins from serum.

    Directory of Open Access Journals (Sweden)

    Lingsheng Chen

    Full Text Available The human serum proteome has been extensively screened for biomarkers. However, the large dynamic range of protein concentrations in serum and the presence of highly abundant, large molecular weight proteins make it difficult to identify and detect changes in the amount of low-molecular-weight proteins (LMW, molecular weight ≤ 30 kDa). Here, we developed a gel-filter method consisting of four layers of tricine SDS-PAGE-based gels at different concentrations to block high-molecular-weight proteins and enrich LMW proteins. Using this method, we identified 1,576 proteins (n = 2) from 10 μL of serum, of which 559 (n = 2) were LMW proteins. Furthermore, the gel-filter method identified 67.4% and 39.8% more LMW proteins than the representative glycine SDS-PAGE and optimized-DS methods, respectively. Using a SILAC-AQUA approach with labeled recombinant protein as an internal standard, the recovery rate for GST spiked into serum during treatment with the gel-filter, optimized-DS, and ProteoMiner methods was 33.1 ± 0.01%, 18.7 ± 0.01%, and 9.6 ± 0.03%, respectively. These results demonstrate that the gel-filter method offers a rapid, highly reproducible, and efficient approach for screening biomarkers from serum through proteomic analyses.

  10. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  11. Achieving Integration in Mixed Methods Designs—Principles and Practices

    OpenAIRE

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participato...

  12. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  13. HotpathVM: An Effective JIT for Resource-constrained Devices

    DEFF Research Database (Denmark)

    Gal, Andreas; Franz, Michael; Probst, Christian

    2006-01-01

    We present a just-in-time compiler for a Java VM that is small enough to fit on resource-constrained devices, yet surprisingly effective. Our system dynamically identifies traces of frequently executed bytecode instructions (which may span several basic blocks across several methods) and compiles...

  14. Wafer-level testing and test during burn-in for integrated circuits

    CERN Document Server

    Bahukudumbi, Sudarshan

    2010-01-01

    Wafer-level testing refers to a critical process of subjecting integrated circuits and semiconductor devices to electrical testing while they are still in wafer form. Burn-in is a temperature/bias reliability stress test used in detecting and screening out potential early life device failures. This hands-on resource provides a comprehensive analysis of these methods, showing how wafer-level testing during burn-in (WLTBI) helps lower product cost in semiconductor manufacturing.Engineers learn how to implement the testing of integrated circuits at the wafer-level under various resource constrain

  15. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    Science.gov (United States)

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
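
    For orientation, the sketch below Euler-integrates a standard feasible-point flow that projects the gradient onto the tangent space of the equality constraint; it uses a Moore-Penrose pseudoinverse in place of the paper's singularity-free projection matrix, so it illustrates the general setting rather than the proposed method.

```python
import numpy as np

def constrained_gradient_flow(x0, grad_f, jac_h, h, dt=1e-2, steps=5000):
    """Euler-integrate the continuous-time feasible-point flow
        x' = -(I - J+ J) grad_f(x),   with J = dh/dx.
    The pseudoinverse J+ stays well defined even when constraint
    gradients are linearly dependent; the paper constructs its own
    singularity-free projection, used here only as a stand-in.
    Requires a feasible starting point h(x0) = 0.
    """
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        J = np.atleast_2d(jac_h(x))
        P = np.eye(x.size) - np.linalg.pinv(J) @ J   # tangent-space projector
        x = x + dt * (-P @ grad_f(x))
    return x, h(x)

# Example: minimize f = x0^2 + x1^2 subject to h = x0 + x1 - 1 = 0
sol, resid = constrained_gradient_flow(
    [2.0, -1.0],
    grad_f=lambda x: 2 * x,
    jac_h=lambda x: np.array([[1.0, 1.0]]),
    h=lambda x: np.array([x[0] + x[1] - 1.0]),
)
```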

  16. Measuring decision weights in recognition experiments with multiple response alternatives: comparing the correlation and multinomial-logistic-regression methods.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2012-11-01

    Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
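
    One simple reading of the correlation method for m alternatives is to correlate, per response category, the trial-by-trial stimulus component values with an indicator of whether that category was chosen; the sketch below illustrates this idea and is not the authors' exact estimator.

```python
import numpy as np

def correlation_weights(stimuli, responses, m):
    """Estimate decision weights with a correlation-style method.

    stimuli:   (n_trials, n_components) per-trial component values
    responses: (n_trials,) chosen alternatives in {0, ..., m-1}
    Returns an (m, n_components) array: for each response category, the
    point-biserial correlation between 'chose that category' and each
    stimulus component.
    """
    stimuli = np.asarray(stimuli, float)
    responses = np.asarray(responses)
    weights = np.empty((m, stimuli.shape[1]))
    for k in range(m):
        chose_k = (responses == k).astype(float)
        for j in range(stimuli.shape[1]):
            weights[k, j] = np.corrcoef(stimuli[:, j], chose_k)[0, 1]
    return weights
```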

  17. Integrating Automatic Speech Recognition and Machine Translation for Better Translation Outputs

    DEFF Research Database (Denmark)

    Liyanapathirana, Jeevanthi

    translations, combining machine translation with computer assisted translation has drawn attention in current research. This combines two prospects: the opportunity of ensuring high quality translation along with a significant performance gain. Automatic Speech Recognition (ASR) is another important area......, which caters important functionalities in language processing and natural language understanding tasks. In this work we integrate automatic speech recognition and machine translation in parallel. We aim to avoid manual typing of possible translations as dictating the translation would take less time...... to the n-best list rescoring, we also use word graphs with the expectation of arriving at a tighter integration of ASR and MT models. Integration methods include constraining ASR models using language and translation models of MT, and vice versa. We currently develop and experiment different methods...

  18. Active constrained layer damping of geometrically nonlinear vibrations of functionally graded plates using piezoelectric fiber-reinforced composites

    International Nuclear Information System (INIS)

    Panda, Satyajit; Ray, M C

    2008-01-01

    In this paper, a geometrically nonlinear dynamic analysis has been presented for functionally graded (FG) plates integrated with a patch of active constrained layer damping (ACLD) treatment and subjected to a temperature field. The constraining layer of the ACLD treatment is considered to be made of the piezoelectric fiber-reinforced composite (PFRC) material. The temperature field is assumed to be spatially uniform over the substrate plate surfaces and varied through the thickness of the host FG plates. The temperature-dependent material properties of the FG substrate plates are assumed to be graded in the thickness direction of the plates according to a power-law distribution while the Poisson's ratio is assumed to be a constant over the domain of the plate. The constrained viscoelastic layer of the ACLD treatment is modeled using the Golla–Hughes–McTavish (GHM) method. Based on the first-order shear deformation theory, a three-dimensional finite element model has been developed to model the open-loop and closed-loop nonlinear dynamics of the overall FG substrate plates under the thermal environment. The analysis suggests the potential use of the ACLD treatment with its constraining layer made of the PFRC material for active control of geometrically nonlinear vibrations of FG plates in the absence or the presence of the temperature gradient across the thickness of the plates. It is found that the ACLD treatment is more effective in controlling the geometrically nonlinear vibrations of FG plates than in controlling their linear vibrations. The analysis also reveals that the ACLD patch is more effective for controlling the nonlinear vibrations of FG plates when it is attached to the softest surface of the FG plates than when it is bonded to the stiffest surface of the plates. The effect of piezoelectric fiber orientation in the active constraining PFRC layer on the damping characteristics of the overall FG plates is also discussed

  19. Active constrained layer damping of geometrically nonlinear vibrations of functionally graded plates using piezoelectric fiber-reinforced composites

    Science.gov (United States)

    Panda, Satyajit; Ray, M. C.

    2008-04-01

    In this paper, a geometrically nonlinear dynamic analysis has been presented for functionally graded (FG) plates integrated with a patch of active constrained layer damping (ACLD) treatment and subjected to a temperature field. The constraining layer of the ACLD treatment is considered to be made of the piezoelectric fiber-reinforced composite (PFRC) material. The temperature field is assumed to be spatially uniform over the substrate plate surfaces and varied through the thickness of the host FG plates. The temperature-dependent material properties of the FG substrate plates are assumed to be graded in the thickness direction of the plates according to a power-law distribution while the Poisson's ratio is assumed to be a constant over the domain of the plate. The constrained viscoelastic layer of the ACLD treatment is modeled using the Golla-Hughes-McTavish (GHM) method. Based on the first-order shear deformation theory, a three-dimensional finite element model has been developed to model the open-loop and closed-loop nonlinear dynamics of the overall FG substrate plates under the thermal environment. The analysis suggests the potential use of the ACLD treatment with its constraining layer made of the PFRC material for active control of geometrically nonlinear vibrations of FG plates in the absence or the presence of the temperature gradient across the thickness of the plates. It is found that the ACLD treatment is more effective in controlling the geometrically nonlinear vibrations of FG plates than in controlling their linear vibrations. The analysis also reveals that the ACLD patch is more effective for controlling the nonlinear vibrations of FG plates when it is attached to the softest surface of the FG plates than when it is bonded to the stiffest surface of the plates. The effect of piezoelectric fiber orientation in the active constraining PFRC layer on the damping characteristics of the overall FG plates is also discussed.

  20. Balancing computation and communication power in power constrained clusters

    Science.gov (United States)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    2018-05-29

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.

  1. Day-ahead optimal dispatch for wind integrated power system considering zonal reserve requirements

    International Nuclear Information System (INIS)

    Liu, Fan; Bie, Zhaohong; Liu, Shiyu; Ding, Tao

    2017-01-01

    Highlights: • Analyzing zonal reserve requirements for wind integrated power system. • Modeling day-ahead optimal dispatch solved by chance constrained programming theory. • Determining optimal zonal reserve demand with minimum confidence interval. • Analyzing numerical results on test and large-scale real-life power systems. - Abstract: Large-scale integration of renewable power presents a great challenge for day-ahead dispatch, which must manage renewable resources while providing enough reserve for system security. Because zonal reserve is an effective way to ensure reserve deliverability when the network is congested, a stochastic day-ahead dispatch optimization of a wind-integrated power system for minimum operational cost is modeled, including zonal reserve requirements and N − 1 security constraints. The stochastic model is transformed into a deterministic one based on chance-constrained programming theory, and a method for determining the optimal zonal reserve demand is proposed using the minimum confidence interval. After solving the deterministic model, a stochastic simulation is conducted to verify the validity of the solution. Numerical tests on the IEEE 39-bus system and a large-scale real-life power system demonstrate that the optimal day-ahead dispatch scheme is available and that the proposed method is effective for improving reserve deliverability and reducing load shedding after a large-capacity power outage.
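
    The transformation from a chance constraint to a deterministic one can be illustrated under a Gaussian assumption on the zonal wind power shortfall, in which case the required reserve is the shortfall mean plus a confidence-dependent number of standard deviations. The sketch below shows this textbook form, not the paper's full model.

```python
import numpy as np
from scipy.stats import norm

def zonal_reserve_requirement(mu_shortfall, sigma_shortfall, confidence=0.95):
    """Deterministic equivalent of the chance constraint
        Pr(wind power shortfall <= zonal reserve) >= confidence,
    assuming the zonal shortfall is approximately Gaussian:
        reserve >= mu + z_confidence * sigma.
    """
    z = norm.ppf(confidence)
    return mu_shortfall + z * sigma_shortfall

# e.g. a zone with 50 MW expected shortfall and 30 MW standard deviation
# needs roughly 50 + 1.645 * 30 ≈ 99 MW of reserve at 95% confidence.
```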

  2. Treatment of uncertainty through the interval smart/swing weighting method: a case study

    Directory of Open Access Journals (Sweden)

    Luiz Flávio Autran Monteiro Gomes

    2011-12-01

    Full Text Available An increasingly competitive market means that many decisions must be taken, quickly and with precision, in complex, high risk scenarios. This combination of factors makes it necessary to use decision aiding methods which provide a means of dealing with uncertainty in the judgement of the alternatives. This work presents the use of the MAUT method, combined with the INTERVAL SMART/SWING WEIGHTING method. Although multicriteria decision aiding was not conceived specifically for tackling uncertainty, the combined use of MAUT and the INTERVAL SMART/SWING WEIGHTING method allows approaching decision problems under uncertainty. The main concepts which are involved in these two methods are described and their joint application to the case study concerning the selection of a printing service supplier is presented. The case study makes use of the WINPRE software as a support tool for the calculation of dominance. It is then concluded that the proposed approach can be applied to decision making problems under uncertainty.

  3. CALF CIRCUMFERENCE AT BIRTH: A SCREENING METHOD FOR DETECTION OF LOW BIRTH WEIGHT

    Directory of Open Access Journals (Sweden)

    Sandip Kumar

    2012-12-01

    Full Text Available Background: Low Birth Weight (LBW) babies run a higher risk of morbidity and mortality in the perinatal period. However, in our country, where almost 70-80% of births take place at home or in peripheral hospitals, taking an accurate weight is a problem due to the unavailability of weighing scales and trained personnel. Hence there is a constant search for newer methods to detect LBW babies so that early interventions can be instituted. Various authors have used different surrogate anthropometric measurements in different parts of our country. In the present study, an attempt was made to validate the feasibility of using calf circumference as a predictor of LBW babies that can be used by a trained or untrained person. Objectives: To study various anthropometric measurements, including calf circumference, in newborns and to correlate these measurements with birth weight. Methods: The present study was conducted in the department of Social & Preventive Medicine, MLB Medical College, Jhansi (UP) for a period of one year. The study included 1100 consecutively delivered neonates in the maternity ward of MLB Medical College Hospital, Jhansi (UP). The birth weight (Wt), crown heel length (CHL), crown rump length (CRL), head circumference (HC), chest circumference (CC), mid arm circumference (MAC), thigh circumference (TC) and calf circumference were measured by standard techniques. All measurements were taken by a single person throughout the study period, within 24 hours of delivery. Standard statistical methods were adopted for determination of the critical limits, sensitivity, specificity and correlation coefficients of the different anthropometric measurements in relation to birth weight. Results: Analysis of the data indicates that of the 1100 newborns, 55.64% were low birth weight and 44.36% weighed more than 2500 g. The overall average birth weight was 2348 ± 505 g. Of the 1100 newborns, 608 (55.27%) were males and 492 (44.73%) were females. Average birth weight for males was 2412

  4. Hydrologic and hydraulic flood forecasting constrained by remote sensing data

    Science.gov (United States)

    Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2017-12-01

    Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real-time. The hydraulic model is LISFLOOD-FP which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space will be discussed.

  5. Entropy-Weighted Instance Matching Between Different Sourcing Points of Interest

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-01-01

    Full Text Available The crucial problem in integrating geospatial data is finding the corresponding objects (the counterparts) from different sources. Most current studies focus on object matching with individual attributes such as spatial, name, or other attributes, which avoids the difficulty of integrating those attributes but at the cost of less effective matching. In this study, we propose an approach for matching instances by integrating heterogeneous attributes, with suitable attribute weights allocated via information entropy. First, a normalized similarity formula is developed, which simplifies the calculation of spatial attribute similarity. Second, sound-based and word segmentation-based methods are adopted to eliminate semantic ambiguity when geospatial data lack a normative coding standard for expressing the name attribute. Third, category mapping is established to address the heterogeneity among different classifications. Finally, to address the non-linear characteristics of attribute similarity, the weights of the attributes are calculated from the entropy of the attributes. Experiments demonstrate that the Entropy-Weighted Approach (EWA) performs well in terms of both precision and recall for instance matching across different data sets.
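
    The entropy-based weight allocation can be sketched as follows: attributes whose similarity scores vary more across candidate pairs carry more discriminating information and receive larger weights. This is a generic entropy-weight computation assuming a non-negative similarity matrix, not the paper's exact formulation.

```python
import numpy as np

def entropy_weights(similarity_matrix):
    """Attribute weights from information entropy.

    similarity_matrix: (n_candidate_pairs, n_attributes) non-negative
    similarity scores (e.g. spatial, name, category similarity per pair).
    """
    x = np.asarray(similarity_matrix, float)
    p = x / x.sum(axis=0, keepdims=True)          # column-normalize
    p = np.where(p > 0, p, 1.0)                   # so that 0 * log(0) -> 0
    k = 1.0 / np.log(x.shape[0])
    entropy = -k * np.sum(p * np.log(p), axis=0)
    d = 1.0 - entropy                             # degree of diversification
    return d / d.sum()
```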

  6. Interfaces and Integration of Medical Image Analysis Frameworks: Challenges and Opportunities.

    Science.gov (United States)

    Covington, Kelsie; McCreedy, Evan S; Chen, Min; Carass, Aaron; Aucoin, Nicole; Landman, Bennett A

    2010-05-25

    Clinical research with medical imaging typically involves large-scale data analysis with interdependent software toolsets tied together in a processing workflow. Numerous, complementary platforms are available, but these are not readily compatible in terms of workflows or data formats. Both image scientists and clinical investigators could benefit from using the framework that is the most natural fit to the specific problem at hand, but pragmatic choices often dictate that a compromise platform is used for collaboration. Manual merging of platforms through carefully tuned scripts has been effective but is exceptionally time consuming and not feasible for large-scale integration efforts. Hence, the benefits of innovation are constrained by platform dependence. Removing this constraint via integration of algorithms from one framework into another is the focus of this work. We propose and demonstrate a light-weight interface system to expose parameters across platforms and provide seamless integration. In this initial effort, we focus on four platforms: Medical Image Analysis and Visualization (MIPAV), Java Image Science Toolkit (JIST), command line tools, and 3D Slicer. We explore three case studies: (1) providing a system for MIPAV to expose internal algorithms and utilize these algorithms within JIST, (2) exposing JIST modules through a self-documenting command line interface for inclusion in scripting environments, and (3) detecting and using JIST modules in 3D Slicer. We review the challenges and opportunities for light-weight software integration both within a development language (e.g., Java in MIPAV and JIST) and across languages (e.g., C/C++ in 3D Slicer and shell in command line tools).

  7. Registration of T2-weighted and diffusion-weighted MR images of the prostate: comparison between manual and landmark-based methods

    Science.gov (United States)

    Peng, Yahui; Jiang, Yulei; Soylu, Fatma N.; Tomek, Mark; Sensakovic, William; Oto, Aytekin

    2012-02-01

    Quantitative analysis of multi-parametric magnetic resonance (MR) images of the prostate, including T2-weighted (T2w) and diffusion-weighted (DW) images, requires accurate image registration. We compared two registration methods between T2w and DW images. We collected pre-operative MR images of 124 prostate cancer patients (68 patients scanned with a GE scanner and 56 with Philips scanners). A landmark-based rigid registration was done based on six prostate landmarks in both T2w and DW images identified by a radiologist. Independently, a researcher manually registered the same images. A radiologist visually evaluated the registration results by using a 5-point ordinal scale of 1 (worst) to 5 (best). The Wilcoxon signed-rank test was used to determine whether the radiologist's ratings of the results of the two registration methods were significantly different. Results demonstrated that both methods were accurate: the average ratings were 4.2, 3.3, and 3.8 for GE, Philips, and all images, respectively, for the landmark-based method; and 4.6, 3.7, and 4.2, respectively, for the manual method. The manual registration results were more accurate than the landmark-based registration results (p < 0.0001 for GE, Philips, and all images). Therefore, the manual method produces more accurate registration between T2w and DW images than the landmark-based method.

  8. The Combinatorial Multi-Mode Resource Constrained Multi-Project Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Denis Pinha

    2016-11-01

    Full Text Available This paper presents the formulation and solution of the Combinatorial Multi-Mode Resource Constrained Multi-Project Scheduling Problem. The focus of the proposed method is not on finding a single optimal solution but on presenting multiple feasible solutions, with cost and duration information, to the project manager. The motivation for developing such an approach is due in part to practical situations where the definition of "optimal" changes regularly. The proposed approach empowers the project manager to determine what is optimal, on a given day, under the current constraints, such as changed priorities or a lack of skilled workers. The proposed method utilizes a simulation approach to determine feasible solutions under the current constraints. Resources can be non-consumable, consumable, or doubly constrained. The paper also presents a real-life case study dealing with scheduling of ship repair activities.

  9. Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods

    Directory of Open Access Journals (Sweden)

    David P. Griesheimer

    2017-09-01

    Full Text Available The application of Monte Carlo (MC to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
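
    For context, the weight-window technique itself splits particles whose weight rises above the window and plays Russian roulette on those that fall below it; the sketch below shows this standard mechanic (the survival-weight convention is illustrative), not the paper's analysis of inconsistent source sampling.

```python
import numpy as np

def apply_weight_window(weight, w_low, w_high, rng=None):
    """Apply a weight window to one Monte Carlo particle.

    Returns a list of zero, one, or several surviving weights.  Particles
    above the window are split; particles below it play Russian roulette
    with survival weight at the window midpoint.
    """
    if rng is None:
        rng = np.random.default_rng()
    w_mid = 0.5 * (w_low + w_high)
    if weight > w_high:                      # split
        n = int(np.ceil(weight / w_high))
        return [weight / n] * n
    if weight < w_low:                       # roulette
        if rng.random() < weight / w_mid:
            return [w_mid]
        return []
    return [weight]                          # inside the window: unchanged
```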

  10. [Development method of healthcare information system integration based on business collaboration model].

    Science.gov (United States)

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, the people participating in an integration project usually communicate through free-format documents, which impairs the efficiency and adaptability of integration. This paper proposes a method that utilizes Business Process Model and Notation (BPMN) to model integration requirements and automatically transform them into executable integration configurations. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.

  11. Explicit integration of extremely stiff reaction networks: partial equilibrium methods

    International Nuclear Information System (INIS)

    Guidry, M W; Hix, W R; Billings, J J

    2013-01-01

    In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
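
    The explicit asymptotic approximation these methods build on replaces the forward-Euler update with an algebraically stabilized form; the sketch below shows that basic update for a production-destruction equation, with the partial-equilibrium bookkeeping described in the paper omitted.

```python
import numpy as np

def asymptotic_step(y, f_plus, k, dt):
    """One explicit asymptotic update for a stiff kinetic equation
        dy/dt = F+ - k*y   (production minus destruction),
    using the algebraically stabilized form
        y_new = (y + dt*F+) / (1 + dt*k),
    which remains stable for time steps far larger than 1/k.
    F+ and k are evaluated explicitly at the current abundances.
    """
    y = np.asarray(y, float)
    return (y + dt * np.asarray(f_plus)) / (1.0 + dt * np.asarray(k))
```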

  12. Early failure mechanisms of constrained tripolar acetabular sockets used in revision total hip arthroplasty.

    Science.gov (United States)

    Cooke, Christopher C; Hozack, William; Lavernia, Carlos; Sharkey, Peter; Shastri, Shani; Rothman, Richard H

    2003-10-01

    Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), and cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases. Two cementless shells became loose, and in 1 patient, the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method for treatment during revision total hip arthroplasty but should be used in select cases only.

  13. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
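
    A bare-bones nested sampling loop looks like the sketch below, where the constrained draw is done by naive rejection; the paper's contribution is to replace that step with constrained Hamiltonian Monte Carlo, which is not shown here. All names are illustrative.

```python
import numpy as np

def nested_sampling(log_likelihood, prior_sample, n_live=100, n_iter=1000,
                    rng=np.random.default_rng(0)):
    """Basic nested sampling: at each step the live point with the lowest
    likelihood is replaced by a new prior draw constrained to exceed that
    likelihood (here by simple rejection).  Returns a log-evidence estimate.
    """
    live = np.array([prior_sample(rng) for _ in range(n_live)])
    log_l = np.array([log_likelihood(p) for p in live])
    log_z = -np.inf
    log_x = 0.0                                   # log prior volume remaining
    for i in range(n_iter):
        worst = np.argmin(log_l)
        log_x_new = -(i + 1) / n_live
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new))   # shell width
        log_z = np.logaddexp(log_z, log_l[worst] + log_w)
        while True:                               # constrained prior draw
            cand = prior_sample(rng)
            if log_likelihood(cand) > log_l[worst]:
                break
        live[worst] = cand
        log_l[worst] = log_likelihood(cand)
        log_x = log_x_new
    return log_z
```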

  14. Classical and modern optimization methods in minimum weight design of elastic rotating disk with variable thickness and density

    International Nuclear Information System (INIS)

    Jafari, S.; Hojjati, M.H.; Fathi, A.

    2012-01-01

    Rotating disks mostly work at high angular velocity, which results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower costs. This paper aims at finding optimal disk profiles for minimum weight design using the Karush-Kuhn-Tucker (KKT) method as a classical optimization method, and simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. Semi-analytical solutions for the elastic stress distribution in a rotating annular disk with uniform and variable thickness and density, proposed by the authors in previous works, have been used. The von Mises failure criterion is used as an inequality constraint to ensure that the rotating disk does not fail. The results show that the minimum weight obtained with all three methods is almost identical. The KKT method gives a profile with slightly less weight (6% less than SA and 1% less than PSO), while the PSO and SA methods are easier to implement and provide more flexibility compared with the KKT method. The effectiveness of the proposed optimization methods is shown. - Highlights: ► Karush-Kuhn-Tucker, simulated annealing and particle swarm methods are used. ► The KKT gives slightly less weight (6% less than SA and 1% less than PSO). ► Implementation of PSO and SA methods is easier and provides more flexibility. ► The effectiveness of the proposed optimization methods is shown.
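
    As a schematic of how PSO can be applied to such a constrained minimum-weight design, the sketch below shows a generic particle swarm with a quadratic penalty on constraint violation; the objective and constraint callables (disk weight and a von Mises stress margin) and all parameter values are placeholders, not the paper's formulation.

```python
import numpy as np

def pso_penalized(objective, constraint, bounds, n_particles=30, iters=200,
                  w=0.7, c1=1.5, c2=1.5, penalty=1e6,
                  rng=np.random.default_rng(0)):
    """Particle swarm optimization with a penalty for constraint violation.

    objective(x)  -> value to minimize (e.g. disk weight)
    constraint(x) -> g(x), feasible when g(x) <= 0
                     (e.g. max von Mises stress minus allowable stress)
    bounds        -> (lo, hi) arrays bounding the design variables
    """
    lo, hi = map(np.asarray, bounds)
    dim = lo.size

    def fitness(x):
        return objective(x) + penalty * max(0.0, constraint(x)) ** 2

    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, objective(gbest)
```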

  15. Classical and modern optimization methods in minimum weight design of elastic rotating disk with variable thickness and density

    Energy Technology Data Exchange (ETDEWEB)

    Jafari, S. [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of); Hojjati, M.H., E-mail: Hojjati@nit.ac.ir [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of); Fathi, A. [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of)

    2012-04-15

    Rotating disks mostly work at high angular velocity, which results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower costs. This paper aims at finding optimal disk profiles for minimum weight design using the Karush-Kuhn-Tucker (KKT) method as a classical optimization method, and simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. Semi-analytical solutions for the elastic stress distribution in a rotating annular disk with uniform and variable thickness and density, proposed by the authors in previous works, have been used. The von Mises failure criterion is used as an inequality constraint to ensure that the rotating disk does not fail. The results show that the minimum weight obtained with all three methods is almost identical. The KKT method gives a profile with slightly less weight (6% less than SA and 1% less than PSO), while the PSO and SA methods are easier to implement and provide more flexibility compared with the KKT method. The effectiveness of the proposed optimization methods is shown. - Highlights: ► Karush-Kuhn-Tucker, simulated annealing and particle swarm methods are used. ► The KKT gives slightly less weight (6% less than SA and 1% less than PSO). ► Implementation of PSO and SA methods is easier and provides more flexibility. ► The effectiveness of the proposed optimization methods is shown.

  16. Top-quark mass measurement in the 2.1 fb-1 tight lepton and isolated track sample using neutrino φ weighting method

    International Nuclear Information System (INIS)

    Artikov, A.; Bellettini, G.; Trovato, M.; Budagov, Yu.; Glagolev, V.; Pukhov, O.; Sisakyan, A.; Suslov, I.; Chlachidze, G.; Chokheli, D.; Velev, G.

    2008-01-01

    We report on a measurement of the top quark mass in the tight lepton and isolated track sample using the neutrino φ weighting method. After applying the selection cuts to the data sample, corresponding to an integrated luminosity of 2.1 fb⁻¹, 236 events were obtained. These events were reconstructed according to the tt̄ hypothesis and fitted as a superposition of signal and combined background. For an expected number of background events of 105.8 ± 12.9, we measure the top quark mass to be M_top = 167.7 +4.2/−4.0 (stat.) ± 3.1 (syst.) GeV/c².

  17. An integral nodal variational method for multigroup criticality calculations

    International Nuclear Information System (INIS)

    Lewis, E.E.; Tsoulfanidis, N.

    2003-01-01

    An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)

  18. An age estimation method using brain local features for T1-weighted images.

    Science.gov (United States)

    Kondo, Chihiro; Ito, Koichi; Kai Wu; Sato, Kazunori; Taki, Yasuyuki; Fukuda, Hiroshi; Aoki, Takafumi

    2015-08-01

    Previous statistical analyses using large-scale brain magnetic resonance (MR) image databases have shown that brain tissues undergo age-related morphological changes. This indicates that one can estimate the age of a subject from his/her brain MR image by evaluating the morphological changes associated with healthy aging. This paper proposes an age estimation method using local features extracted from T1-weighted MR images. The brain local features are defined by the volumes of brain tissues parcellated into local regions defined by the automated anatomical labeling atlas. The proposed method selects optimal local regions to improve the performance of age estimation. We evaluate the performance of the proposed method using 1,146 T1-weighted images from a Japanese MR image database and also discuss the medical implications of the selected optimal local regions.

  19. The application of entropy weight topsis method for optimal choice in low radiological decorative building materials

    International Nuclear Information System (INIS)

    Feng Guangwen; Hu Youhua; Liu Qian

    2010-01-01

    In this paper, the principle of the TOPSIS method is introduced and applied to rank the given indexes of glazed brick and granite from decorative building materials of different areas, in order to select the optimal low radiological decorative building materials. First, the entropy weight TOPSIS method was used to process the data on sample numbers and radionuclide content, and different weights were assigned to the different indexes. Then, using SAS software for data analysis and ranking, we found that the optimal low radiological decorative building materials were Sichuan glazed brick and Henan granite. The results show that applying the entropy weight TOPSIS method to the selection of low radiological decorative building materials is feasible, and it also provides a methodological reference. (authors)
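
    Given entropy-derived weights, the TOPSIS ranking step can be sketched as below; this is the generic method (vector normalization, ideal and anti-ideal solutions, closeness coefficient), with the radionuclide-index inputs named only as assumed examples.

```python
import numpy as np

def topsis_rank(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision_matrix: (n_alternatives, n_criteria), e.g. radionuclide
                     contents measured for each building material
    weights:         criteria weights (e.g. from the entropy method)
    benefit:         boolean per criterion, True if larger is better
                     (radioactivity indexes would be False)
    Returns closeness coefficients; larger means closer to the ideal.
    """
    x = np.asarray(decision_matrix, float)
    r = x / np.linalg.norm(x, axis=0)            # vector-normalize columns
    v = r * np.asarray(weights, float)           # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)
```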

  20. An evaluation of the implementation of maternal obesity pathways of care: a mixed methods study with data integration.

    Directory of Open Access Journals (Sweden)

    Nicola Heslehurst

    Full Text Available Maternal obesity has multiple associated risks and requires substantial intervention. This research evaluated the implementation of maternal obesity care pathways from multiple stakeholder perspectives. A simultaneous mixed methods model with data integration was used. Three component studies were given equal priority. 1: Semi-structured qualitative interviews explored obese pregnant women's experiences of being on the pathways. 2: A quantitative and qualitative postal survey explored healthcare professionals' experiences of delivering the pathways. 3: A case note audit quantitatively assessed pathway compliance. Data were integrated using the "following a thread" and convergence coding matrix methods to search for agreement and disagreement between studies. Study 1: Four themes were identified: women's overall (positive and negative) views of the pathways; knowledge and understanding of the pathways; views on clinical and weight management advice and support; and views on the information leaflet. Key results included positive views of receiving additional clinical care, negative experiences of risk communication, and weight management support was considered a priority. Study 2: Healthcare professionals felt the pathways were worthwhile, facilitated good practice, and increased confidence. Training was consistently identified as being required. Healthcare professionals predominantly focussed on women's response to sensitive obesity communication. Study 3: There was good compliance with antenatal clinical interventions. However, there was poor compliance with public health and postnatal interventions. There were some strong areas of agreement between component studies which can inform future development of the pathways. However, disagreement between studies included a lack of shared priorities between healthcare professionals and women, different perspectives on communication issues, and different perspectives on women's prioritisation of weight

  1. An Evaluation of the Implementation of Maternal Obesity Pathways of Care: A Mixed Methods Study with Data Integration

    Science.gov (United States)

    Heslehurst, Nicola; Dinsdale, Sarah; Sedgewick, Gillian; Simpson, Helen; Sen, Seema; Summerbell, Carolyn Dawn; Rankin, Judith

    2015-01-01

    Objectives Maternal obesity has multiple associated risks and requires substantial intervention. This research evaluated the implementation of maternal obesity care pathways from multiple stakeholder perspectives. Study Design A simultaneous mixed methods model with data integration was used. Three component studies were given equal priority. 1: Semi-structured qualitative interviews explored obese pregnant women’s experiences of being on the pathways. 2: A quantitative and qualitative postal survey explored healthcare professionals’ experiences of delivering the pathways. 3: A case note audit quantitatively assessed pathway compliance. Data were integrated using following a thread and convergence coding matrix methods to search for agreement and disagreement between studies. Results Study 1: Four themes were identified: women’s overall (positive and negative) views of the pathways; knowledge and understanding of the pathways; views on clinical and weight management advice and support; and views on the information leaflet. Key results included positive views of receiving additional clinical care, negative experiences of risk communication, and weight management support was considered a priority. Study 2: Healthcare professionals felt the pathways were worthwhile, facilitated good practice, and increased confidence. Training was consistently identified as being required. Healthcare professionals predominantly focussed on women’s response to sensitive obesity communication. Study 3: There was good compliance with antenatal clinical interventions. However, there was poor compliance with public health and postnatal interventions. There were some strong areas of agreement between component studies which can inform future development of the pathways. However, disagreement between studies included a lack of shared priorities between healthcare professionals and women, different perspectives on communication issues, and different perspectives on women

  2. Improved superficial brain hemorrhage visualization in susceptibility weighted images by constrained minimum intensity projection

    Science.gov (United States)

    Castro, Marcelo A.; Pham, Dzung L.; Butman, John

    2016-03-01

    Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum intensity in a given projection within a thick slab, allowing different connectivity patterns to be easily revealed. Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect limits the ability to proper diagnose and follow up patients. In order to overcome this limitation, we developed a method to allow minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based on two brain masks, the largest of which includes extracerebral voxels. The analysis of the rind within both masks containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask. Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection, enabling superior visualization of cortical hemorrhages and vessels.
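
    The core idea, replacing voxels outside the brain mask with the in-mask mean intensity before taking slab-wise minima, can be sketched as follows; the slab size, axis handling, and function names are illustrative rather than taken from the paper.

```python
import numpy as np

def constrained_minip(volume, brain_mask, axis=2, slab=10):
    """Slab-wise minimum intensity projection restricted to a brain mask.

    Voxels outside the mask are replaced by the mean intensity inside it,
    so low-intensity skull cannot mask superficial hemorrhages and vessels
    near the brain surface.
    """
    vol = volume.astype(float).copy()
    vol[~brain_mask] = volume[brain_mask].mean()
    n = vol.shape[axis]
    # minimum over consecutive slabs along the chosen axis
    slabs = [np.take(vol, range(i, min(i + slab, n)), axis=axis).min(axis=axis)
             for i in range(0, n, slab)]
    return np.stack(slabs, axis=axis)
```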

  3. Path-Constrained Motion Planning for Robotics Based on Kinematic Constraints

    NARCIS (Netherlands)

    Dijk, van N.J.M.; Wouw, van de N.; Pancras, W.C.M.; Nijmeijer, H.

    2007-01-01

    Common robotic tracking tasks consist of motions along predefined paths. The design of time-optimal path-constrained trajectories for robotic applications is discussed in this paper. To increase industrial applicability, the proposed method accounts for robot kinematics together with actuator

  4. An inexact fuzzy-chance-constrained air quality management model.

    Science.gov (United States)

    Xu, Ye; Huang, Guohe; Qin, Xiaosheng

    2010-07-01

    Regional air pollution is a major concern for almost every country because it not only directly relates to economic development, but also poses significant threats to the environment and public health. In this study, an inexact fuzzy-chance-constrained air quality management model (IFAMM) was developed for regional air quality management under uncertainty. IFAMM was formulated through integrating interval linear programming (ILP) within a fuzzy-chance-constrained programming (FCCP) framework and could deal with uncertainties expressed as not only possibilistic distributions but also discrete intervals in air quality management systems. Moreover, the constraints with fuzzy variables could be satisfied at different confidence levels such that various solutions with different risk and cost considerations could be obtained. The developed model was applied to a hypothetical case of regional air quality management. Six abatement technologies and sulfur dioxide (SO2) emission trading under uncertainty were taken into consideration. The results demonstrated that IFAMM could help decision-makers generate cost-effective air quality management patterns, gain in-depth insights into the effects of the uncertainties, and analyze tradeoffs between system economy and reliability. The results also implied that the trading scheme could achieve a lower total abatement cost than a nontrading one.
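
    The coupling of interval coefficients with a confidence level on a fuzzy constraint can be conveyed by a toy sketch: the cost coefficients are given as intervals and the emission-reduction requirement tightens as the confidence level rises. The technologies, numbers, and linear membership rule below are illustrative assumptions, not the IFAMM formulation itself.

      import numpy as np
      from scipy.optimize import linprog

      c_lo = np.array([10.0, 14.0])   # lower bounds of the interval unit costs
      c_hi = np.array([12.0, 18.0])   # upper bounds of the interval unit costs
      r = np.array([0.8, 0.6])        # SO2 removed per unit of each technology

      def required_reduction(alpha, base=100.0, spread=20.0):
          # A higher confidence level alpha tightens the fuzzy requirement.
          return base + alpha * spread

      def solve(c, alpha):
          res = linprog(c, A_ub=[-r], b_ub=[-required_reduction(alpha)],
                        bounds=[(0, None), (0, None)])
          return res.fun

      for alpha in (0.6, 0.9):
          print(f"alpha={alpha}: cost interval "
                f"[{solve(c_lo, alpha):.1f}, {solve(c_hi, alpha):.1f}]")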

  5. On weighted dyadic Carleson's inequalities

    Directory of Open Access Journals (Sweden)

    Tachizawa K

    2001-01-01

    Full Text Available We give an alternate proof of weighted dyadic Carleson's inequalities which are essentially proved by Sawyer and Wheeden. We use the Bellman function approach of Nazarov and Treil. As an application we give an alternate proof of weighted inequalities for dyadic fractional maximal operators. A result on weighted inequalities for fractional integral operators is given.
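
    For orientation, the weighted dyadic Carleson embedding usually takes the following form; this is the standard textbook statement and may differ in detail from the exact inequalities treated in the paper. For a weight $w$, an exponent $1<p<\infty$, and nonnegative numbers $\{\lambda_Q\}$ indexed by dyadic cubes, the bound
    $$\sum_{Q}\lambda_Q\left(\frac{1}{w(Q)}\int_Q f\,w\,dx\right)^{p}\;\le\; C\int |f|^{p}\,w\,dx \qquad\text{for all } f\ge 0$$
    holds if and only if the Carleson condition $\sum_{Q\subseteq R}\lambda_Q \le C'\,w(R)$ is satisfied for every dyadic cube $R$.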

  6. Integrating Weight Bias Awareness and Mental Health Promotion Into Obesity Prevention Delivery: A Public Health Pilot Study

    OpenAIRE

    McVey, Gail L.; Walker, Kathryn S.; Beyers, Joanne; Harrison, Heather L.; Simkins, Sari W.; Russell-Mayhew, Shelly

    2013-01-01

    Introduction Promoting healthy weight is a top priority in Canada. Recent federal guidelines call for sustained, multisectoral partnerships that address childhood obesity on multiple levels. Current healthy weight messaging does not fully acknowledge the influence of social determinants of health on weight. Methods An interactive workshop was developed and implemented by a team of academic researchers and health promoters from the psychology and public health disciplines to raise awareness ab...

  7. A dynamic integrated fault diagnosis method for power transformers.

    Science.gov (United States)

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
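
    A minimal sketch of the Bayesian evidence-update step behind such a diagnosis model is given below; the failure modes, symptoms, and probability values are illustrative assumptions, not the paper's data, and the naive-Bayes factorization is a simplification of a full Bayesian network.

      import numpy as np

      modes = ["winding fault", "overheating", "partial discharge"]
      prior = np.array([0.2, 0.5, 0.3])                # assumed prior over failure modes

      p_gas_given_mode  = np.array([0.7, 0.9, 0.4])    # P(dissolved-gas anomaly | mode)
      p_temp_given_mode = np.array([0.3, 0.8, 0.2])    # P(abnormal oil temperature | mode)

      def posterior(evidence):
          """Multiply the prior by each observed likelihood vector and renormalize."""
          post = prior.copy()
          for like in evidence:
              post = post * like
          return post / post.sum()

      # Step 1: only the gas anomaly has been observed.
      print(dict(zip(modes, posterior([p_gas_given_mode]).round(3))))
      # Step 2: the temperature evidence arrives and narrows the diagnosis.
      print(dict(zip(modes, posterior([p_gas_given_mode, p_temp_given_mode]).round(3))))

    In the dynamic mechanism described in the abstract, the next diagnostic test would be the one expected to sharpen this posterior the most; the sketch shows only the update step.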

  8. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    Science.gov (United States)

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841

  9. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood, at a fraction of the computational effort
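
    The abstract does not reproduce the algorithm, but the flavour of a regularized, non-negativity-constrained least-squares deconvolution can be shown in a few lines; the Gaussian blur operator, noise level, and regularization weight are assumptions for illustration, and this is not the authors' fast non-iterative scheme.

      import numpy as np
      from scipy.linalg import toeplitz
      from scipy.optimize import nnls

      # Hypothetical 1-D deconvolution: blurred, noisy observation of a sparse signal.
      n = 64
      x_true = np.zeros(n); x_true[[20, 40]] = [1.0, 0.6]
      kernel = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)
      A = toeplitz(kernel)                        # convolution operator
      y = A @ x_true + 0.01 * np.random.randn(n)

      # minimize ||A x - y||^2 + lam ||x||^2  subject to  x >= 0
      lam = 0.1
      A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
      y_aug = np.concatenate([y, np.zeros(n)])
      x_hat, _ = nnls(A_aug, y_aug)               # constrained estimate, no negative lobes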

  10. The goldstino brane, the constrained superfields and matter in N=1 supergravity

    International Nuclear Information System (INIS)

    Bandos, Igor; Heller, Markus; Kuzenko, Sergei M.; Martucci, Luca; Sorokin, Dmitri

    2016-01-01

    We show that different (brane and constrained superfield) descriptions for the Volkov-Akulov goldstino coupled to N=1, D=4 supergravity with matter produce similar wide classes of models with spontaneously broken local supersymmetry and discuss the relation between the different formulations. As with the formulations with irreducible constrained superfields, the geometric goldstino brane approach has the advantage of being manifestly off-shell supersymmetric without the need to introduce auxiliary fields. It provides an explicit solution of the nilpotent superfield constraints and avoids issues with non-Gaussian integration of auxiliary fields. We describe general couplings of the supersymmetry breaking sector, including the goldstino and other non-supersymmetric matter, to supergravity and matter supermultiplets. Among various examples, we discuss a goldstino brane contribution to the gravitino mass term and the supersymmetrization of the anti-D3-brane contribution to the effective theory of type IIB warped flux compactifications.

  11. The goldstino brane, the constrained superfields and matter in N=1 supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Bandos, Igor [Department of Theoretical Physics, University of the Basque Country UPV/EHU, P.O. Box 644, 48080 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, 48011, Bilbao (Spain); Heller, Markus [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università degli Studi di Padova, Via Marzolo 8, 35131 Padova (Italy); Institut für Theoretische Physik, Ruprecht-Karls-Universität, Philosophenweg 19, 69120 Heidelberg (Germany); Kuzenko, Sergei M. [School of Physics M013, The University of Western Australia, 35 Stirling Highway, Crawley W.A. 6009 (Australia); Martucci, Luca [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università degli Studi di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN - Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Sorokin, Dmitri [INFN - Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università degli Studi di Padova, Via Marzolo 8, 35131 Padova (Italy)

    2016-11-21

    We show that different (brane and constrained superfield) descriptions for the Volkov-Akulov goldstino coupled to N=1, D=4 supergravity with matter produce similar wide classes of models with spontaneously broken local supersymmetry and discuss the relation between the different formulations. As with the formulations with irreducible constrained superfields, the geometric goldstino brane approach has the advantage of being manifestly off-shell supersymmetric without the need to introduce auxiliary fields. It provides an explicit solution of the nilpotent superfield constraints and avoids issues with non-Gaussian integration of auxiliary fields. We describe general couplings of the supersymmetry breaking sector, including the goldstino and other non-supersymmetric matter, to supergravity and matter supermultiplets. Among various examples, we discuss a goldstino brane contribution to the gravitino mass term and the supersymmetrization of the anti-D3-brane contribution to the effective theory of type IIB warped flux compactifications.

  12. A Gradient Weighted Moving Finite-Element Method with Polynomial Approximation of Any Degree

    Directory of Open Access Journals (Sweden)

    Ali R. Soheili

    2009-01-01

    Full Text Available A gradient weighted moving finite element method (GWMFE) based on piecewise polynomials of any degree is developed to solve time-dependent problems in two space dimensions. Numerical experiments are employed to test the accuracy and efficiency of the proposed method with the nonlinear Burgers equation.

  13. 14th International Conference on Integral Methods in Science and Engineering

    CERN Document Server

    Riva, Matteo; Lamberti, Pier; Musolino, Paolo

    2017-01-01

    This contributed volume contains a collection of articles on the most recent advances in integral methods.  The first of two volumes, this work focuses on the construction of theoretical integral methods. Written by internationally recognized researchers, the chapters in this book are based on talks given at the Fourteenth International Conference on Integral Methods in Science and Engineering, held July 25-29, 2016, in Padova, Italy. A broad range of topics is addressed, such as: • Integral equations • Homogenization • Duality methods • Optimal design • Conformal techniques This collection will be of interest to researchers in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students in these disciplines, and to other professionals who use integration as an essential tool in their work.

  14. Volume-weighted particle-tracking method for solute-transport modeling; Implementation in MODFLOW–GWT

    Science.gov (United States)

    Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.

    2018-02-16

    In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially—using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to have to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
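
    The weight-update rule at sources and sinks can be sketched for a single cell; the variable names, the proportional distribution rule, and the numbers are assumptions for illustration and do not reproduce MODFLOW-GWT code.

      import numpy as np

      # Particles in one cell: each carries a water volume (weight) and a concentration,
      # so the solute mass carried by a particle is weight * concentration.
      weights = np.array([2.0, 3.0, 5.0])
      conc    = np.array([1.0, 0.5, 0.2])

      def apply_sink(weights, q_out):
          """Remove water volume q_out by scaling weights instead of deleting particles."""
          return weights * (1.0 - q_out / weights.sum())

      def apply_weak_source(weights, conc, q_in, c_in):
          """Distribute the added volume and solute mass over the existing particles."""
          share = weights / weights.sum()
          mass = weights * conc + share * q_in * c_in
          weights = weights + share * q_in
          return weights, mass / weights

      weights = apply_sink(weights, q_out=1.0)
      weights, conc = apply_weak_source(weights, conc, q_in=2.0, c_in=0.8)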

  15. Cultural adaptation and translation of measures: an integrated method.

    Science.gov (United States)

    Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen

    2010-04-01

    Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.

  16. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    Science.gov (United States)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

    The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaic with UAV (Unmanned Aircraft Vehicle) imagery is proposed. The initial seam-line network is firstly generated by standard Voronoi Diagram algorithm; an edge diagram is generated based on DSM (Digital Surface Model) data; the vertices (conjunction nodes of seam-lines) of the initial network are relocated if they are on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets. Two quantitative terms are introduced to evaluate the results of the proposed method. Preliminary results show that the method is suitable for regular and irregular aligned UAV images for most terrain types (flat or mountainous areas), and is better than the state-of-the-art method in both quality and efficiency based on the test datasets.
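
    The refinement step relies on weighted A*, i.e. A* with the heuristic inflated by a factor w > 1 so that f = g + w*h, trading strict optimality for speed. A generic grid version is sketched below; the random per-pixel cost stands in for the edge-diagram cost and the value of w is an assumption, so this is not the authors' seam-line implementation.

      import heapq
      import numpy as np

      def weighted_astar(cost, start, goal, w=1.5):
          """Weighted A* on a 4-connected grid; w = 1 recovers plain A*."""
          h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
          open_set = [(w * h(start), 0.0, start)]
          best = {start: 0.0}
          while open_set:
              _, g, node = heapq.heappop(open_set)
              if node == goal:
                  return g
              for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nb = (node[0] + dy, node[1] + dx)
                  if 0 <= nb[0] < cost.shape[0] and 0 <= nb[1] < cost.shape[1]:
                      g2 = g + cost[nb]
                      if g2 < best.get(nb, np.inf):
                          best[nb] = g2
                          heapq.heappush(open_set, (g2 + w * h(nb), g2, nb))
          return np.inf

      cost = np.random.rand(50, 50)            # hypothetical cost derived from an edge diagram
      print(weighted_astar(cost, (0, 0), (49, 49)))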

  17. Does the Method of Weight Loss Effect Long-Term Changes in Weight, Body Composition or Chronic Disease Risk Factors in Overweight or Obese Adults? A Systematic Review

    Science.gov (United States)

    Washburn, Richard A.; Szabo, Amanda N.; Lambourne, Kate; Willis, Erik A.; Ptomey, Lauren T.; Honas, Jeffery J.; Herrmann, Stephen D.; Donnelly, Joseph E.

    2014-01-01

    Background Differences in biological changes from weight loss by energy restriction and/or exercise may be associated with differences in long-term weight loss/regain. Objective To assess the effect of weight loss method on long-term changes in weight, body composition and chronic disease risk factors. Data Sources PubMed and Embase were searched (January 1990-October 2013) for studies with data on the effect of energy restriction, exercise (aerobic and resistance) on long-term weight loss. Twenty articles were included in this review. Study Eligibility Criteria Primary source, peer reviewed randomized trials published in English with an active weight loss period of >6 months, or active weight loss with a follow-up period of any duration, conducted in overweight or obese adults were included. Study Appraisal and Synthesis Methods Considerable heterogeneity across trials existed for important study parameters, therefore a meta-analysis was considered inappropriate. Results were synthesized and grouped by comparisons (e.g. diet vs. aerobic exercise, diet vs. diet + aerobic exercise etc.) and study design (long-term or weight loss/follow-up). Results Forty percent of trials reported significantly greater long-term weight loss with diet compared with aerobic exercise, while results for differences in weight regain were inconclusive. Diet+aerobic exercise resulted in significantly greater weight loss than diet alone in 50% of trials. However, weight regain (∼55% of loss) was similar in diet and diet+aerobic exercise groups. Fat-free mass tended to be preserved when interventions included exercise. PMID:25333384

  18. Density and dry weight of pigweed by various weed control methods ...

    African Journals Online (AJOL)

    This study evaluates the effects of various weed control methods and nitrogen fertilizer sources on the density and dry weight of pigweed and on forage corn performance, in a factorial experiment laid out in a randomized complete block design with three replications at the research farm of Ferdowsi University of Mashhad in 2014. The test treatments include weed ...

  19. Development and validation of a method to estimate body weight in ...

    African Journals Online (AJOL)

    Mid-arm circumference (MAC) has previously been used as a surrogate indicator of habitus, and the objective of this study was to determine whether MAC cut-off values could be used to predict habitus scores (HSs) to create an objective and standardised weight estimation methodology, the PAWPER XL-MAC method.

  20. A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery

    Science.gov (United States)

    Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng

    2017-12-01

    A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaic for UAV (Unmanned Aircraft Vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is firstly generated by the standard Voronoi Diagram algorithm; an edge diagram is then detected based on DSM (Digital Surface Model) data; the vertices (conjunction nodes) of the initial network are relocated since some of them are on high objects (buildings, trees and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm based on the edge diagram and the relocated vertices. The method was tested with two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and outperforms the state-of-the-art methods on these datasets.

  1. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel, for computer science education. Different from other plagiarism detection methods, WASTK takes some aspects other than the similarity between programs into account. WASTK firstly transfers the source code of a program to an abstract syntax tree and then gets the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency in the field of information retrieval is applied. Each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.

  2. ARE METHODS USED TO INTEGRATE STANDARDIZED MANAGEMENT SYSTEMS A CONDITIONING FACTOR OF THE LEVEL OF INTEGRATION? AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Merce Bernardo

    2011-09-01

    Full Text Available Organizations are increasingly implementing multiple Management System Standards (MSSs) and considering managing the related Management Systems (MSs) as a single system. The aim of this paper is to analyze if methods used to integrate standardized MSs condition the level of integration of those MSs. A descriptive methodology has been applied to 343 Spanish organizations registered to, at least, ISO 9001 and ISO 14001. Seven groups of these organizations using different combinations of methods have been analyzed. Results show that these organizations have a high level of integration of their MSs. The most common method used was the process map. Organizations using a combination of different methods achieve higher levels of integration than those using a single method. However, no evidence has been found to confirm the relationship between the method used and the integration level achieved.

  3. Performance analysis of smart laminated composite plate integrated with distributed AFC material undergoing geometrically nonlinear transient vibrations

    Science.gov (United States)

    Shivakumar, J.; Ashok, M. H.; Khadakbhavi, Vishwanath; Pujari, Sanjay; Nandurkar, Santosh

    2018-02-01

    The present work focuses on geometrically nonlinear transient analysis of laminated smart composite plates integrated with patches of Active fiber composites (AFC) using Active constrained layer damping (ACLD) as the distributed actuators. The analysis has been carried out using a generalised energy-based finite element model. The coupled electromechanical finite element model is derived using Von Karman type nonlinear strain displacement relations and a first-order shear deformation theory (FSDT). Eight-node iso-parametric serendipity elements are used for discretization of the overall plate integrated with AFC patch material. The viscoelastic constrained layer is modelled using the GHM method. The numerical results show the improvement in the active damping characteristics of the laminated composite plates over the passive damping for suppressing the geometrically nonlinear transient vibrations of laminated composite plates with AFC as patch material.

  4. A decision support system for the promotion of Employee in Plaza Asia Method Using Weighted Product

    Directory of Open Access Journals (Sweden)

    Egi Badar Sambani

    2016-06-01

    Full Text Available Decision-making in a company is important because decisions taken by managers are the result of final deliberation to be carried out by employees. Plaza Asia is the largest mall in the eastern Priangan region, and its employee promotion assessment process includes attendance, productivity (work), integrity (nature), skill (ability) and loyalty (faithfulness). The Weighted Product (WP) method can help in decision-making to determine the promotion of employees in the company, and it makes the appraisal process more efficient so the store manager can determine employee promotions quickly. By using a decision support system that has a database, employee data can be stored in the database, so input errors can be corrected without having to re-enter the data. The decision support system will address the issues raised at Plaza Asia, so the promotion process will be faster.
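
    The Weighted Product calculation itself is compact: each alternative gets the score S_i = prod_j x_ij^w_j, which is then normalized into a preference value. The candidates, criteria weights, and scores below are illustrative assumptions rather than the Plaza Asia data.

      import numpy as np

      # Rows: candidate employees; columns: attendance, productivity, integrity, skill, loyalty.
      scores  = np.array([[4, 5, 3, 4, 5],
                          [5, 3, 4, 4, 4],
                          [3, 4, 5, 5, 3]], dtype=float)
      weights = np.array([0.20, 0.25, 0.20, 0.20, 0.15])   # benefit criteria, summing to 1

      S = np.prod(scores ** weights, axis=1)   # Weighted Product score per candidate
      V = S / S.sum()                          # normalized preference values
      print("preference values:", V.round(3), "best candidate:", int(V.argmax()))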

  5. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Differently from existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are firstly presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.

  6. Investigating Environmentally Sustainable Transport Based on DALY weights and SIR Method

    Directory of Open Access Journals (Sweden)

    Hossein Nezamianpour Jahromi

    2012-09-01

    Full Text Available Accessibility is one of the main causes of well-being and growth in contemporary societies. Transportation is the backbone of accessibility systems that lead to the growth of economic and social networks and spatial dispersion of activities. Unfortunately, the adverse effects of transportation have a great impact on the natural and human environment. Since transportation is associated with fossil fuel combustion, it results in emissions of pollutants that cause damage to human health. To save the global eco-system, sustainable development has become an international priority. Dealing with the sustainability of transportation systems is an important issue, as testified by a growing number of initiatives framed to define and measure sustainability in transportation planning as well as infrastructure planning. The capability of environmental assessment as a sustainability instrument is well known. This study proposes a new approach to rank countries based on environmental sustainability development, applying disability adjusted life year (DALY) weights for transportation sector emissions. DALY weights consider the actual impacts of pollutants on human health. By employing the SIR (superiority and inferiority ranking) method for multiple criteria decision making, the sustainability ranking of a number of European countries is presented. Three ranking procedures derived from the SIR method are discussed, and the results and the correlations among them are demonstrated.

  7. Integrated Sachs-Wolfe effect versus redshift test for the cosmological parameters

    Science.gov (United States)

    Kantowski, R.; Chen, B.; Dai, X.

    2015-04-01

    We describe a method using the integrated Sachs-Wolfe (ISW) effect caused by individual inhomogeneities to determine the cosmological parameters H0, Ωm , and ΩΛ, etc. This ISW-redshift test requires detailed knowledge of the internal kinematics of a set of individual density perturbations, e.g., galaxy clusters and/or cosmic voids, in particular their density and velocity profiles, and their mass accretion rates. It assumes the density perturbations are isolated and embedded (equivalently compensated) and makes use of the newly found relation between the ISW temperature perturbation of the cosmic microwave background (CMB) and the Fermat potential of the lens. Given measurements of the amplitudes of the temperature variations in the CMB caused by such clusters or voids at various redshifts and estimates of their angular sizes or masses, one can constrain the cosmological parameters. More realistically, the converse is more likely, i.e., if the background cosmology is sufficiently constrained, measurement of ISW profiles of clusters and voids (e.g., hot and cold spots and rings) can constrain dynamical properties of the dark matter, including accretion, associated with such lenses and thus constrain the evolution of these objects with redshift.

  8. IMPLEMENTATION OF SIMPLE ADDITIVE WEIGHTING (SAW METHODE IN DETERMINING HIGH SCHOOL STUDENT’S INTEREST

    Directory of Open Access Journals (Sweden)

    Prind Triajeng Pungkasanti

    2017-09-01

    Full Text Available The Ministry of Research, Technology, and Higher Education of the Republic of Indonesia has set a regulation about the curriculum applied in education, named Kurikulum 2013. One of its subsections regulates the requirements for majoring in high school: under Kurikulum 2013, students determine their major when they are in the 10th grade. The purpose of majoring is to allow children to develop according to their skills and interests, because previously majors were assigned based only on scores obtained. The main problem is that the majoring requirements considered are the admission test score and the Junior High School National Exam score; these two scores are not sufficient to determine a student's major, so an academic aptitude test score is also required. In terms of weighting, the school has not imposed a weighting system, so the score used is simply the average of the admission test score and the national exam score. Based on this issue, a solution using a suitable method is required. The method used in this research is Simple Additive Weighting (SAW), which computes the weighted sum of performance ratings for each alternative over all attributes. This research provides information about which prospective students are suitable to enter the science major or the social major, so the results can be used to support the school's decisions.
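
    The SAW computation referred to above can be written in a few lines; the criteria weights and the two example students are assumptions for illustration only.

      import numpy as np

      # Columns: admission test, national exam, academic aptitude test (all benefit criteria).
      X = np.array([[80.0, 75.0, 90.0],
                    [85.0, 88.0, 70.0]])
      w = np.array([0.3, 0.3, 0.4])          # assumed criteria weights

      R = X / X.max(axis=0)                  # normalize each criterion by its column maximum
      scores = R @ w                         # weighted sum of normalized performance ratings
      print("SAW scores:", scores.round(3))  # higher score suggests a stronger fit for the major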

  9. Application of heat-balance integral method to conjugate thermal explosion

    Directory of Open Access Journals (Sweden)

    Novozhilov Vasily

    2009-01-01

    Full Text Available Conjugate thermal explosion is an extension of the classical theory, proposed and studied recently by the author. The paper reports the application of the heat-balance integral method for developing phase portraits for systems undergoing conjugate thermal explosion. The heat-balance integral method is used as an averaging method, reducing a partial differential equation problem to a set of first-order ordinary differential equations. The latter reduced problem allows a natural interpretation in an appropriately chosen phase space. It is shown that, with the help of the heat-balance integral technique, the conjugate thermal explosion problem can be described with good accuracy by a set of non-linear first-order differential equations involving the complex error function. Phase trajectories are presented for typical regimes emerging in conjugate thermal explosion. Use of the heat-balance integral as a spatial averaging method allows an efficient description of the system evolution to be developed.
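
    As a reminder of how the averaging works, the classical heat-balance integral reduction for the one-dimensional heat equation is recalled below; the conjugate problem in the paper leads to a more involved set of first-order equations, so this is only the prototype calculation. For $u_t = \alpha u_{xx}$ in a semi-infinite solid with fixed surface value $U_s$ and an assumed quadratic profile $u = U_s(1 - x/\delta)^2$ on $0 \le x \le \delta(t)$, integrating the equation over the thermal layer gives
    $$\frac{d}{dt}\int_0^{\delta} u\,dx = -\alpha\,u_x\big|_{x=0} \;\;\Longrightarrow\;\; \frac{U_s}{3}\,\frac{d\delta}{dt} = \frac{2\alpha U_s}{\delta} \;\;\Longrightarrow\;\; \delta(t) = \sqrt{12\,\alpha\,t},$$
    so the partial differential equation collapses to a single ordinary differential equation for the penetration depth $\delta(t)$.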

  10. LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.

    Science.gov (United States)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2015-03-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
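
    The core integration step, pooling voxel-wise features from several modalities into one random forest and feeding the estimated probability maps back as extra features, can be sketched as follows. The synthetic arrays stand in for T1, T2 and FA intensities, and only a single refinement round is shown; this is an assumption-laden sketch, not the LINKS pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      n_vox = 5000
      t1, t2, fa = rng.normal(size=(3, n_vox))          # stand-ins for per-voxel intensities
      labels = rng.integers(0, 3, size=n_vox)           # 0 = CSF, 1 = GM, 2 = WM (toy labels)

      # Round 1: classify from the multi-modality intensities only.
      features = np.column_stack([t1, t2, fa])
      rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
      prob_maps = rf.predict_proba(features)            # estimated tissue probability maps

      # Round 2: append the probability maps as additional features and retrain.
      features2 = np.column_stack([features, prob_maps])
      rf2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(features2, labels)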

  11. An integrated lean-methods approach to hospital facilities redesign.

    Science.gov (United States)

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  12. Method of manufacturing Josephson junction integrated circuits

    International Nuclear Information System (INIS)

    Jillie, D.W. Jr.; Smith, L.N.

    1985-01-01

    Josephson junction integrated circuits of the current injection type and magnetically controlled type utilize a superconductive layer that forms both the Josephson junction electrodes for the Josephson junction devices on the integrated circuit and the ground plane for the integrated circuit. Large-area Josephson junctions are utilized for effecting contact to lower superconductive layers, and islands are formed in superconductive layers to provide isolation between the ground-plane function and the Josephson junction electrode function as well as to effect crossovers. A superconductor-barrier-superconductor trilayer patterned by local anodization is also utilized with additional layers formed thereover. Methods of manufacturing the embodiments of the invention are disclosed.

  13. Is the Evaluation of the Students' Values Possible? An Integrated Approach to Determining the Weights of Students' Personal Goals Using Multiple-Criteria Methods

    Science.gov (United States)

    Dadelo, Stanislav; Turskis, Zenonas; Zavadskas, Edmundas Kazimieras; Kacerauskas, Tomas; Dadeliene, Ruta

    2016-01-01

    To maximize the effectiveness of a decision, it is necessary to support decision-making with integrated methods. It can be assumed that subjective evaluation (considering only absolute values) is only remotely connected with the evaluation of real processes. Therefore, relying solely on these values in process management decision-making would be a…

  14. Criteria for quantitative and qualitative data integration: mixed-methods research methodology.

    Science.gov (United States)

    Lee, Seonah; Smith, Carrol A M

    2012-05-01

    Many studies have emphasized the need and importance of a mixed-methods approach for evaluation of clinical information systems. However, those studies had no criteria to guide integration of multiple data sets. Integrating different data sets serves to actualize the paradigm that a mixed-methods approach argues; thus, we require criteria that provide the right direction to integrate quantitative and qualitative data. The first author used a set of criteria organized from a literature search for integration of multiple data sets from mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data of a previous mixed methods study, integration of quantitative and qualitative data was achieved in a systematic manner. It helped us obtain a better organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.

  15. A hybrid guided neighborhood search for the disjunctively constrained knapsack problem

    Directory of Open Access Journals (Sweden)

    Mhand Hifi

    2015-12-01

    Full Text Available In this paper, we investigate the use of a hybrid guided neighborhood search for solving the disjunctively constrained knapsack problem. The studied problem may be viewed as a combination of two NP-hard combinatorial optimization problems: the weighted independent set and the classical binary knapsack. The proposed algorithm is a hybrid approach that combines both deterministic and random local searches. The deterministic local search is based on a descent method, where both building and exploring procedures are alternately used for improving the solution at hand. In order to escape from local optima, a random local search strategy is introduced, which is based on a modified ant colony optimization system. During the search process, the ant colony optimization system tries to diversify and to enhance the solutions using information collected from previous iterations. Finally, the proposed algorithm is computationally analyzed on a set of benchmark instances available in the literature. The provided results are compared to those obtained by both the Cplex solver and a recent algorithm from the literature. The computational part shows that the obtained results improve most existing solution values.
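
    For background, the problem combines a knapsack capacity with pairwise conflicts between items. The sketch below is only a naive greedy baseline that respects both restrictions, not the hybrid guided neighborhood search of the paper; the item data are assumptions.

      # Disjunctively constrained knapsack: maximize value, respect the capacity,
      # and never select two items joined by a conflict edge.
      values    = [10, 8, 7, 6, 4]
      weights   = [5, 4, 3, 3, 2]
      capacity  = 9
      conflicts = {(0, 1), (2, 4)}

      def conflict_free(i, chosen):
          return all((i, j) not in conflicts and (j, i) not in conflicts for j in chosen)

      # Greedy by value/weight ratio: a simple feasible starting solution.
      order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
      chosen, load = [], 0
      for i in order:
          if load + weights[i] <= capacity and conflict_free(i, chosen):
              chosen.append(i)
              load += weights[i]
      print(chosen, sum(values[i] for i in chosen))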

  16. WE-AB-209-12: Quasi Constrained Multi-Criteria Optimization for Automated Radiation Therapy Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Watkins, W.T.; Siebers, J.V. [University of Virginia, Charlottesville, VA (United States)

    2016-06-15

    Purpose: To introduce quasi-constrained Multi-Criteria Optimization (qcMCO) for unsupervised radiation therapy optimization which generates alternative patient-specific plans emphasizing dosimetric tradeoffs and conformance to clinical constraints for multiple delivery techniques. Methods: For N Organs At Risk (OARs) and M delivery techniques, qcMCO generates M(N+1) alternative treatment plans per patient. Objective weight variations for OARs and targets are used to generate alternative qcMCO plans. For 30 locally advanced lung cancer patients, qcMCO plans were generated for dosimetric tradeoffs to four OARs: each lung, heart, and esophagus (N=4) and 4 delivery techniques (simple 4-field arrangements, 9-field coplanar IMRT, 27-field non-coplanar IMRT, and non-coplanar Arc IMRT). Quasi-constrained objectives included target prescription isodose to 95% (PTV-D95), maximum PTV dose (PTV-Dmax)< 110% of prescription, and spinal cord Dmax<45 Gy. The algorithm’s ability to meet these constraints while simultaneously revealing dosimetric tradeoffs was investigated. Statistically significant dosimetric tradeoffs were defined such that the coefficient of determination between dosimetric indices which varied by at least 5 Gy between different plans was >0.8. Results: The qcMCO plans varied mean dose by >5 Gy to ipsilateral lung for 24/30 patients, contralateral lung for 29/30 patients, esophagus for 29/30 patients, and heart for 19/30 patients. In the 600 plans computed without human interaction, average PTV-D95=67.4±3.3 Gy, PTV-Dmax=79.2±5.3 Gy, and spinal cord Dmax was >45 Gy in 93 plans (>50 Gy in 2/600 plans). Statistically significant dosimetric tradeoffs were evident in 19/30 plans, including multiple tradeoffs of at least 5 Gy between multiple OARs in 7/30 cases. The most common statistically significant tradeoff was increasing PTV-Dmax to reduce OAR dose (15/30 patients). Conclusion: The qcMCO method can conform to quasi-constrained objectives while revealing

  17. WE-AB-209-12: Quasi Constrained Multi-Criteria Optimization for Automated Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Watkins, W.T.; Siebers, J.V.

    2016-01-01

    Purpose: To introduce quasi-constrained Multi-Criteria Optimization (qcMCO) for unsupervised radiation therapy optimization which generates alternative patient-specific plans emphasizing dosimetric tradeoffs and conformance to clinical constraints for multiple delivery techniques. Methods: For N Organs At Risk (OARs) and M delivery techniques, qcMCO generates M(N+1) alternative treatment plans per patient. Objective weight variations for OARs and targets are used to generate alternative qcMCO plans. For 30 locally advanced lung cancer patients, qcMCO plans were generated for dosimetric tradeoffs to four OARs: each lung, heart, and esophagus (N=4) and 4 delivery techniques (simple 4-field arrangements, 9-field coplanar IMRT, 27-field non-coplanar IMRT, and non-coplanar Arc IMRT). Quasi-constrained objectives included target prescription isodose to 95% (PTV-D95), maximum PTV dose (PTV-Dmax)< 110% of prescription, and spinal cord Dmax<45 Gy. The algorithm’s ability to meet these constraints while simultaneously revealing dosimetric tradeoffs was investigated. Statistically significant dosimetric tradeoffs were defined such that the coefficient of determination between dosimetric indices which varied by at least 5 Gy between different plans was >0.8. Results: The qcMCO plans varied mean dose by >5 Gy to ipsilateral lung for 24/30 patients, contralateral lung for 29/30 patients, esophagus for 29/30 patients, and heart for 19/30 patients. In the 600 plans computed without human interaction, average PTV-D95=67.4±3.3 Gy, PTV-Dmax=79.2±5.3 Gy, and spinal cord Dmax was >45 Gy in 93 plans (>50 Gy in 2/600 plans). Statistically significant dosimetric tradeoffs were evident in 19/30 plans, including multiple tradeoffs of at least 5 Gy between multiple OARs in 7/30 cases. The most common statistically significant tradeoff was increasing PTV-Dmax to reduce OAR dose (15/30 patients). Conclusion: The qcMCO method can conform to quasi-constrained objectives while revealing

  18. Integration of Active and Passive Safety Technologies--A Method to Study and Estimate Field Capability.

    Science.gov (United States)

    Hu, Jingwen; Flannagan, Carol A; Bao, Shan; McCoy, Robert W; Siasoco, Kevin M; Barbat, Saeed

    2015-11-01

    The objective of this study is to develop a method that uses a combination of field data analysis, naturalistic driving data analysis, and computational simulations to explore the potential injury reduction capabilities of integrating passive and active safety systems in frontal impact conditions. For the purposes of this study, the active safety system is actually a driver assist (DA) feature that has the potential to reduce delta-V prior to a crash, in frontal or other crash scenarios. A field data analysis was first conducted to estimate the delta-V distribution change based on an assumption of 20% crash avoidance resulting from a pre-crash braking DA feature. Analysis of changes in driver head location during 470 hard braking events in a naturalistic driving study found that drivers' head positions were mostly in the center position before the braking onset, while the percentage of time drivers leaning forward or backward increased significantly after the braking onset. Parametric studies with a total of 4800 MADYMO simulations showed that both delta-V and occupant pre-crash posture had pronounced effects on occupant injury risks and on the optimal restraint designs. By combining the results for the delta-V and head position distribution changes, a weighted average of injury risk reduction of 17% and 48% was predicted by the 50th percentile Anthropomorphic Test Device (ATD) model and human body model, respectively, with the assumption that the restraint system can adapt to the specific delta-V and pre-crash posture. This study demonstrated the potential for further reducing occupant injury risk in frontal crashes by the integration of a passive safety system with a DA feature. Future analyses considering more vehicle models, various crash conditions, and variations of occupant characteristics, such as age, gender, weight, and height, are necessary to further investigate the potential capability of integrating passive and DA or active safety systems.

  19. Integrative methods for analyzing big data in precision medicine.

    Science.gov (United States)

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advances in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource-constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags, where cost and energy constraints drastically limit the solution complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.

  1. A Multi-Objective Optimization Method to integrate Heat Pumps in Industrial Processes

    OpenAIRE

    Becker, Helen; Spinato, Giulia; Maréchal, François

    2011-01-01

    The aim of process integration methods is to increase the efficiency of industrial processes by using pinch analysis combined with process design methods. In this context, appropriate integrated utilities offer promising opportunities to reduce energy consumption, operating costs and pollutant emissions. Energy integration methods are able to integrate any type of predefined utility, but so far there is no systematic approach to generate potential utility models based on their technology limit...

  2. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    Full Text Available After summarizing the advantages and disadvantages of current integral methods, a novel vibration signal integral method based on feature information extraction was proposed. This method took full advantage of the self-adaptive filter characteristic and waveform correction feature of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals. This research merged the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction. The values of these four indexes were combined into a feature vector. Then, the characteristic components contained in the vibration signal were accurately extracted by a Euclidean distance search, and the desired integral signals were precisely reconstructed. With this method, the interference from invalid signal components such as trend items and noise, which plagues traditional methods, is effectively removed. The large cumulative error of the traditional time-domain integration is overcome. Moreover, the large low-frequency error of the traditional frequency-domain integration is successfully avoided. Compared with the traditional integral methods, this method is better at removing noise while retaining useful feature information, and shows higher accuracy.

  3. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real-time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.
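
    The weighted least-squares combination of the N local displacements mentioned above reduces to a standard normal-equations solve; the Jacobian, confidence weights, and displacements below are random stand-ins for illustration, not the output of actual patch experts.

      import numpy as np

      rng = np.random.default_rng(1)
      N, P = 68, 10                      # N landmarks, P shape parameters
      J = rng.normal(size=(2 * N, P))    # Jacobian of landmark positions w.r.t. the parameters
      d = rng.normal(size=2 * N)         # stacked (x, y) displacements from the patch experts
      W = np.diag(rng.uniform(0.5, 1.0, 2 * N))   # per-landmark confidence weights

      # Weighted least squares: dp = (J^T W J)^{-1} J^T W d
      dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ d)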

  4. Multidisciplinary group performance – measuring integration intensity in the context of the North West London Integrated Care Pilot

    Directory of Open Access Journals (Sweden)

    Matthew Harris

    2013-02-01

    Full Text Available Introduction: Multidisciplinary Group meeting (MDGs are seen as key facilitators of integration, moving from individual to multi-disciplinary decision making, and from a focus on individual patients to a focus on patient groups.  We have developed a method for coding MDG transcripts to identify whether they are or are not vehicles for delivering the anticipated efficiency improvements across various providers and apply it to a test case in the North West London Integrated Care Pilot.  Methods:  We defined 'integrating' as the process within the MDG meeting that enables or promotes an improved collaboration, improved understanding, and improved awareness of self and others within the local healthcare economy such that efficiency improvements could be identified and action taken.  Utterances within the MDGs are coded according to three distinct domains grounded in concepts from communication, group decision-making, and integrated care literatures - the Valence, the Focus, and the Level.  Standardized weighted integrative intensity scores are calculated across ten time deciles in the Case Discussion providing a graphical representation of its integrative intensity. Results: Intra- and Inter-rater reliability of the coding scheme was very good as measured by the Prevalence and Bias-adjusted Kappa Score.  Standardized Weighted Integrative Intensity graph mirrored closely the verbatim transcript and is a convenient representation of complex communication dynamics. Trend in integrative intensity can be calculated and the characteristics of the MDG can be pragmatically described. Conclusion: This is a novel and potentially useful method for researchers, managers and practitioners to better understand MDG dynamics and to identify whether participants are integrating.  The degree to which participants use MDG meetings to develop an integrated way of working is likely to require management, leadership and shared values.

  5. Multidisciplinary group performance – measuring integration intensity in the context of the North West London Integrated Care Pilot

    Directory of Open Access Journals (Sweden)

    Matthew Harris

    2013-02-01

    Full Text Available Introduction: Multidisciplinary Group meeting (MDGs are seen as key facilitators of integration, moving from individual to multi-disciplinary decision making, and from a focus on individual patients to a focus on patient groups.  We have developed a method for coding MDG transcripts to identify whether they are or are not vehicles for delivering the anticipated efficiency improvements across various providers and apply it to a test case in the North West London Integrated Care Pilot. Methods:  We defined 'integrating' as the process within the MDG meeting that enables or promotes an improved collaboration, improved understanding, and improved awareness of self and others within the local healthcare economy such that efficiency improvements could be identified and action taken.  Utterances within the MDGs are coded according to three distinct domains grounded in concepts from communication, group decision-making, and integrated care literatures - the Valence, the Focus, and the Level.  Standardized weighted integrative intensity scores are calculated across ten time deciles in the Case Discussion providing a graphical representation of its integrative intensity. Results: Intra- and Inter-rater reliability of the coding scheme was very good as measured by the Prevalence and Bias-adjusted Kappa Score.  Standardized Weighted Integrative Intensity graph mirrored closely the verbatim transcript and is a convenient representation of complex communication dynamics. Trend in integrative intensity can be calculated and the characteristics of the MDG can be pragmatically described. Conclusion: This is a novel and potentially useful method for researchers, managers and practitioners to better understand MDG dynamics and to identify whether participants are integrating.  The degree to which participants use MDG meetings to develop an integrated way of working is likely to require management, leadership and shared values.

  6. Integration of constrained electrical and seismic tomographies to study the landslide affecting the cathedral of Agrigento

    International Nuclear Information System (INIS)

    Capizzi, P; Martorana, R

    2014-01-01

    The Cathedral of Saint Gerland, located on the top of the hill of Agrigento, is an important historical church, which dates back to the Arab–Norman period (XI century). Unfortunately throughout its history the Cathedral and the adjacent famous Archaeological Park of the ‘Valley of the Temples’ have been affected by landslides. In this area the interleaving of calcarenites, silt, sand and clay is complicated by the presence of dislocated rock blocks and cavities and by a system of fractures partly filled with clay or water. Integrated geophysical surveys were carried out on the north side of the hill, on which the Cathedral of Agrigento is founded, to define lithological structures involved in the failure process. Because of the landslide, the cathedral has been affected by fractures, which resulted in the overall instability of the structure. Along each of four footpaths a combination of 2D electrical resistivity tomographies (ERT) and 2D seismic refraction tomographies (SRT) was performed. Moreover, along two of these footpaths microtremor (HVSR) and surface wave soundings (MASW) were carried out to reconstruct 2D sections of shear waves velocity. Furthermore a 3D electrical resistivity tomography was carried out in a limited area characterized by gentle slopes. After a preliminary phase, in which the data were processed independently, a subsequent inversion of seismic and electrical data was constrained with stratigraphic information obtained from geognostic continuous core boreholes located along the geophysical lines. This process allowed us to significantly increase the robustness of the geophysical models. The acquired data were interpolated to construct 3D geophysical models of the electrical resistivity and of the P-wave velocity. The interpolation algorithm took into account the average direction and immersion of geological strata. Results led to a better understanding of the complexity of the subsoil in the investigated area. The use of integrated

  7. A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Universite catholique de Louvain]

    2017-07-25

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
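
    As an illustration of the general prediction-correction idea (not the paper's algorithm), the sketch below tracks the constrained minimizer of a simple time-varying quadratic with box constraints; the objective, step sizes and the finite-difference prediction of the gradient drift are all assumptions made for this example.

        import numpy as np

        # Hypothetical time-varying objective: f(x, t) = 0.5*||x - r(t)||^2 with a
        # moving reference r(t); the goal is to track the argmin over a box constraint.
        def reference(t):
            return np.array([np.cos(t), np.sin(t)])

        def grad(x, t):                      # gradient of f with respect to x
            return x - reference(t)

        def project_box(x, lo=-0.8, hi=0.8): # projection onto the feasible set
            return np.clip(x, lo, hi)

        h, alpha, n_corr = 0.1, 0.5, 3       # sampling period, step size, correction passes
        x = np.zeros(2)
        for k in range(100):
            t_k, t_next = k * h, (k + 1) * h
            # Prediction: approximate the gradient at t_{k+1} via a finite-difference
            # estimate of its time drift (a stand-in for the paper's Hessian-based step).
            drift = (grad(x, t_next) - grad(x, t_k)) / h
            x = project_box(x - alpha * (grad(x, t_k) + h * drift))
            # Correction: a few projected-gradient steps on the new objective f(., t_{k+1}).
            for _ in range(n_corr):
                x = project_box(x - alpha * grad(x, t_next))
            tracking_error = np.linalg.norm(x - project_box(reference(t_next)))
        print(tracking_error)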

  8. Tau method approximation of the Hubbell rectangular source integral

    International Nuclear Information System (INIS)

    Kalla, S.L.; Khajah, H.G.

    2000-01-01

    The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral: I(a,b) = ∫_0^b (1/√(1+x²)) arctan(a/√(1+x²)) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows
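
    For a quick numerical check of I(a,b), the snippet below evaluates the integral by adaptive quadrature and by integrating a crude Chebyshev fit of the integrand; this is only a sanity-check stand-in for the Tau-method expansion, and the values of a, b and the polynomial degree are arbitrary.

        import numpy as np
        from scipy.integrate import quad
        from numpy.polynomial import chebyshev as C

        def hubbell_integrand(x, a):
            s = np.sqrt(1.0 + x**2)
            return np.arctan(a / s) / s

        def I(a, b):
            # Adaptive quadrature as a reference value for I(a, b).
            val, _ = quad(hubbell_integrand, 0.0, b, args=(a,))
            return val

        # Crude Chebyshev stand-in for the Tau-method expansion: interpolate the
        # integrand at mapped Chebyshev points on [0, b] and integrate the series.
        a, b, degree = 1.0, 2.0, 8
        nodes = 0.5 * b * (np.cos(np.pi * np.arange(degree + 1) / degree) + 1.0)
        coeffs = C.chebfit(nodes, hubbell_integrand(nodes, a), degree)
        anti = C.chebint(coeffs)
        series_integral = C.chebval(b, anti) - C.chebval(0.0, anti)
        print(I(a, b), series_integral)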

  9. Indirect methods for wake potential integration

    International Nuclear Information System (INIS)

    Zagorodnov, I.

    2006-05-01

    The development of modern accelerator and free-electron laser projects requires the consideration of wake fields of very short bunches in arbitrary three-dimensional structures. Obtaining the wake numerically by direct integration is difficult, since it takes a long time for the scattered fields to catch up to the bunch. On the other hand, no general algorithm for indirect wake field integration is available in the literature so far. In this paper we review the known indirect methods to compute wake potentials in rotationally symmetric and cavity-like three-dimensional structures. For arbitrary three-dimensional geometries we introduce several new techniques and test them numerically. (Orig.)

  10. Intermittent Fasting: Is the Wait Worth the Weight?

    Science.gov (United States)

    Stockman, Mary-Catherine; Thomas, Dylan; Burke, Jacquelyn; Apovian, Caroline M

    2018-06-01

    We review the underlying mechanisms and potential benefits of intermittent fasting (IF) from animal models and recent clinical trials. Numerous variations of IF exist, and study protocols vary greatly in their interpretations of this weight loss trend. Most human IF studies result in minimal weight loss and marginal improvements in metabolic biomarkers, though outcomes vary. Some animal models have found that IF reduces oxidative stress, improves cognition, and delays aging. Additionally, IF has anti-inflammatory effects, promotes autophagy, and benefits the gut microbiome. The benefit-to-harm ratio varies by model, IF protocol, age at initiation, and duration. We provide an integrated perspective on potential benefits of IF as well as key areas for future investigation. In clinical trials, caloric restriction and IF result in similar degrees of weight loss and improvement in insulin sensitivity. Although these data suggest that IF may be a promising weight loss method, IF trials have been of moderate sample size and limited duration. More rigorous research is needed.

  11. Variational method for integrating radial gradient field

    Science.gov (United States)

    Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo

    2014-12-01

    We propose a variational method for integrating information obtained from a circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup, and then we formulate the problem of recovering the wavefront using techniques from the calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.

  12. First integral method for an oscillator system

    Directory of Open Access Journals (Sweden)

    Xiaoqian Gong

    2013-04-01

    Full Text Available In this article, we consider the nonlinear Duffing-van der Pol-type oscillator system by means of the first integral method. This system has physical relevance as a model in certain flow-induced structural vibration problems, and it includes the van der Pol oscillator and the damped Duffing oscillator as particular cases. First, we apply the Division Theorem for two variables in the complex domain, which is based on the ring theory of commutative algebra, to explore a quasi-polynomial first integral of an equivalent autonomous system. Then, by solving an algebraic system, we derive the first integral of the Duffing-van der Pol-type oscillator system under a certain parametric condition.
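
    The record describes an analytic first-integral construction; as a purely numerical companion, the sketch below integrates one common form of the Duffing-van der Pol equation so that a candidate first integral could be checked for constancy along the computed orbit. The equation's exact form and all coefficients here are assumptions for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        # One common form of the Duffing-van der Pol oscillator (coefficients assumed):
        #   x'' - mu*(1 - x**2)*x' + alpha*x + beta*x**3 = 0
        mu, alpha, beta = 0.2, 1.0, 1.0

        def rhs(t, y):
            x, v = y
            return [v, mu * (1.0 - x**2) * v - alpha * x - beta * x**3]

        sol = solve_ivp(rhs, (0.0, 100.0), [0.5, 0.0], max_step=0.05)
        x, v = sol.y  # trajectory in the (x, x') phase plane

        # A first integral would be a function F(x, v) that stays constant along such
        # trajectories; a trial expression can be evaluated on (x, v) to check how
        # much it varies along the computed orbit.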

  13. Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement

    International Nuclear Information System (INIS)

    Layton, A.T.

    2004-01-01

    In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely-differing time scales may differ in size. (author)

  14. Mature Basin Development Portfolio Management in a Resource Constrained Environment

    International Nuclear Information System (INIS)

    Mandhane, J. M.; Udo, S. D.

    2002-01-01

    The Nigerian petroleum industry is constantly faced with the management of resource constraints stemming from capital and operating budgets, availability of skilled manpower, capacity of existing surface facilities, size of well assets, amount of soft and hard information, and so on. Constrained capital forces the industry to rank subsurface resources and potential before preparing development scenarios. Limited availability of skilled manpower restricts the scope of integrated reservoir studies. The level of information forces technical staff and management to find a low-risk development alternative in a limited time. When the volume of oil, natural gas or water, or a combination of them, is constrained by the design limits of an existing facility or by an external OPEC quota, high portfolio management skills are required. The first part of the paper statistically analyses the development portfolio of a mature basin for (a) subsurface resource volumes, (b) developed and undeveloped volumes, (c) sweating of wells, and (d) facility assets. The analysis presented conclusively demonstrates that the 80/20 principle is active in the statistical sample. The 80/20 principle refers to 80% of the effect coming from 20% of the cause. The second part of the paper deals with how the 80/20 principle can be applied to manage a portfolio for a given set of constraints. Three application examples are discussed. Feedback on their implementation, resulting in focused resource management with handsome rewards, is documented. The statistical analysis and application examples from a mature basin form a way forward for development portfolio management in a resource-constrained environment.
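
    A minimal sketch of the 80/20 screening described above, applied to made-up well production volumes: rank the assets, accumulate their share of the total, and report how few of them deliver 80% of the volume.

        import numpy as np

        # Illustrative well production volumes (made-up numbers).
        volumes = np.array([1200.0, 950.0, 40.0, 35.0, 880.0, 25.0, 20.0, 15.0, 700.0, 10.0])

        order = np.argsort(volumes)[::-1]          # rank wells from largest to smallest
        cum_share = np.cumsum(volumes[order]) / volumes.sum()
        n_for_80 = int(np.searchsorted(cum_share, 0.80) + 1)
        print(f"{n_for_80} of {volumes.size} wells deliver 80% of the volume "
              f"({100.0 * n_for_80 / volumes.size:.0f}% of the assets)")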

  15. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...

  16. New method for calculation of integral characteristics of thermal plumes

    DEFF Research Database (Denmark)

    Zukowska, Daria; Popiolek, Zbigniew; Melikov, Arsen Krikor

    2008-01-01

    A method for calculation of integral characteristics of thermal plumes is proposed. The method allows for determination of the integral parameters of plumes based on speed measurements performed with omnidirectional low velocity thermoanemometers. The method includes a procedure for calculation...... of the directional velocity (upward component of the mean velocity). The method is applied for determination of the characteristics of an asymmetric thermal plume generated by a sitting person. The method was validated in full-scale experiments in a climatic chamber with a thermal manikin as a simulator of a sitting...

  17. A user friendly method for image based acquisition of constraint information during constrained motion of servo manipulator in hot-cells

    International Nuclear Information System (INIS)

    Saini, Surendra Singh; Sarkar, Ushnish; Swaroop, Tumapala Teja; Panjikkal, Sreejith; Ray, Debasish Datta

    2016-01-01

    In a master-slave manipulator, the slave arm is controlled by an operator to manipulate objects in a remote environment using an iso-kinematic master arm located in the control room. In such a scenario, where the actual work environment is separated from the operator, formulating techniques that assist the operator in executing constrained motion (preferential inclusion or preferential exclusion of workspace zones) in the slave environment is not only helpful but essential. We had earlier demonstrated the efficacy of constrained motion with predefined geometrical constraints of various types. However, in a hot-cell scenario the generation of the constraint equations is difficult, since we do not have access to the cell for taking measurements. In this paper, a user friendly method is proposed for image based acquisition of the various constraint geometries, thus eliminating the need to take in-cell measurements. For this purpose, various hot cell tasks and the geometrical primitives they require have been surveyed, and an algorithm has been developed for generating the constraint geometry for each primitive. This methodology shall increase the efficiency and ease of use of the hot cell telemanipulator by providing real time constraint acquisition and subsequent assistive force based constrained motion. (author)

  18. Weighted SGD for ℓp Regression with Randomized Preconditioning*

    Science.gov (United States)

    Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W.

    2018-01-01

    In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computational complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in 𝒪(log n·nnz(A)+poly(d)/ε²) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints. In general this is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and the solution vector measured in
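
    A stripped-down sketch of the weighted-sampling idea for unconstrained ℓ2 regression (not the pwSGD algorithm itself): rows are sampled with probability proportional to their squared norms, a simple stand-in for the leverage-style distribution a randomized preconditioner would provide, and each stochastic gradient is reweighted so that it remains unbiased. The problem data, step schedule and iteration count are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 2000, 10
        A = rng.standard_normal((n, d))
        b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

        # Importance sampling distribution from squared row norms.
        p = (A**2).sum(axis=1)
        p = p / p.sum()

        x = np.zeros(d)
        step = 0.5
        for k in range(1, 20001):
            i = rng.choice(n, p=p)
            # Reweight by 1/(n*p_i) so the stochastic gradient is unbiased for
            # the average least-squares objective (1/n)*||Ax - b||^2.
            g = (A[i] @ x - b[i]) * A[i] / (n * p[i])
            x -= (step / np.sqrt(k)) * g

        print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))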

  19. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-01-01

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.
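
    A hedged sketch of the constrained least-squares step using SciPy's SLSQP solver on made-up moments: the zeroth and first Legendre moments are held fixed while the reconstructed cross section is forced to be nonnegative at a set of discrete scattering cosines. The moment values, number of angles and solver choice are assumptions, not the PARTISN implementation.

        import numpy as np
        from numpy.polynomial.legendre import legval
        from scipy.optimize import minimize

        # Hypothetical truncated Legendre moments of a scattering cross section.
        f0 = np.array([1.0, 0.8, 0.62, 0.45, 0.30, 0.17, 0.08, 0.02])
        mu = np.linspace(-1.0, 1.0, 64)                 # discrete scattering cosines
        scale = (2.0 * np.arange(f0.size) + 1.0) / 2.0  # (2l+1)/2 expansion factors

        def reconstruct(f):
            return legval(mu, scale * f)                # sigma_s(mu) from the moments

        res = minimize(
            lambda f: np.sum((f - f0) ** 2),            # stay close to the original moments
            f0,
            method="SLSQP",
            constraints=[
                {"type": "eq", "fun": lambda f: f[0] - f0[0]},   # preserve zeroth moment
                {"type": "eq", "fun": lambda f: f[1] - f0[1]},   # preserve first moment
                {"type": "ineq", "fun": reconstruct},            # nonnegative at each mu
            ],
        )
        f_positive = res.x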

  20. Data-driven automatic parking constrained control for four-wheeled mobile vehicles

    Directory of Open Access Journals (Sweden)

    Wenxu Yan

    2016-11-01

    Full Text Available In this article, a novel data-driven constrained control scheme is proposed for automatic parking systems. The design of the proposed scheme depends only on the steering angle and the orientation angle of the car, and it does not involve any model information of the car. Therefore, the proposed scheme-based automatic parking system is applicable to different kinds of cars. In order to further reduce the desired trajectory coordinate tracking errors, a coordinates compensation algorithm is also proposed. In the design procedure of the controller, a novel dynamic anti-windup compensator is used to deal with the magnitude and rate saturations of the automatic parking control input. It is theoretically proven that all the signals in the closed-loop system are uniformly ultimately bounded, based on the Lyapunov stability analysis method. Finally, a simulation comparison between the proposed scheme with coordinates compensation and a Proportional-Integral-Derivative (PID) control algorithm is given. It is shown that the proposed scheme with coordinates compensation has smaller tracking errors and more rapid responses than the PID scheme.
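
    To illustrate the magnitude/rate saturation and anti-windup ingredients mentioned above (not the paper's data-driven controller), the sketch below runs a toy orientation-tracking loop with a rate-limited, magnitude-limited input and back-calculation anti-windup; all gains, limits and the kinematic stand-in for the car are placeholders.

        import numpy as np

        dt, kp, ki, kaw = 0.02, 2.0, 1.0, 5.0
        u_max, du_max = 0.6, 2.0            # magnitude (rad) and rate (rad/s) limits
        angle, integ, u_prev = 0.0, 0.0, 0.0

        for k in range(500):
            ref = 0.4                        # desired orientation angle (rad)
            err = ref - angle
            u_raw = kp * err + ki * integ    # unconstrained control
            # Apply the rate limit, then the magnitude limit.
            u = np.clip(u_raw, u_prev - du_max * dt, u_prev + du_max * dt)
            u = np.clip(u, -u_max, u_max)
            # Back-calculation anti-windup: bleed the integrator by the saturation gap.
            integ += dt * (err + kaw * (u - u_raw))
            u_prev = u
            angle += dt * u                  # crude kinematic stand-in for the car response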

  1. Do-it-yourself networks: a novel method of generating weighted networks.

    Science.gov (United States)

    Shanafelt, D W; Salau, K R; Baggio, J A

    2017-11-01

    Network theory is finding applications in the life and social sciences for ecology, epidemiology, finance and social-ecological systems. While there are methods to generate specific types of networks, the broad literature is focused on generating unweighted networks. In this paper, we present a framework for generating weighted networks that satisfy user-defined criteria. Each criterion hierarchically defines a feature of the network and, in doing so, complements existing algorithms in the literature. We use a general example of ecological species dispersal to illustrate the method and provide open-source code for academic purposes.
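
    A minimal illustration of the "choose a topology, then impose a user-defined weight criterion" idea using NetworkX; the Erdős–Rényi topology and log-normal weights are arbitrary choices for the sketch, not the authors' algorithm.

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(1)
        G = nx.erdos_renyi_graph(n=50, p=0.1, seed=1)      # step 1: topology
        for u, v in G.edges():                             # step 2: weight criterion
            G[u][v]["weight"] = rng.lognormal(mean=0.0, sigma=1.0)

        # Downstream analyses can then use the weights directly, e.g. weighted degree.
        strength = dict(G.degree(weight="weight"))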

  2. Improving the Reliability of Network Metrics in Structural Brain Networks by Integrating Different Network Weighting Strategies into a Single Graph

    Directory of Open Access Journals (Sweden)

    Stavros I. Dimitriadis

    2017-12-01

    Full Text Available Structural brain networks estimated from diffusion MRI (dMRI) via tractography have been widely studied in healthy controls and patients with neurological and psychiatric diseases. However, few studies have addressed the reliability of derived network metrics, both node-specific and network-wide. Different network weighting strategies (NWS) can be adopted to weight the strength of connection between two nodes, yielding structural brain networks that are almost fully weighted. Here, we scanned five healthy participants five times each, using a diffusion-weighted MRI protocol, and computed edges between 90 regions of interest (ROI) from the Automated Anatomical Labeling (AAL) template. The edges were weighted according to nine different methods. We propose a linear combination of these nine NWS into a single graph using an appropriate diffusion distance metric. We refer to the resulting weighted graph as an Integrated Weighted Structural Brain Network (ISWBN). Additionally, we consider a topological filtering scheme that maximizes the information flow in the brain network under the constraint of the overall cost of the surviving connections. We compared each of the nine NWS and the ISWBN based on the improvement of: (a) the intra-class correlation coefficient (ICC) of well-known network metrics, both node-wise and at the network level; and (b) the recognition accuracy of each subject compared to the remainder of the cohort, as an attempt to assess the uniqueness of the structural brain network for each subject, after first applying our proposed topological filtering scheme. Based on a threshold where the network-level ICC should be >0.90, our findings revealed that six out of nine NWS lead to unreliable results at the network level, while all nine NWS were unreliable at the node level. In comparison, our proposed ISWBN performed as well as the best performing individual NWS at the network level, and the ICC was higher compared to all individual NWS at the node
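
    A simplified sketch of combining several network weighting strategies into one graph: each adjacency matrix is rescaled and mixed by a convex combination. The published method derives the mixing weights from a diffusion distance metric; here uniform weights and random matrices are used purely for illustration.

        import numpy as np

        # Suppose nws is a list of nine (90 x 90) weighted adjacency matrices, one per
        # network weighting strategy, for the same subject (random stand-ins here).
        rng = np.random.default_rng(0)
        nws = [np.abs(rng.standard_normal((90, 90))) for _ in range(9)]
        nws = [0.5 * (W + W.T) for W in nws]          # enforce symmetry

        # Convex combination after rescaling each strategy to [0, 1]; uniform mixing
        # weights stand in for the diffusion-distance-derived weights of the paper.
        mix = np.full(9, 1.0 / 9.0)
        rescaled = [W / W.max() for W in nws]
        iswbn = sum(a * W for a, W in zip(mix, rescaled))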

  3. A Mixed Methods Evaluation of a 12-Week Insurance-Sponsored Weight Management Program Incorporating Cognitive-Behavioral Counseling

    Science.gov (United States)

    Abildso, Christiaan; Zizzi, Sam; Gilleland, Diana; Thomas, James; Bonner, Daniel

    2010-01-01

    Physical activity is critical in healthy weight loss, yet there is still much to be learned about psychosocial mechanisms of physical activity behavior change in weight loss. A sequential mixed methods approach was used to assess the physical and psychosocial impact of a 12-week cognitive-behavioral weight management program and explore factors…

  4. A constrained variational calculation for beta-stable matter

    International Nuclear Information System (INIS)

    Howes, C.; Bishop, R.F.; Irvine, J.M

    1978-01-01

    A method of lowest-order constrained variation previously applied by the authors to asymmetric nuclear matter is extended to include electrons and muons making the nucleon fluid electrically neutral and stable against beta decay. The equilibrium composition of a nucleon fluid is calculated as a function of baryon number density and an equation of state for beta-stable matter is deduced for the Reid soft-core interaction. (author)

  5. Measuring Spatial Distribution Characteristics of Heavy Metal Contaminations in a Network-Constrained Environment: A Case Study in River Network of Daye, China

    Directory of Open Access Journals (Sweden)

    Zhensheng Wang

    2017-06-01

    Full Text Available Measuring the spatial distribution of heavy metal contaminants is the basis of pollution evaluation and risk control. Considering the cost of soil sampling and analysis, spatial interpolation methods have been widely applied to estimate heavy metal concentrations at unsampled locations. However, traditional spatial interpolation methods assume that the sample sites can be located stochastically on a plane, and the spatial association between sample locations is analyzed using Euclidean distances, which may lead to biased conclusions in some circumstances. This study aims to analyze the spatial distribution characteristics of copper and lead contamination in river sediments of Daye using network spatial analysis methods. The results demonstrate that network inverse distance weighted interpolation methods are more accurate than planar interpolation methods. Furthermore, the method named local indicators of network-constrained clusters based on the local Moran's I statistic (ILINCS) is applied to explore the local spatial patterns of copper and lead pollution in river sediments, which is helpful for identifying the contaminated areas and assessing heavy metal pollution of Daye.
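
    A small sketch of network-constrained inverse distance weighting on a toy river graph: distances between sites are taken along the network (shortest paths) rather than as straight-line separations. The graph, concentration values and power parameter are made up for illustration.

        import networkx as nx

        # Toy river network: nodes are sites, edge weights are channel distances (km).
        G = nx.Graph()
        G.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 1.5), (1, 3, 3.0), (3, 4, 2.5)])
        copper = {0: 35.0, 2: 80.0, 4: 55.0}          # measured concentrations (made up)

        def network_idw(G, samples, target, power=2.0):
            # Inverse distance weighting with distances measured along the network.
            if target in samples:
                return samples[target]
            num = den = 0.0
            for site, value in samples.items():
                d = nx.shortest_path_length(G, site, target, weight="weight")
                w = 1.0 / d ** power
                num += w * value
                den += w
            return num / den

        print(network_idw(G, copper, target=3))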

  6. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  7. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  8. High resolution integral holography using Fourier ptychographic approach.

    Science.gov (United States)

    Li, Zhaohui; Zhang, Jianqi; Wang, Xiaorui; Liu, Delian

    2014-12-29

    An innovative approach is proposed for calculating high resolution computer generated integral holograms by using the Fourier Ptychographic (FP) algorithm. The approach initializes a high resolution complex hologram with a random guess, and then stitches together low resolution multi-view images, synthesized from the elemental images captured by integral imaging (II), to recover the high resolution hologram through an iterative retrieval with FP constraints. This paper begins with an analysis of the principle of hologram synthesis from multi-projections, followed by an accurate determination of the constraints required in Fourier ptychographic integral-holography (FPIH). Next, the procedure of the approach is described in detail. Finally, optical reconstructions are performed and the results are demonstrated. Theoretical analysis and experiments show that our proposed approach can reconstruct 3D scenes with high resolution.

  9. A combined approach of AHP and TOPSIS methods applied in the field of integrated software systems

    Science.gov (United States)

    Berdie, A. D.; Osaci, M.; Muscalagiu, I.; Barz, C.

    2017-05-01

    Adopting the most appropriate technology for developing applications on an integrated software system for enterprises may result in great savings, both in cost and in hours of work. This paper proposes a research study for the determination of a hierarchy between three SAP (System Applications and Products in Data Processing) technologies. The technologies Web Dynpro (WD), Floorplan Manager (FPM) and CRM WebClient UI (CRM WCUI) are evaluated against multiple criteria in terms of the performance obtained through the implementation of the same web business application. To establish the hierarchy, a multi-criteria analysis model that combines the AHP (Analytic Hierarchy Process) and the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods was proposed. This model was built with the help of the SuperDecision software, which is based on the AHP method and determines the weights for the selected sets of criteria. The TOPSIS method was used to obtain the final ranking and the hierarchy of the technologies.
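
    A compact TOPSIS sketch for ranking three alternatives under weighted criteria; the decision matrix, criterion weights (which in the study come from AHP) and the benefit/cost labels are placeholders, not the paper's data.

        import numpy as np

        # Rows = alternatives (e.g. WD, FPM, CRM WCUI), columns = criteria.
        X = np.array([[7.0, 120.0, 3.0],
                      [8.0,  90.0, 4.0],
                      [6.0, 150.0, 5.0]])
        weights = np.array([0.5, 0.3, 0.2])           # e.g. taken from an AHP comparison
        benefit = np.array([True, False, True])       # the second criterion is a cost

        R = X / np.sqrt((X**2).sum(axis=0))           # vector-normalize each criterion
        V = R * weights                               # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)           # rank by descending closeness
        print(closeness.argsort()[::-1])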

  10. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    Science.gov (United States)

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  11. Derivation of weighting factors for cost and radiological impact for use in comparison of waste management methods

    International Nuclear Information System (INIS)

    Allen, P.T.; Lee, T.R.

    1991-01-01

    Nuclear waste management decisions are complex and must include considerations of cost and social factors in addition to dose limitation. Decision-aiding techniques, such as multi-attribute analysis, can assist in structuring the problem and can incorporate as many factors, or attributes, as required. However, the relative weights of such attributes need to be established. Methods were devised which could be compared with one another. These were questionnaire-based but, in order to examine the possible influence of the measurement procedures on the results, two of the methods were combined in an experimental design. The two direct methods for obtaining weights (the conventional rating scales and the direct rating task) showed good agreement and yielded different values for separate social groups, such as industrial employees and lay public. The main conclusion is that the elicitation of weighting factors from the public is possible and that the resulting weights are meaningful and could have significant effects on the choice of waste management options.

  12. A Study of a Load Cell Based High Speed Weighting Method for a Potato Sorter

    International Nuclear Information System (INIS)

    Yang, Jong Hoon

    2002-02-01

    Potatoes, together with tangerines, are one of the major agricultural products in Jeju, and their production accounts for more than 30% of the domestic production. Recently, some kinds of sorting machines for potatoes have become available, but they are not extensively used because their performance is not satisfactory and/or they are very expensive. This paper presents a load cell based high speed weighting method for sorting potatoes. The method is based on the fact that the linear momentum of a potato is proportional to its mass. To test the performance of the weighting system, we developed a load cell based automatic sorting system for potatoes. The system does not adopt an additional mechanism for weighting the potato, such as a cup conveyer; it uses ordinary flat conveyers, so the cost of establishment and maintenance will be lower than that of other systems. Through sets of experiments, the developed weighting system proved to be very reliable, and its performance is good enough for use in a practical sorting system.

  13. A systematic and efficient method to compute multi-loop master integrals

    Science.gov (United States)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but can also be much faster than sector decomposition, the only other existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.

  14. Computation of rectangular source integral by rational parameter polynomial method

    International Nuclear Information System (INIS)

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), has been solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method, the integral I(a,b) is expressed in the form of a polynomial of a rational parameter. Generally, a function f(x) is expressed in terms of x; in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x compared with the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth degree polynomial and are found to be in good agreement with the results obtained by numerical integration. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results for H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively.

  15. Learning to Recommend Point-of-Interest with the Weighted Bayesian Personalized Ranking Method in LBSNs

    Directory of Open Access Journals (Sweden)

    Lei Guo

    2017-02-01

    Full Text Available Point-of-interest (POI) recommendation has been well studied in recent years. However, most of the existing methods focus on recommendation scenarios where users can provide explicit feedback. In most cases, however, the feedback is not explicit, but implicit. For example, we can only get a user's check-in behavior from the history of which POIs she/he has visited, but never know how much she/he likes them or why she/he does not like others. Recently, some researchers have noticed this problem and begun to learn user preferences from the partial order of POIs. However, these works give equal weight to each POI pair and cannot distinguish the contributions from different POI pairs. Intuitively, for the two POIs in a POI pair, the larger the difference in their visit frequencies and the farther the geographical distance between them, the higher the contribution of this POI pair to the ranking function. Based on the above observations, we propose a weighted ranking method for POI recommendation. Specifically, we first introduce a Bayesian personalized ranking criterion designed for implicit feedback to POI recommendation. To fully utilize the partial order of POIs, we then treat the cost function in a weighted way, that is, we give each POI pair a different weight according to the frequency with which the POIs are visited and the geographical distance between them. Data analysis and experimental results on two real-world datasets demonstrate the existence of user preference on different POI pairs and the effectiveness of our weighted ranking method.
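
    A minimal sketch of one weighted BPR update for matrix-factorization-style POI recommendation: the pair weight w stands in for the frequency-gap and distance-based weighting described above, and all dimensions, learning rates and indices are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_users, n_pois, k = 100, 300, 16
        U = 0.01 * rng.standard_normal((n_users, k))   # user latent factors
        V = 0.01 * rng.standard_normal((n_pois, k))    # POI latent factors

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def wbpr_update(u, i, j, w, lr=0.05, reg=0.01):
            # One weighted BPR step for user u, a visited POI i and an unvisited POI j;
            # w encodes the pair's importance (e.g. larger for a big visit-frequency gap
            # and a large geographical distance between i and j).
            u_f, i_f, j_f = U[u].copy(), V[i].copy(), V[j].copy()
            g = w * sigmoid(-(u_f @ (i_f - j_f)))      # weighted gradient scale
            U[u] += lr * (g * (i_f - j_f) - reg * u_f)
            V[i] += lr * (g * u_f - reg * i_f)
            V[j] += lr * (-g * u_f - reg * j_f)

        wbpr_update(u=3, i=42, j=250, w=1.7)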

  16. Inverse probability weighting in STI/HIV prevention research: methods for evaluating social and community interventions

    Science.gov (United States)

    Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.

    2011-01-01

    Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on the reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for the use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared to non-participants after applying inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between an odds ratio (OR) of 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
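
    A toy end-to-end IPW sketch on synthetic data (not the study's dataset or models): fit a propensity model for participation, form stabilized weights, and compare weighted outcomes between groups. The covariates, outcome model and weighted risk-difference summary are assumptions for illustration only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        covariates = rng.standard_normal((n, 3))                  # baseline covariates
        participated = rng.binomial(1, 1 / (1 + np.exp(-covariates[:, 0])))
        infected = rng.binomial(1, 0.2 - 0.05 * participated)     # synthetic outcome

        # Step 1: model the probability of participating given covariates.
        ps = LogisticRegression().fit(covariates, participated).predict_proba(covariates)[:, 1]

        # Step 2: stabilized inverse probability weights.
        p_marg = participated.mean()
        w = np.where(participated == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

        # Step 3: weighted outcome comparison (a weighted logistic model would give an
        # adjusted odds ratio; a weighted risk difference is shown here for brevity).
        r1 = np.average(infected[participated == 1], weights=w[participated == 1])
        r0 = np.average(infected[participated == 0], weights=w[participated == 0])
        print(r1 - r0)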

  17. Temporal trends in pregnancy weight gain and birth weight in Bavaria 2000–2007: slightly decreasing birth weight with increasing weight gain in pregnancy

    OpenAIRE

    Schiessl, Barbara; Beyerlein, Andreas; Lack, Nicholas; Kries, Rüdiger von

    2009-01-01

    Aims: To assess temporal trends in birth weight and pregnancy weight gain in Bavaria from 2000 to 2007. Methods: Data on 695,707 mother and infant pairs (singleton term births) were available from a compulsory reporting system for quality assurance, including information on birth weight, maternal weight at delivery and at booking, maternal smoking, age, and further anthropometric and lifestyle factors. Pregnancy weight gain was defined as: weight prior to delivery minus weight at first booki...

  18. Promoting weight loss methods in parenting magazines: Implications for women.

    Science.gov (United States)

    Basch, Corey H; Roberts, Katherine J; Samayoa-Kozlowsky, Sandra; Glaser, Debra B

    2016-01-01

    Weight gain before and after pregnancy is important for women's health. The purpose of this study was to assess articles and advertisements related to weight loss in three widely read parenting magazines, "Parenting School Years," "Parenting Early Years," and "Parenting," which have an estimated combined readership of approximately 24 million (mainly women readers). Almost a quarter (23.7%, n = 32) of the 135 magazine issues over a four year period included at least one feature article on weight loss. A variety of topics were covered in the featured articles, with the most frequent topics being on losing weight to please yourself (25.2%), healthy ways to lose weight (21.1%), and how to keep the weight off (14.7%). Less than half (45.9%) of the articles displayed author credentials, such as their degree, qualifications, or expertise. A fifth (20.0%, n = 27) of the magazines included at least one prominent advertisement for weight loss products. Almost half (46.9%) of the weight loss advertisements were for weight loss programs followed by weight loss food products (25.0%), weight loss aids (21.9%), and only 6.2% of the advertisements for weight loss were on fitness. Parenting magazines should advocate for healthy weight loss, including lifestyle changes for sustained health.

  19. A study of compositional verification based IMA integration method

    Science.gov (United States)

    Huang, Hui; Zhang, Guoquan; Xu, Wanmeng

    2018-03-01

    The rapid development of avionics systems is driving the application of integrated modular avionics (IMA) systems. While this improves avionics system integration, it also increases the complexity of system testing, so the methods used for IMA system testing need to be simplified. An IMA system provides a modular platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, it is difficult to isolate failures in an IMA system. Therefore, the critical problem faced by IMA system verification is how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can readily cover the whole system, but for a complex system it is hard to exhaustively test a large, integrated avionics system. This paper therefore proposes applying compositional verification theory to IMA system testing, reducing the number of test processes, improving efficiency, and consequently lowering the costs of IMA system integration.

  20. Weighted similarity-based clustering of chemical structures and bioactivity data in early drug discovery.

    Science.gov (United States)

    Perualila-Tan, Nolen Joy; Shkedy, Ziv; Talloen, Willem; Göhlmann, Hinrich W H; Moerbeke, Marijke Van; Kasim, Adetayo

    2016-08-01

    The modern process of discovering candidate molecules in the early drug discovery phase includes a wide range of approaches to extract vital information from the intersection of biology and chemistry. A typical strategy in compound selection involves compound clustering based on chemical similarity to obtain representative, chemically diverse compounds (not incorporating potency information). In this paper, we propose an integrative clustering approach that makes use of both biological (compound efficacy) and chemical (structural features) data sources for the purpose of discovering a subset of compounds with aligned structural and biological properties. The datasets are integrated at the similarity level by assigning complementary weights to produce a weighted similarity matrix, serving as a generic input to any clustering algorithm. This new analysis workflow is a semi-supervised method since, after the determination of clusters, a secondary analysis is performed to find differentially expressed genes associated with the derived integrated cluster(s), further explaining the compound-induced biological effects inside the cell. In this paper, datasets from two drug development oncology projects are used to illustrate the usefulness of the weighted similarity-based clustering approach to integrate multi-source high-dimensional information to aid drug discovery. Compounds that are structurally and biologically similar to the reference compounds are discovered using this proposed integrative approach.
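
    A small sketch of the similarity-level integration: two placeholder similarity matrices (chemical and biological) are mixed with complementary weights, converted to a distance, and fed to hierarchical clustering. The weight alpha, matrix contents and cluster count are assumptions, not the paper's data.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(0)
        n_compounds = 40

        def random_similarity(n):
            # Placeholder for a real similarity matrix (e.g. from fingerprints or
            # expression profiles), symmetric with unit diagonal, values in [0, 1].
            S = rng.uniform(size=(n, n))
            S = 0.5 * (S + S.T)
            np.fill_diagonal(S, 1.0)
            return S

        S_chem, S_bio = random_similarity(n_compounds), random_similarity(n_compounds)

        alpha = 0.6                                      # complementary weights alpha, 1 - alpha
        S = alpha * S_chem + (1.0 - alpha) * S_bio       # weighted similarity matrix
        D = squareform(1.0 - S, checks=False)            # condensed distance for clustering
        clusters = fcluster(linkage(D, method="average"), t=4, criterion="maxclust")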